sql server – Is it possible to “bind” sp_executesql to the calling procedure, as recorded in the Query Store?

Statements executed with sp_executesql appear to be generally “unbound” from the procedure that issues them. By “bind”, I simply mean “associated with the calling object”.

The goal is to make it simpler to associate statements in the Query Store:

select object_name(q.object_id) as [Statement Context]
from sys.query_store_query as q
where 1=1
    and object_name(q.object_id) like 'This will be the procedure name for ''normal'' statements'

While it’s possible to embed comments in the statements that will show up in the query_sql_text, this feels a bit extra-hackish.
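For reference, a minimal sketch of that comment-tagging workaround (the tag text and the procedure name dbo.usp_MyProc are made up):

-- Inside the procedure, prefix the dynamic batch with a searchable tag:
declare @sql nvarchar(max) =
    N'/* caller: dbo.usp_MyProc */ select 1 as placeholder_result;';
exec sp_executesql @sql;

-- Later, find those statements in the Query Store by the tag:
select q.query_id, qt.query_sql_text
from sys.query_store_query as q
join sys.query_store_query_text as qt
    on qt.query_text_id = q.query_text_id
where qt.query_sql_text like N'%/* caller: dbo.usp_MyProc */%';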

Also, it seems that sp_executesql would need some form of context binding as the dynamic SQL can access non-global temp tables in the surrounding scope: without a binding, how can SQL Server ensure the validity & stability of the temp table schema in the created plans?

How can I query an on-premise SQL Server database from an Azure SQL database using a stored procedure?

I have an Azure SQL database, and it has a stored procedure in which I am trying to join a table that resides in an on-premise SQL Server database.
Essentially, I am trying to query a table that sits in an on-premise SQL Server's database.

Are there any options for making cross-server queries from an Azure SQL database?
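For what it's worth, one option to check is elastic query (external tables). The sketch below shows the pattern between two Azure SQL databases, which is what the feature supports; an on-premise server is not directly reachable this way, so the table would first have to be surfaced in an Azure SQL database (or a Managed Instance with a linked server used instead). All names, the location, and the credentials are placeholders:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCred
    WITH IDENTITY = 'remote_login', SECRET = '<RemotePassword1!>';

CREATE EXTERNAL DATA SOURCE RemoteSource
    WITH (
        TYPE = RDBMS,
        LOCATION = 'remoteserver.database.windows.net',
        DATABASE_NAME = 'RemoteDb',
        CREDENTIAL = RemoteCred
    );

-- Local metadata for the remote table; it can then be joined like a local table.
CREATE EXTERNAL TABLE dbo.RemoteTable (
    id   int           NOT NULL,
    name nvarchar(100) NULL
)
WITH (DATA_SOURCE = RemoteSource);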

Can a stored procedure increase performance on single-statement queries?

Let’s assume a trivial update query

UPDATE  t1
LEFT JOIN
        t2
ON      t2.id = t1.id
SET     t1.col1 = 'foo', t2.col1 = 'bar'
WHERE   t2.id IS NULL

Would wrapping the statement in a stored procedure significantly reduce execution time, considering the query would be invoked at a rate of 10,000 calls per minute?
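For concreteness, "wrapping the statement" would amount to something like the following (the procedure name sp_update_foo is made up); each of the 10,000 calls per minute then becomes a CALL instead of sending the raw UPDATE:

DELIMITER $$
CREATE PROCEDURE sp_update_foo()
BEGIN
    UPDATE  t1
    LEFT JOIN
            t2
    ON      t2.id = t1.id
    SET     t1.col1 = 'foo', t2.col1 = 'bar'
    WHERE   t2.id IS NULL;
END$$
DELIMITER ;

-- Each invocation then becomes:
CALL sp_update_foo();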

mysql – Stored procedure retrieve

Is there any possibility to retrieve the stored procedure condition that was applied for an insertion, after the insert? (MySQL, PHP.)

You have to drop it and create it anew.

Also, note that DELIMITER is a mysql command-line client directive: if you run the script below in the mysql client you have to add it yourself, while PHP/mysqli sends the bare CREATE PROCEDURE as a single statement (no DELIMITER needed).

So the code must look like this:

DROP PROCEDURE IF EXISTS Insertion;
DELIMITER $$
CREATE PROCEDURE Insertion(IN firstname varchar(40),IN lastname varchar(40),IN email varchar(40),IN department varchar(40),IN doj date,IN basicpay int(11))
BEGIN
  
  DECLARE HRA decimal(20,2);
  DECLARE DA decimal(20,2);
  DECLARE PF decimal(20,2);
  DECLARE NET_SALARY decimal(20,2);
  
  IF department = 'HUMAN RESOURCE' THEN
  SET HRA = (5/100)*basicpay;
  SET DA = (7/100)*basicpay;
  SET PF = (10/100)*basicpay;
  
  ELSEIF department = 'MARKETING' THEN
  SET HRA = (5/100)*basicpay;
  SET DA = (7/100)*basicpay;
  SET PF = (10/100)*basicpay;
  
  ELSEIF department = 'PRODUCTION' THEN
  SET HRA = (5/100)*basicpay;
  SET DA = (7/100)*basicpay;
  SET PF = (10/100)*basicpay;
  
  ELSEIF department = 'FINANCE AND ACCOUNTING' THEN
  SET HRA = (5/100)*basicpay;
  SET DA = (7/100)*basicpay;
  SET PF = (10/100)*basicpay;
  
  
  ELSE 
  SET HRA = (5/100)*basicpay;
  SET DA = (7/100)*basicpay;
  SET PF = (10/100)*basicpay;
  
  
  END IF; 
SET NET_SALARY = basicpay+HRA + DA + PF;
    
  insert into employees(FIRST_NAME,LAST_NAME,EMAIL,DEPARTMENT,DATE_OF_JOINING,BASIC_PAY,HRA,DA,PF,NET_SALARY)
  values(firstname,lastname,email,department,doj,basicpay,HRA,DA,PF,NET_SALARY);  
END$$
DELIMITER ;
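
If the goal is only to read back the conditions currently defined in the procedure (rather than change them), the definition can be retrieved without dropping anything, for example:

-- Show the full body of the procedure, including the IF/ELSEIF conditions:
SHOW CREATE PROCEDURE Insertion;

-- Or query the data dictionary:
SELECT ROUTINE_DEFINITION
FROM   information_schema.ROUTINES
WHERE  ROUTINE_SCHEMA = DATABASE()
  AND  ROUTINE_NAME   = 'Insertion'
  AND  ROUTINE_TYPE   = 'PROCEDURE';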

How to get the applied stored procedure for an inserted row in a MySQL PHP query

$query = "CREATE PROCEDURE IF NOT EXISTS Insertion(IN firstname varchar(40),IN lastname varchar(40),IN email varchar(40),IN department varchar(40),IN doj date,IN basicpay int(11))
BEGIN

DECLARE HRA decimal(20,2);
DECLARE DA decimal(20,2);
DECLARE PF decimal(20,2);
DECLARE NET_SALARY decimal(20,2);

IF department = 'HUMAN RESOURCE' THEN
SET HRA = (5/100)*basicpay;
SET DA = (7/100)*basicpay;
SET PF = (10/100)*basicpay;

ELSEIF department = 'MARKETING' THEN
SET HRA = (5/100)*basicpay;
SET DA = (7/100)*basicpay;
SET PF = (10/100)*basicpay;

ELSEIF department = 'PRODUCTION' THEN
SET HRA = (5/100)*basicpay;
SET DA = (7/100)*basicpay;
SET PF = (10/100)*basicpay;

ELSEIF department = 'FINANCE AND ACCOUNTING' THEN
SET HRA = (5/100)*basicpay;
SET DA = (7/100)*basicpay;
SET PF = (10/100)*basicpay;

ELSE
SET HRA = (5/100)*basicpay;
SET DA = (7/100)*basicpay;
SET PF = (10/100)*basicpay;

END IF;
SET NET_SALARY = basicpay+HRA + DA + PF;

insert into employees(FIRST_NAME,LAST_NAME,EMAIL,DEPARTMENT,DATE_OF_JOINING,BASIC_PAY,HRA,DA,PF,NET_SALARY)
values(firstname,lastname,email,department,doj,basicpay,HRA,DA,PF,NET_SALARY);

END";
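
Once the procedure exists, each row is inserted by calling it with the raw values (the values below are placeholders):

CALL Insertion('Jane', 'Doe', 'jane.doe@example.com', 'MARKETING', '2020-01-15', 30000);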

sql server – SQL DB corruption recovery procedure

I'd like to have a discussion on the topic of DB corruption. We run DBCC CHECKDB weekly, and most of the time this SQL job completes without errors. When it doesn't, however, that usually means a big headache for any DBA.

The recommended approach is to check whether the corruption can be repaired without data loss, or else restore from the last good copy.

My question is about the procedure for restoring from the last good copy.

Our backup strategy is a weekly full backup on Sunday, a differential backup daily, and transaction log backups hourly.

Say the weekly integrity check indicates errors:

  1. How do I determine the last good backup?
  2. Once the last good backup is determined (say the corruption is found on Wednesday), should I restore last week's full backup + Tuesday's diff backup + all tlog backups taken after Tuesday's diff, up to the current time?
  3. Should I use the REPLACE option?
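
A minimal sketch of the restore sequence described in question 2, assuming the Sunday full and Tuesday diff are both undamaged (database name and file paths are placeholders):

-- Optionally check that the backup media is readable first:
RESTORE VERIFYONLY FROM DISK = N'X:\backups\MyDb_full_sunday.bak';

-- 1. Last known-good full backup; REPLACE because the damaged database is being overwritten.
RESTORE DATABASE MyDb
    FROM DISK = N'X:\backups\MyDb_full_sunday.bak'
    WITH NORECOVERY, REPLACE;

-- 2. Most recent differential taken before the corruption appeared.
RESTORE DATABASE MyDb
    FROM DISK = N'X:\backups\MyDb_diff_tuesday.bak'
    WITH NORECOVERY;

-- 3. Every log backup taken after that differential, in order.
RESTORE LOG MyDb
    FROM DISK = N'X:\backups\MyDb_log_tue_2300.trn'
    WITH NORECOVERY;
-- (repeat the RESTORE LOG step for each subsequent log backup)

-- 4. Bring the database online.
RESTORE DATABASE MyDb WITH RECOVERY;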

SQL Server Always On Availability Group Zero Downtime Update Procedure

Here I have a 2-node SQL Server 2016 Always On Availability Group cluster, with 1 primary and 1 secondary.

The question is – what is the optimal way to install Microsoft Updates on the servers in the cluster? I have struggled to find good, clear recommendations on this.

Here is my current thinking:

  1. Install updates using Microsoft Update on the secondary
  2. Restart the secondary to finish the updates
  3. Perform a manual failover from the Primary to the Secondary
  4. Install updates using Microsoft Update on the new-secondary (former primary)
  5. Restart the new-secondary (former primary)
  6. Perform a manual failover from the new-Primary to the Secondary, making the original primary the primary again
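
For reference, the manual failover in steps 3 and 6 is a single statement run on the secondary that is about to become primary; it completes without data loss only while that replica is synchronous-commit and SYNCHRONIZED (the availability group name AG1 is a placeholder):

-- Run on the target secondary replica:
ALTER AVAILABILITY GROUP [AG1] FAILOVER;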

My understanding is that this will:

  • Cause absolutely no application downtime
  • Cause no syncing errors
  • Cause no data corruption
  • Not cause the cluster to generate errors while half the nodes are updated and the other half are not

Is this correct? Is there a better way to do this?

Thanks in advance – any help is greatly appreciated.

Procedure for creating a command-line tool in C or C++ that includes all recommended packaging for all platforms

One reason I might want to create a command-line tool in NodeJS is that I can follow a simple procedure such as this:
https://blog.bitsrc.io/how-to-build-a-command-line-cli-tool-in-nodejs-b8072b291f81

And then I can tell people "go do npm install -g cooltool" and my cool tool is on their computer, ready to be used from the command line just like git or 7zip, placed in the PATH and everything. And it'll work on any computer NodeJS is on.

But I'd rather do this in C/C++. Is there a guide for doing this such that one can write a simple console application (with no dependencies) that will work on any platform? It's okay if it's limited to gcc and/or CMake, for instance.

But what I would also like is for all the packaging to "just work", including publishing the packages in the places expected by users of the given platform. Would that be apt-get? Chocolatey? Can it auto-create an MSI or setup.exe? How about a DMG for macOS?

I feel like this is a solved problem that someone must have already spent the time addressing. A programmer should be able to write a C file, compile it, and, if it works, press a button and have all the rest (packaging, etc.) just happen.

This could perhaps even be a cloud service (I wouldn't mind paying for it).

stored procedure – MySQL – Convert rows to columns

I need to query two tables, where the contents of one of them (its rows) must be displayed as columns.

The desired result is shown in an image (not reproduced here).

I have tried doing it with a counter, but I can't get the counter to reset when the order number changes; I also tried a subquery, without success.

P.S.: It should only show the first 5 incidents, even if there are 8 or 9.

Here is a script with the data needed for testing:

CREATE TABLE `tmp_guia` (
  `sguia_numero_pedido` varchar(15) NOT NULL DEFAULT '',
  `sguia_hoja_ruta` varchar(10) NOT NULL DEFAULT '',
  `satencion_persona` varchar(20) NOT NULL DEFAULT '1'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

insert tmp_guia values ("N001", "HB001", "JOSE PEREZ");
insert tmp_guia values ("N003", "HB003", "JAVIER SOLIS");
insert tmp_guia values ("N002", "HB002", "MARIA ROSARIO");

CREATE TABLE `tmp_guia_incidencia` (
  `sguia_numero_pedido` varchar(15) NOT NULL DEFAULT '',
  `sguia_item` char(02) NOT NULL DEFAULT '',
  `sguia_incidencia` varchar(20) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

insert tmp_guia_incidencia values ("N001", "01", "FUERA DE LUGAR");
insert tmp_guia_incidencia values ("N003", "03", "CANCELADO");
insert tmp_guia_incidencia values ("N003", "02", "NO HUBO COORDINACION");
insert tmp_guia_incidencia values ("N002", "01", "INICIADO");
insert tmp_guia_incidencia values ("N003", "01", "INICIADO");
insert tmp_guia_incidencia values ("N002", "02", "FINALIZADO");

simulation – What is the procedure for performing a binning analysis in Monte Carlo, or more generally, estimating autocorrelation times?

I'm working on a Monte Carlo project similar to the Ising model. I've found many examples on which I've based my code: https://github.com/danielsela42/MC_TBG_Model/blob/master/mc_project/mcproj_binned.py (my code).

From some papers I read on binning analysis, the errors after each binning step are supposed to converge. Mine ended up oscillating after some binning step, and so I'm getting negative autocorrelation times.
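
For reference, the standard binning estimator (written from the usual definitions, not taken from the linked code) is

\epsilon^2(k) = \frac{1}{N_B (N_B - 1)} \sum_{b=1}^{N_B} \left( \bar{x}_b - \bar{x} \right)^2, \qquad N_B = \lfloor N / k \rfloor,

where \bar{x}_b is the mean of the b-th non-overlapping bin of length k. As k grows, \epsilon^2(k) should increase and then plateau, and the integrated autocorrelation time is read off from the plateau relative to the naive k = 1 estimate:

\tau_{\mathrm{int}} \approx \frac{1}{2} \left( \frac{\epsilon^2(k \to \infty)}{\epsilon^2(1)} - 1 \right).

A negative \tau_{\mathrm{int}} just means \epsilon^2(k) fell below \epsilon^2(1), which typically happens once k is so large that only a handful of bins remain and the estimate itself becomes noisy.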

I was hoping someone could either verify my procedure is correct, or explain a good procedure for dealing with correlated sampling.

Thank you in advance for the help!

This is my first time here. If this is not the right place to post this, where would be better?