SQL Server – Does clustered index key order impose the same seek constraints as a nonclustered index?

I have a table with a lot of data and I want to use the following clustered index:

(account_id, group_id, run_id, page_id, date)

For a nonclustered index, I know that if my WHERE filters involve only account_id, page_id, and date, the index can only seek on account_id, because I skip group_id and run_id. Does the same apply to a clustered index? I often run a query of the form:

SELECT * 
FROM my_table
WHERE ad_account_id = %d
  AND page_id = %d
  AND date >= DATE(%s)
  AND date <= DATE(%s)

But sometimes I have questions like:

SELECT * 
FROM my_table
WHERE ad_account_id = %d
  AND group_id = %d
  AND date >= DATE(%s)
  AND date <= DATE(%s)

or

SELECT * 
FROM my_table
WHERE ad_account_id = %d
  AND group_id = %d
  AND run_id = %d
  AND date >= DATE(%s)
  AND date <= DATE(%s)

It is not obvious to me whether any of these can use the clustered index beyond the given account_id value (I'm fairly sure none of them is a full table scan, but I'm not even certain of that).
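One way to see how far the seek actually goes is to inspect the seek predicates in the actual execution plan, or to compare logical reads. A diagnostic sketch (values are placeholders):

SET STATISTICS IO ON;

-- With the key (account_id, group_id, run_id, page_id, date), the seek
-- predicate can cover account_id only; page_id and date then become
-- residual predicates applied while scanning that account's range.
SELECT *
FROM my_table
WHERE account_id = 1
  AND page_id = 42
  AND date >= '2020-01-01'
  AND date <  '2020-02-01';

If logical reads stay close to the size of one account's range rather than the whole table, the query is a range seek on the leading column rather than a full scan.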

How do I change or add to a schema in a SQL Server Always On availability group?

I have a 2-node Always On availability group cluster on SQL Server 2016. I wonder how to add a new table, or change the schema of an existing table, in an availability group database without downtime.

Is it fully automatic? I.e. do I just go ahead and create my table on the primary in an availability database, and it will then replicate to the secondary? Or will something strange happen? I am concerned that there may be cases where this does not work and replication between primary and secondary stops. For example, what happens if SQL Server Management Studio decides to drop the table and recreate it? Does that work as expected? Are there any pitfalls to keep in mind?
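For context, DDL against an availability database is just another logged operation, so a change like the following (run on the primary; the table name is a placeholder) is redone on the secondaries automatically:

-- Run on the primary replica; the log records for this DDL flow to the
-- secondaries through the availability group just like any DML.
ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;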

Thank you in advance and any help is much appreciated. I am a long-time SQL Server user, but new to availability groups.

How to move a SharePoint database from SQL Server 2017 to SharePoint on SQL Server 2016

I think we cannot downgrade the database with a detach-and-attach approach, but with a third-party migration tool such as ShareGate, Metalogix, etc. we can do it.

You can also try to work around the database version downgrade at the SQL level by using the database export approach instead of a backup/restore.

You can reference the following thread for the same:

Downgrade the SharePoint database from SQL Server 2008 to SQL Server 2005

A similar question is discussed here; the answer was:

"First, run Test-SPContentDatabase to determine if features or site definitions are missing. If so, you will need to install the missing solutions and site definitions.

Test-SPContentDatabase -Name "wss_content" -WebApplication http://sp76:2222 -ServerInstance "SP76"

If Test-SPContentDatabase ran successfully, run Upgrade-SPContentDatabase, which will migrate your content database schema to the 2013 farm schema.

Upgrade-SPContentDatabase wss_content

Once your database is upgraded, mount it using the Mount-SPContentDatabase cmdlet."

Source:

The content database has a schema version that is not supported by this version

For information on attaching and detaching, see the following MSDN articles:

Attach or detach content databases in SharePoint Server

Move all databases to SharePoint Server

VB connection to Azure SQL

Here is the code to connect to an Azure SQL database through Active Directory, but I can't connect; I must be missing something. I think Azure AD is different, but I'm not sure what I'm missing.

It works fine against the SQL Server I host on AWS, but it doesn't work with Azure.

Sub SQL_Connection()

    Dim con As ADODB.Connection   ' the connection
    Dim rs As ADODB.Recordset     ' the recordset holding the results
    Dim strCon As String          ' the connection string
    Dim SQLStr As String          ' the query text

    Set con = New ADODB.Connection
    Set rs = New ADODB.Recordset

    ' Note: Trusted_Connection=False together with Integrated Security=SSPI is
    ' contradictory, and SSPI (Windows authentication) is not how Azure AD
    ' authenticates - this is the likely reason the connection fails.
    strCon = "Provider=SQLOLEDB;Trusted_Connection=False;Encrypt=True;" & _
             "Data Source=servername.database.windows.net,1433;" & _
             "Initial Catalog=databasename;Integrated Security=SSPI"
    con.Open strCon

    If con.State = adStateOpen Then
        MsgBox "You are now connected!"
        SQLStr = "SELECT TOP (10) * FROM xxxxxxxx"
        rs.Open SQLStr, con, adOpenStatic
        With Worksheets("Sheet1").Range("A6:Z500")
            .ClearContents
            .CopyFromRecordset rs
        End With
        rs.Close
    Else
        MsgBox "Sorry, you don't have access."
    End If

    Set rs = Nothing
    con.Close
    Set con = Nothing
End Sub

mysql 5.6 – Create a type 2 SCD table from the change log table using SQL

I have a change log table that records the transition of a state with its time, for example:

user_id  user_status_from  user_status_to  created_date
7cc4d    A2                A1              2019-11-03 23:04:26
7cc4d    A1                A6              2019-11-03 23:05:28
7cc4d    A6                I4              2019-11-16 10:00:34
7cc4d    I4                A1              2020-03-16 10:00:36

Basically, this table records the transition from one status to another: here the transition A2 -> A1 occurred at 2019-11-03 23:04:26 (first row), then the user moved from A1 -> A6 at 2019-11-03 23:05:28, and so on.

Now I want to write SQL that builds a table showing the history of each status: specifically, which status the user was in from which date to which date. I want to create a table like this:

   userId  status  startingdatetime     endingdatetime
   7cc4d   A1      2019-11-03 23:04:26  2019-11-03 23:05:28
   7cc4d   A6      2019-11-03 23:05:28  2019-11-16 10:00:34
   7cc4d   I4      2019-11-16 10:00:34  2020-03-16 10:00:36
   7cc4d   A1      2020-03-16 10:00:36  2020-04-02 08:24:50

As you can see, this is just a logical view of the table above: it says that a given status was in effect from one date to another.
How can I write SQL to create this final table? (The LAG / LEAD functions are not available in my version of MySQL Server.)
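Since MySQL 5.6 has no window functions, LEAD can be emulated with a self-join: each transition's interval ends at the earliest later transition for the same user. A sketch, assuming the change log table is named change_log and created_date uniquely orders each user's transitions:

SELECT c.user_id                             AS userId,
       c.user_status_to                      AS status,
       c.created_date                        AS startingdatetime,
       -- the open interval of the latest status ends "now"
       COALESCE(MIN(n.created_date), NOW())  AS endingdatetime
FROM change_log AS c
LEFT JOIN change_log AS n
       ON n.user_id = c.user_id
      AND n.created_date > c.created_date
GROUP BY c.user_id, c.user_status_to, c.created_date;

With an index on (user_id, created_date) the join stays reasonably cheap; the last row's ending time is simply the query's run time, matching the 2020-04-02 08:24:50 value in the example output.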

SQL Server – Adding a transactional replication article causes blocking – schema modification (Sch-M) lock

We have transactional replication, initialized from a backup, on a highly transactional OLTP database. When we add new articles (either from SSMS or using T-SQL), there is massive blocking, because the add-article process tries to take a schema modification (Sch-M) lock on all articles in the publication, not just on the article being added. Exactly the same problem is described here.

Does anyone know if this can be avoided?

T-SQL script to add a new article

exec sp_addarticle 
   @publication = N'blocking', 
   @article = N'rep6', 
   @source_owner = N'dbo', 
   @source_object = N'repl6', 
   @type = N'logbased', 
   @identityrangemanagementoption = N'manual', 
   @destination_table = N'repl6', 
   @destination_owner = N'dbo'

EXEC sp_addsubscription
   @publication = 'blocking',
   @article = N'rep6',
   @subscriber = 'sql3s14',
   @destination_db = 'repl2',
   @sync_type = N'replication support only'

Edit:
The problem seems to be with sp_addsubscription. sp_addarticle requires a Sch-M lock only on the article being added, but sp_addsubscription requires a Sch-M lock on all existing articles in the publication. The same applies to running sp_addarticle and then sp_refreshsubscriptions.
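One way to confirm which statement takes the locks is to watch the lock DMVs from another session while the article is being added. A diagnostic sketch (run in the published database):

-- Lists every object currently locked or waiting in Sch-M mode,
-- which shows whether one article or all of them are affected.
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status,
       OBJECT_NAME(resource_associated_entity_id) AS locked_object
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND request_mode = 'Sch-M';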

PHP – Is SQL Injection Still a Bad Thing When the User Is Restricted to Non-Harmful Queries?

Suppose I have a very simple PHP application that acts as the front end for an SQL database. The user enters their query in a field and the app displays the query results in a table.

To prevent a user from changing the tables, the SQL user has read-only permissions only, i.e. when a user tries to enter something like DELETE * FROM persons or DROP TABLE persons, they receive an error message in the text field.
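The restriction described would typically look like this at the database level (a MySQL-flavored sketch; user, password, and schema names are placeholders):

-- Hypothetical read-only account: DELETE/DROP fail with a permissions
-- error, but SELECT succeeds against anything the grant covers.
CREATE USER 'report_user'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT ON app_db.* TO 'report_user'@'%';

Note that such an account can still read every table the grant covers, which is what the question below turns on.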

Is it still considered "bad form" for this web application to be susceptible to SQL injection, given that the user can already run his own (read-only) SQL queries against the database when using the app as intended?

Retrieve rows in insertion ORDER from merged tables in SQL Server

I have 3 tables, "BASE_Customer", "BASE_Invoice", and "BASE_Payment", with the following structure:


Query:

CREATE TABLE BASE_Customer
(
    CustomerId INT IDENTITY(1,1),
    CustomerName VARCHAR(45),
    PRIMARY KEY(CustomerId)
)
INSERT INTO BASE_Customer (CustomerName) VALUES ('LEE')

CREATE TABLE BASE_Invoice
(
    InvoiceId INT IDENTITY(1,1),
    InvoiceDate DATE,
    CustomerId INT NOT NULL,
    InvoiceMethod VARCHAR(45) NOT NULL, -- CASH or CREDIT
    Amount MONEY,
    PRIMARY KEY(InvoiceId)
)
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-16', 1, 'CASH', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-16', 1, 'CREDIT', 2000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-17', 1, 'CREDIT', 500);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 2000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-18', 1, 'CASH', 150);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 3000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-19', 1, 'CASH', 1000);
INSERT INTO BASE_Invoice (InvoiceDate, CustomerId, InvoiceMethod, Amount) VALUES ('2020-01-20', 1, 'CREDIT', 2250);

CREATE TABLE BASE_Payment
(
    PaymentId INT IDENTITY(1,1),
    PaymentDate DATE,
    CustomerId INT NOT NULL,
    InvoiceId INT NULL,
    PaymentMethod VARCHAR(45) NOT NULL, -- CASH or CREDIT, ADVANCE
    Amount MONEY,
    PRIMARY KEY(PaymentId)
)
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-16', 1, 1, 'CASH', 1000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-16', 1, 'CREDIT', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 4, 'ADVANCE', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 2000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 5, 'CASH', 150);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-18', 1, 'CREDIT', 5000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 'CREDIT', 1200);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 7, 'ADVANCE', 500);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-19', 1, 8, 'CASH', 1000);
INSERT INTO BASE_Payment (PaymentDate, CustomerId, InvoiceId, PaymentMethod, Amount) VALUES ('2020-01-20', 1, 9, 'ADVANCE', 750);
  • The customer table is linked to both "BASE_Invoice" and "BASE_Payment" (CustomerId is the foreign key in both tables).
  • When a row is inserted into "BASE_Invoice" with the CASH invoice method, a payment with the CASH payment method is inserted at the same time.

  • When a row is inserted into "BASE_Invoice" with the CREDIT invoice method, sometimes a payment with the ADVANCE payment method is inserted, and sometimes no payment is inserted into the payment table at all.

  • CREDIT payment methods appear only in the payment table.
I need to merge all the tables into one result and get the output in the order shown in the screenshot below.

(screenshot: expected merged output)

The problem is how to merge the tables while preserving the order of the transactions.
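A common approach is a UNION ALL over both transaction tables with a shared ordering key. A sketch, assuming the IDENTITY values reflect insertion order within each table (column aliases are placeholders for whatever the screenshot shows):

-- Merge invoices and payments into one stream, then order by date,
-- with the per-table IDENTITY as a tiebreaker for same-day rows.
SELECT c.CustomerName, t.TxnDate, t.TxnType, t.Amount
FROM (
    SELECT CustomerId, InvoiceDate AS TxnDate,
           InvoiceMethod AS TxnType, Amount,
           InvoiceId AS SortId, 0 AS SortGroup
    FROM BASE_Invoice
    UNION ALL
    SELECT CustomerId, PaymentDate, PaymentMethod, Amount,
           PaymentId, 1
    FROM BASE_Payment
) AS t
JOIN BASE_Customer AS c ON c.CustomerId = t.CustomerId
ORDER BY t.TxnDate, t.SortGroup, t.SortId;

If true cross-table insertion order matters, a shared sequence or a datetime2 insertion timestamp on both tables is the more reliable ordering key than IDENTITY values.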

Can I run a combination of SQL Server Basic availability groups with SQL Replication?

I need to use SQL Server 2016 Standard to keep costs down but still achieve HA. We also need two read nodes for reporting purposes. I wonder whether I can use SQL Server replication to feed the two read nodes. Is this setup possible? What problems might this kind of setup run into later?

postgresql – resolve hierarchy branches by keeping the one with the most revisions, more performantly (SQL)

I have a hierarchical table for my features with a parent-child relationship. There was no uniqueness constraint, which is why the data contains branches in which several children refer to one parent. To add the constraint, I first have to remove the branches.

CREATE TABLE feature_log
(
    feature_id UUID DEFAULT uuid_generate_v4(),
    revision_id UUID DEFAULT uuid_generate_v4(),
    parent_id UUID,
    project_id INTEGER,
    data_hash TEXT,
    created TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW(),
    user_id INTEGER,

    CONSTRAINT pk_feature_log PRIMARY KEY (revision_id),
    CONSTRAINT fk_parent FOREIGN KEY (parent_id) REFERENCES feature_log (revision_id),
    CONSTRAINT fk_feature_data FOREIGN KEY (data_hash) REFERENCES feature_data (hash),
    CONSTRAINT fk_project FOREIGN KEY (project_id) REFERENCES project(id),
    CONSTRAINT uk_feature_log_parent_id UNIQUE (project_id, feature_id, parent_id) -- this was missing
);

CREATE INDEX feature_log_feature_parent_id_idx on myedit.feature_log (feature_id, parent_id);
CREATE INDEX feature_log_project_id_idx on myedit.feature_log (project_id);
CREATE INDEX feature_log_parent_id_idx on myedit.feature_log (parent_id);
CREATE INDEX feature_log_feature_id_idx on myedit.feature_log (feature_id);
CREATE INDEX feature_log_data_hash_idx on myedit.feature_log (data_hash);
CREATE INDEX feature_log_find_all_idx on myedit.feature_log (parent_id, revision_id, project_id);

If data_hash is NULL, the feature is deleted. If a feature has no child, it is a node feature.

To fix this, I have the original table and two auxiliary tables: one holds the resolved data (feature_log_resolve), and the other is used to delete both the identified damaged data and the identified good data (feature_log_clipboard), so that only the features with the problem remain.

The hard part is the last step: if a feature has multiple child nodes, keep the one with the most revisions. Around 700,000 features remain in this part, and the many loops run for days. How can I make this script faster?

CREATE OR REPLACE FUNCTION get_node_features (
  p_only_alive BOOLEAN
)
RETURNS SETOF feature_log_clipboard AS
$$
	SELECT * FROM feature_log_clipboard AS child
    WHERE (
        (p_only_alive AND child.data_hash IS NOT NULL)
        OR
        NOT p_only_alive
    )
    AND NOT
        (EXISTS
            (SELECT parent.parent_id FROM feature_log_clipboard AS parent
            WHERE parent.parent_id = child.revision_id
            AND parent.project_id = child.project_id)
        );
$$
LANGUAGE 'sql';



CREATE OR REPLACE FUNCTION get_biggest_revision (
    p_only_alive BOOLEAN
)
RETURNS TABLE(
    generation INTEGER,
    feature_id UUID,
    revision_id UUID,
    parent_id UUID,
    project_id INTEGER,
    data_hash TEXT,
    created TIMESTAMP WITHOUT TIME ZONE,
    user_id INTEGER,
    row_number BIGINT
) AS
$$
    WITH RECURSIVE
    feature_hierarchy(generation, feature_id, revision_id, parent_id, project_id, data_hash, created, user_id) AS
    (
        SELECT 0, firstGeneration.feature_id, firstGeneration.revision_id, firstGeneration.parent_id, firstGeneration.project_id, firstGeneration.data_hash, firstGeneration.created, firstGeneration.user_id
            FROM feature_log_clipboard AS firstGeneration
            WHERE parent_id NOT IN (SELECT revision_id FROM feature_log_clipboard)
        UNION ALL
        SELECT parent.generation + 1, nextGeneration.feature_id, nextGeneration.revision_id, nextGeneration.parent_id, nextGeneration.project_id, nextGeneration.data_hash, nextGeneration.created, nextGeneration.user_id
            FROM feature_log_clipboard AS nextGeneration
            INNER JOIN feature_hierarchy AS parent ON nextGeneration.parent_id = parent.revision_id
    )

    SELECT *, ROW_NUMBER() OVER(PARTITION BY p.feature_id ORDER BY p.generation DESC, p.created DESC) AS row_number
            FROM feature_hierarchy AS p
            WHERE revision_id IN (SELECT revision_id FROM get_node_features(p_only_alive))

$$
LANGUAGE 'sql';

CREATE OR REPLACE FUNCTION get_biggest_revision_hierarchy (
    p_only_alive BOOLEAN
)
RETURNS TABLE(
    feature_id UUID,
    revision_id UUID,
    parent_id UUID,
    project_id INTEGER,
    data_hash TEXT,
    created TIMESTAMP WITHOUT TIME ZONE,
    user_id INTEGER,
    row_number BIGINT
) AS
$$
    WITH RECURSIVE
    biggest_revision_hierarchy AS (
        SELECT lastGeneration.feature_id, lastGeneration.revision_id, lastGeneration.parent_id, lastGeneration.project_id, lastGeneration.data_hash, lastGeneration.created, lastGeneration.user_id, lastGeneration.row_number
            FROM get_biggest_revision(p_only_alive) AS lastGeneration
            WHERE lastGeneration.row_number = 1
        UNION ALL
        SELECT nextGeneration.feature_id, nextGeneration.revision_id, nextGeneration.parent_id, nextGeneration.project_id, nextGeneration.data_hash, nextGeneration.created, nextGeneration.user_id, child.row_number
            FROM feature_log_clipboard AS nextGeneration
        INNER JOIN biggest_revision_hierarchy AS child ON nextGeneration.revision_id = child.parent_id
    )

    SELECT * FROM biggest_revision_hierarchy;
$$
LANGUAGE 'sql';

CREATE OR REPLACE FUNCTION delete_moved_features ()
RETURNS VOID AS
$$
    DELETE FROM feature_log_clipboard
    WHERE feature_id IN (
        SELECT DISTINCT feature_id
        FROM feature_log_resolve
        WHERE parent_id IS NOT NULL
    )
$$
LANGUAGE 'sql';


-- copy node features of latest generation with their complete hierarchy to resolve table
INSERT INTO feature_log_resolve (feature_id, revision_id, parent_id, project_id, data_hash, created, user_id)
SELECT feature_id, revision_id, parent_id, project_id, data_hash, created, user_id FROM get_biggest_revision_hierarchy(TRUE);

-- delete whole feature_id of the node features of latest generation from clipboard
SELECT * FROM delete_moved_features();

-- copy node features of latest generation with their complete hierarchy to resolve table
INSERT INTO feature_log_resolve (feature_id, revision_id, parent_id, project_id, data_hash, created, user_id)
SELECT feature_id, revision_id, parent_id, project_id, data_hash, created, user_id FROM get_biggest_revision_hierarchy(FALSE);

-- delete whole feature_id of the node features of latest generation from clipboard
SELECT * FROM delete_moved_features();
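One assumption-based idea for speeding this up (a sketch, not tested against this schema): compute every subtree's revision count once with a single recursive CTE, instead of re-walking the hierarchy through the set-returning functions on each pass. The child with the largest subtree_size under a shared parent would be the branch to keep:

-- Enumerate (root, descendant) pairs once, then count per root.
-- Caveat: this materializes every ancestor/descendant pair, so it is
-- only viable when the hierarchies are reasonably shallow.
WITH RECURSIVE subtree AS (
    SELECT revision_id AS root_id, revision_id
    FROM feature_log_clipboard
    UNION ALL
    SELECT s.root_id, c.revision_id
    FROM feature_log_clipboard AS c
    JOIN subtree AS s ON c.parent_id = s.revision_id
)
SELECT root_id, COUNT(*) AS subtree_size
FROM subtree
GROUP BY root_id;

Materializing this into a temporary table and joining against it would let the branch selection become a single set-based DELETE rather than a per-feature loop.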