We've hit this strange error three times in the past few days, after running flawlessly for eight weeks, and I'm at a loss.
This is the error message:

```
The execution of the query "EXEC dbo.MergeTransactions" failed with the
following error: "Cannot insert duplicate key row in object
'sales.Transactions' with unique index
'NCI_Transactions_ClientID_TransactionDate'. The duplicate key value
is (1001, 2018-12-14 19:16:29.00, 304050920)."
```
Our index is NOT unique. And notice that the duplicate key value in the error message doesn't even match the index. Strangely enough, rerunning the proc succeeds.
This is the closest post I could find to my problem, but I see no solution:
Error: Cannot insert duplicate key row in… a non-unique index?!
A few things about my scenario:
* The proc updates TransactionID, which is part of the primary key. I think this is the cause of the error, but I don't know why; we will be removing that logic.
* Change tracking is enabled for the table
* The table is read under READ UNCOMMITTED isolation
```sql
CONSTRAINT [PK_Transactions_TransactionID] PRIMARY KEY CLUSTERED
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE) ON [Data]
) ON [Data]
```
```sql
CREATE NONCLUSTERED INDEX [NCI_Transactions_ClientID_TransactionDate] ON [sales].[Transactions]
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE)
```
Example of the UPDATE statement:

```sql
UPDATE t
SET t.transactionid = s.transactionid,
    t.[PackageMonths] = s.[PackageMonths],
    t.UpdateDate = @UpdateDate
FROM #WorkingTransactions s
JOIN [DB].[sales].[Transactions] t
    ON s.[TransactionID] = t.[TransactionID]
WHERE CAST(HASHBYTES('SHA2_256', CONCAT(s.[BusinessTransactionID], '', s.[BusinessUserID], '', etc)
   <> CAST(HASHBYTES('SHA2_256', CONCAT(t.[BusinessTransactionID], '', t.[BusinessUserID], '', etc)
```
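As an aside, one thing I've noticed while reviewing the statement above: CONCAT with empty-string separators can make distinct rows hash identically (e.g. ('ab', 'c') and ('a', 'bc') concatenate to the same string). A minimal sketch of the same change-detection pattern with an explicit delimiter and an explicit CAST target (column names beyond the two shown are placeholders, not our real schema):

```sql
-- Sketch only: row-diff via hashes, with a '|' delimiter so adjacent
-- columns cannot run together and collide, and BINARY(32) for SHA2_256.
UPDATE t
SET    t.[PackageMonths] = s.[PackageMonths],
       t.UpdateDate      = @UpdateDate
FROM   #WorkingTransactions s
JOIN   [sales].[Transactions] t
       ON s.[TransactionID] = t.[TransactionID]
WHERE  CAST(HASHBYTES('SHA2_256',
           CONCAT(s.[BusinessTransactionID], '|', s.[BusinessUserID])) AS BINARY(32))
    <> CAST(HASHBYTES('SHA2_256',
           CONCAT(t.[BusinessTransactionID], '|', t.[BusinessUserID])) AS BINARY(32));
```

Note this sketch also leaves TransactionID out of the SET list, which is the logic we plan to remove anyway.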
My question is, what's going on under the hood? And what is the solution? For reference, the above link mentions the following:
At this point I have a few theories:
- A bug related to memory pressure or a large parallel update plan, but I would expect a different kind of error, and so far I cannot correlate low resources with the timeframes of these isolated and sporadic failures.
- A bug in the UPDATE statement or in the data causes an actual duplicate primary key violation, but some obscure SQL Server bug reports the error against the wrong index name.
- Dirty reads caused by read-uncommitted isolation lead to a double insert within the large parallel update. But the ETL developers claim the default read committed is used, and it is difficult to pinpoint what isolation level the process actually uses at runtime.
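Rather than taking the isolation level on faith, the level a session is actually running under can be read from the DMVs. A minimal sketch (the session id is a placeholder for the ETL session's id):

```sql
-- Inspect the isolation level a session is actually running under.
-- transaction_isolation_level: 0 = Unspecified, 1 = READ UNCOMMITTED,
-- 2 = READ COMMITTED, 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT.
SELECT session_id,
       transaction_isolation_level
FROM   sys.dm_exec_sessions
WHERE  session_id = 52;  -- placeholder: substitute the ETL session's id
```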
I suspect that if I tweak the execution plan as a workaround, say with a MAXDOP (1) hint or a session trace flag to disable the spool operation, the error will simply disappear, but it is unclear how that would affect performance.
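For completeness, the workarounds I have in mind would look roughly like this. Note these are untested sketches: 8690 is the trace flag usually cited for disabling nested-loop performance spools, and `NO_PERFORMANCE_SPOOL` is the documented hint equivalent on 2016 SP1+; whether either actually avoids the error here is exactly what I don't know.

```sql
-- Workaround sketches appended to the problematic UPDATE:

-- 1) Force a serial plan:
-- ... UPDATE statement as above ...
OPTION (MAXDOP 1);

-- 2) Or suppress the performance spool for this query only:
-- OPTION (USE HINT('NO_PERFORMANCE_SPOOL'));
-- (older equivalent: OPTION (QUERYTRACEON 8690))
```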
Microsoft SQL Server 2017 (RTM-CU13) (KB4466404) – 14.0.3048.4 (X64)
November 30, 2018 12:57:58
Copyright (C) 2017 Microsoft Corporation
Enterprise Edition (64-bit) on Windows Server 2016 Standard 10.0 (Build 14393)