sql – Splitting a single column into multiple delimiter-separated columns

I have a column whose values look like this:
– Ahmad s/o Ali r/o Lahore
– Ali s/o Talib r/o Makkah
– Noor s/o Ahmad r/o Karachi
– MacDonald company
– ABC Bank
– Sara owner SA Traders

I want to split the value into separate columns wherever s/o and r/o occur. In some rows, however, s/o and r/o may not occur at all; in that case all the data should stay in a single column. A sketch of one approach is below.
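A minimal sketch, assuming SQL Server (the dialect is not stated in the question) and a hypothetical table dbo.People with the raw text in a column FullInfo. CHARINDEX returns 0 when a marker is missing, which is what routes marker-less rows into the first column:

    -- Locate the markers once in a derived table, then slice around them.
    SELECT
        RTRIM(CASE WHEN so_pos > 0 THEN LEFT(FullInfo, so_pos - 1)
                   WHEN ro_pos > 0 THEN LEFT(FullInfo, ro_pos - 1)
                   ELSE FullInfo END)                                    AS PersonName,
        CASE WHEN so_pos > 0 AND ro_pos > 0
                 THEN LTRIM(RTRIM(SUBSTRING(FullInfo, so_pos + 3, ro_pos - so_pos - 3)))
             WHEN so_pos > 0
                 THEN LTRIM(SUBSTRING(FullInfo, so_pos + 3, LEN(FullInfo)))
        END                                                              AS FatherName,
        CASE WHEN ro_pos > 0
                 THEN LTRIM(SUBSTRING(FullInfo, ro_pos + 3, LEN(FullInfo)))
        END                                                              AS Residence
    FROM (
        SELECT FullInfo,
               CHARINDEX('s/o', FullInfo) AS so_pos,
               CHARINDEX('r/o', FullInfo) AS ro_pos
        FROM dbo.People
    ) AS p;

A row like 'Sara owner SA Traders' has neither marker, so the whole string lands in PersonName and the other two columns stay NULL.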

magento2 – Magento SQL Query Tuning

My Magento store, which I took over from another developer, currently shows a high Time to First Byte (TTFB) of about 1 s. I realize that much more goes into this number than SQL queries alone. However, the Magento SQL profiler shows that loading index.php executes about 350 SQL queries, amounting to around 0.50 s of processing time.

I want to tune this to improve page load times. I understand MySQL relatively well, but I'm new to Magento. Where should I start in order to optimize, remove, or replace some of these queries? Many thanks!
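One hedged starting point on the MySQL side, independent of Magento internals: enable the slow query log with a low threshold so the worst of the ~350 queries surface first (requires sufficient server privileges; pt-query-digest or a manual read of the log then shows which queries to attack):

    -- Log every statement slower than 50 ms.
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 0.05;

    -- Find out where the log is written.
    SHOW VARIABLES LIKE 'slow_query_log_file';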

sql – Find pairs of reviewers so that both reviewers gave the same book a rating

Description:

Examples

  • Find pairs of reviewers such that both reviewers gave the same book a rating
  • Eliminate duplicates; do not pair a reviewer with themselves
  • For each pair, return the names of the pair's two reviewers in alphabetical order

Solution

http://www.sqlfiddle.com/#!17/c1fc4/3/0

SELECT MIN(rev1.name) AS name1, MAX(rev2.name) AS name2
FROM ratings AS r1
JOIN ratings AS r2 ON r1.book_id = r2.book_id AND r1.reviewer_id != r2.reviewer_id
JOIN reviewers AS rev1 ON r1.reviewer_id = rev1.id
JOIN reviewers AS rev2 ON r2.reviewer_id = rev2.id
GROUP BY r1.book_id
ORDER BY name1, name2

Can I write this query in a more readable way, and can it perform better?
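One possible rewrite, assuming the schema implied by the query above (ratings(book_id, reviewer_id) and reviewers(id, name)) and that reviewer names are unique. Filtering on name1 < name2 removes self-pairs and mirrored duplicates in one step and yields each pair already in alphabetical order, with no GROUP BY needed:

    SELECT DISTINCT rev1.name AS name1,
                    rev2.name AS name2
    FROM ratings AS r1
    JOIN ratings AS r2 ON r2.book_id = r1.book_id
    JOIN reviewers AS rev1 ON rev1.id = r1.reviewer_id
    JOIN reviewers AS rev2 ON rev2.id = r2.reviewer_id
    WHERE rev1.name < rev2.name   -- one row per pair, names already ordered
    ORDER BY name1, name2;

DISTINCT is still needed because the same pair can share more than one book. Note this differs slightly from the GROUP BY r1.book_id version, which returns at most one pair per book.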

SQL Server – DBCC checkdb on tempdb

Supplement to Erik's answer

Running DBCC CHECKDB on tempdb does not perform any allocation or catalog checks, and it must acquire shared table locks to perform its table checks. This is because, for performance reasons, database snapshots are not available on tempdb, which means the required transactional consistency cannot be obtained.
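For illustration, this is what such a check looks like; because of the snapshot limitation above, it takes shared table locks while it runs:

    -- Table checks only on tempdb; allocation and catalog checks are skipped.
    DBCC CHECKDB (tempdb) WITH NO_INFOMSGS, ALL_ERRORMSGS;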

The only reason I can think of for running CHECKDB against tempdb is if tempdb becomes badly corrupted and the sessions that use it start getting errors.

And even if tempdb is corrupted, it is possible that your user databases have corruption as well.

Personally, I do not run CHECKDB on tempdb.

SQL Server – Cannot insert duplicate key row into a non-unique index?

We've hit this weird error three times in the past few days, after eight weeks without a problem, and I'm at a loss.

This is the error message:

The execution of the query "EXEC dbo.MergeTransactions" failed with the
following error: "Cannot insert duplicate key row in object
'sales.Transactions' with unique index
'NCI_Transactions_ClientID_TransactionDate'. The duplicate key value
is (1001, 2018-12-14 19:16:29.00, 304050920)."

The index we have is not unique. And notice that the duplicate key value in the error message does not even match the index definition. Strangely enough, when I rerun the procedure, it goes through fine.

This is the closest link I could find to my problem, but I see no solution there.

Error: Cannot insert duplicate key row in… a non-unique index?!

A few things about my scenario:
* The procedure updates the TransactionID (part of the primary key). I think that is what causes the error, but I do not know why; we are going to remove that logic.
* Change tracking is enabled on the table
* The transaction runs under READ UNCOMMITTED isolation

Primary key

    CONSTRAINT [PK_Transactions_TransactionID] PRIMARY KEY CLUSTERED
(
    [TransactionID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE) ON [Data]
) ON [Data]

Non-clustered index

CREATE NONCLUSTERED INDEX [NCI_Transactions_ClientID_TransactionDate] ON [sales].[Transactions]
(
    [ClientID] ASC,
    [TransactionDate] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, DATA_COMPRESSION = PAGE)

Example of an update statement

UPDATE t
SET t.TransactionID = s.TransactionID,
    t.[CityCode] = s.[CityCode],
    t.[PackageMonths] = s.[PackageMonths],
    t.UpdateDate = @UpdateDate
FROM #workingTransactions s
INNER JOIN [DB].[sales].[Transactions] t
    ON s.[TransactionID] = t.[TransactionID]
WHERE CAST(HASHBYTES('SHA2_256', CONCAT(s.[BusinessTransactionID], '', s.[BusinessUserID], '', etc)
   <> CAST(HASHBYTES('SHA2_256', CONCAT(t.[BusinessTransactionID], '', t.[BusinessUserID], '', etc)

My question is: what is going on under the hood, and what is the solution? For reference, the link above mentions the following:

At this point I have a few theories:

  • A bug related to memory pressure or a large parallel update plan, but I would expect a different type of error, and so far I cannot correlate low resources with the time frames of these isolated and sporadic errors.
  • A bug in the UPDATE statement or the data causes an actual duplicate primary-key violation, but an obscure SQL Server bug results in an error message that names the wrong index.
  • Dirty reads caused by READ UNCOMMITTED isolation lead a large parallel update to insert rows twice. But the ETL developers claim the default READ COMMITTED is used, and it is hard to pinpoint what isolation level the procedure actually uses at run time (one way to check is sketched after this list).
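One way to settle the isolation-level question while the ETL job is running is to query the sessions DMV (requires VIEW SERVER STATE permission):

    -- transaction_isolation_level: 0 = Unspecified, 1 = READ UNCOMMITTED,
    -- 2 = READ COMMITTED, 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT
    SELECT session_id, transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;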

I suspect that if I tweak the execution plan as a workaround, with a MAXDOP (1) hint or a session trace flag to disable the spool operation, the error will merely go away, but it is unclear how this would affect performance.
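Purely as an illustration of that workaround, here is a trimmed-down version of the UPDATE with both hints applied (hypothetical and untested against this schema; USE HINT is available from SQL Server 2016 SP1 onward):

    UPDATE t
    SET t.UpdateDate = @UpdateDate
    FROM #workingTransactions s
    INNER JOIN [DB].[sales].[Transactions] t
        ON s.[TransactionID] = t.[TransactionID]
    OPTION (MAXDOP 1, USE HINT ('NO_PERFORMANCE_SPOOL'));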

Version

Microsoft SQL Server 2017 (RTM-CU13) (KB4466404) – 14.0.3048.4 (X64)
November 30, 2018 12:57:58
Copyright (C) 2017 Microsoft Corporation
Enterprise Edition (64-bit) on Windows Server 2016 Standard 10.0 (Build 14393: )

Apps on GKE cannot connect to Cloud SQL via private IP

Using the Google Cloud Console:

  • Created a VPC-native cluster.
  • Used the project's default VPC.
  • Cluster and SQL instance are in the same region and zone.
  • Pod IP address range: 10.4.0.0/14
  • Services IP address range: 10.0.0.0/20
  • Private IP created for the existing Cloud SQL instance.
  • VPC peering associated with the target IP range: 10.97.64.0/24

The networking side seems to be fine. The connection to the SQL instance is established but then aborted. The problem appears to be MySQL-specific. I use HikariCP with these settings:

  • Connection timeout: 30 s
  • Idle timeout: 10 min
  • Max lifetime: 60 min
  • Maximum pool size: 10 (also tested with a larger size)

The error "Connection aborted" still appears. Any ideas for troubleshooting and fixing would be really helpful.