Efficient locking for a conditional insert inside a larger transaction

I have a script that merges incoming data into a table; it includes a conditional insert to capture some related data.

For example, where #Records contains the incoming data:

BEGIN TRANSACTION

INSERT INTO Batch (Id)
SELECT DISTINCT BatchId
FROM #Records
WHERE BatchId NOT IN (SELECT Id FROM Batch)

-- Do a MERGE here with lots of records that might take a while

COMMIT TRANSACTION

The problem with this insert is that it may fail if run concurrently (due to a primary key constraint on Batch).

The advice I found was that I should use UPDLOCK and HOLDLOCK to lock the Batch table for this insert:

INSERT INTO Batch (Id)
SELECT DISTINCT BatchId
FROM #Records
WHERE BatchId NOT IN (SELECT Id FROM Batch WITH (UPDLOCK, HOLDLOCK))

It also suggested that I wrap that insert in its own transaction. However, irrespective of whether I use a “nested” transaction here or not, my understanding is that the lock on Batch will be held until the end of the “outer” transaction. Is this correct?
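
For clarity, this is roughly the shape I understand that advice to take (my sketch of it, not the advice verbatim):

BEGIN TRANSACTION

-- "Nested" transaction around the conditional insert, as suggested.
-- In SQL Server the inner COMMIT only decrements @@TRANCOUNT; it does not
-- release the locks taken by the insert.
BEGIN TRANSACTION

INSERT INTO Batch (Id)
SELECT DISTINCT BatchId
FROM #Records
WHERE BatchId NOT IN (SELECT Id FROM Batch WITH (UPDLOCK, HOLDLOCK))

COMMIT TRANSACTION -- locks on Batch are still held at this point

-- Do the MERGE here with lots of records that might take a while

COMMIT TRANSACTION -- locks are only released at this outermost commit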

As the other statements in the transaction may take a while, I’d like to avoid the Batch table being locked until the end.

What is the most efficient way to do this?

mouse – Kubuntu: how to make the close button larger

On Windows, I could just throw my mouse somewhere to the top right and always hit the close button if the window was maximized. On Kubuntu/KDE, however, the circular hitbox doesn’t cover the topmost right point of my screen. Is there any way to make the button’s hitbox large enough that I hit it when my mouse is in the topmost right corner, while the drawn button stays as small, tidy and circular as ever? I would love to keep my current theme.
Also, I tested all the square button themes that came with Kubuntu, but no themed button seems to reach the topmost right corner.

Thanks in advance.

How to set the InnoDB limit to a larger value

I was having trouble importing one table in my huge MySQL import, and it kept telling me that my table is too long. After some research I found that in MariaDB there is an ini file where you can set the limit to anything your little heart desires. So I changed it, and all is well: I can now load huge table widths with tons of columns, no longer exceeding the 8,164 limit that is the normal setting. But lo and behold, when I went to look for my ini file on the web so I could go live and not run…
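
A sketch of the kind of setting involved (these are my guesses at the relevant variables, not necessarily the exact one I changed):

-- Check the current values on the server (MariaDB/MySQL). The per-row limit is
-- roughly half the InnoDB page size, so innodb_page_size is the usual lever; it
-- lives in my.ini/my.cnf under [mysqld] and can only be set before a new data
-- directory is initialised.
SHOW VARIABLES LIKE 'innodb_page_size';    -- default 16384 (16 KB), giving roughly an 8 KB row limit
SHOW VARIABLES LIKE 'innodb_strict_mode';  -- when OFF, some row-size errors are downgraded to warnings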

sql server 2012 – How can this larger subquery be (much) faster?

In my organisation, Ricoh is the supplier of printers and copiers. Papercut is used to register which copies are made and for which account. Another application working with Ricoh registers copies in a different database, but the two are somewhat linked.

My organisation wanted a central place to retrieve the total cost, so I wrote some queries. Most work fine, but something weird is going on with the one below and I’m unable to figure out why. What could be the reason for the following issue?

It’s hard to make a db<>fiddle as the databases are quite complicated. I’ll try to explain the issue here.

In essence there are two main tables. The table maintained by Ricoh contains a client identifier (ClientId), a price (the cost of the copy work), print timestamps and a job identifier. The table maintained by Papercut also has a timestamp, a printed flag and a reference to the corresponding Ricoh JobId embedded in its job_comment.
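
Simplified, the two tables look roughly like this (a sketch only; the column types are my best guess and the real schemas are more complicated):

CREATE TABLE ricoh.printhistory (
    SystemId           int,           -- client identifier (ClientId in the query below)
    ProcessInternalUid varchar(36),   -- job identifier (JobId)
    price              float,         -- cost of the copy work
    Exportdate         datetime       -- print timestamp
);

CREATE TABLE papercut.printer_usage_log (
    usage_date  datetime,             -- timestamp
    printed     char(1),              -- 'Y' / 'N' flag
    job_comment varchar(255)          -- contains the Ricoh JobId at position 16, length 36
);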

Now consider the following query, which gives the total print cost for jobs printed using the separate Ricoh application.

SELECT ricoh.ClientId,
       SUM(ricoh.price) AS cost
FROM
  (SELECT a.SystemId as ClientId,
          a.ProcessInternalUid as JobId,
          a.price
   FROM   ricoh.printhistory AS a
   WHERE a.Exportdate >= '2020-10-01'
     AND a.Exportdate <= '2020-12-12') AS ricoh
INNER JOIN
  (SELECT substring(b.job_comment, 16, 36) AS JobId
   FROM  papercut.printer_usage_log AS b
   WHERE b.usage_date >= '2020-10-01'
     AND b.usage_date <= '2020-12-12'
     AND b.printed = 'Y'
   GROUP BY substring(b.job_comment, 16, 36)) AS papercut ON (papercut.JobId = ricoh.JobId)
GROUP BY ricoh.ClientId

If I run this query I get instant results; the subqueries ricoh and papercut return 96,321 and 9,354 rows respectively.

But if I change the lower bound of the date range to 2020-11-01, the query takes 49 minutes (!), while the subqueries return 46,223 and 4,547 records respectively.

How can this be? How can a join over larger subqueries run so much faster? I’m really puzzled by this. What could be a reason for this increase in time? (Then I can try to debug further.)

I’ve done some (initial) debugging and noticed the following:

  • There is a limit date. If I set the lower bound to anything later than 2020-10-08, the execution time gets large. I went looking for troublesome records with that timestamp but found none. Also, choosing any later date, even 2020-12-01, results in high execution times.
  • If I remove the GROUP BY aggregation (and the SUM), execution is instantaneous. But if I add an ORDER BY ricoh.price, execution time rises again. Could something with the price be an issue? It’s a floating-point number. I tried converting it to an integer but the issue remains. (One way I could try to isolate this is sketched below.)
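
To check whether the slowdown comes from the plan chosen for the combined query rather than from the data itself, one thing I could try is materialising both subqueries into temp tables and joining those (a rough sketch using the same columns as above):

-- Materialise each subquery for the slow date range, then join the temp tables.
-- If this stays fast, the regression is likely a cardinality-estimate / plan issue
-- in the combined query rather than anything wrong with the rows themselves.
SELECT a.SystemId           AS ClientId,
       a.ProcessInternalUid AS JobId,
       a.price
INTO   #ricoh
FROM   ricoh.printhistory AS a
WHERE  a.Exportdate >= '2020-11-01'
  AND  a.Exportdate <= '2020-12-12';

SELECT substring(b.job_comment, 16, 36) AS JobId
INTO   #papercut
FROM   papercut.printer_usage_log AS b
WHERE  b.usage_date >= '2020-11-01'
  AND  b.usage_date <= '2020-12-12'
  AND  b.printed = 'Y'
GROUP BY substring(b.job_comment, 16, 36);

SELECT r.ClientId,
       SUM(r.price) AS cost
FROM   #ricoh AS r
INNER JOIN #papercut AS p ON p.JobId = r.JobId
GROUP BY r.ClientId;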

8 – How can we make the Body edit input area larger?

The Body field of the content on our website is generally quite large (multiple pages). When editing in Drupal 8, the input area is quite small: 98 characters wide by 9 lines. The first thing we do is drag the bottom-right corner of the input area to increase its size to 30-40 lines. This makes editing much easier.

Is there a way to make the Body input area 30 lines long for our website? This would eliminate one step on every edit!

ssd – What happens to empty space when cloning to a larger disk?

I am trying to learn about cloning a small SSD to a larger SSD using Macrium. My OS (Windows 10) is installed on a 120 GB SSD but I need to clone the OS to a 500 GB SSD and then insert the new SSD into the laptop. I’m doing this because there are some programs I need which take up a lot of space when installed. I have all the hardware so I’m ready to go. However, I’m having trouble understanding the process. The process is described on the Macrium website here, but it looks like that process results in unused space. I am trying to avoid having a large unallocated space as happened to this user.

This is what the program is showing me:
[two screenshots from Macrium showing the proposed clone layout]

There should be around 357 GB of free space after cloning, but is that space still part of the C drive? When programs are installed on the C drive, will they be able to use that space? Or is it just extra space for non-installed files like videos or pictures? How can I make sure that the new C drive will use that extra space for large program installations?

dnd 5e – Checking CR calculation for homebrew larger version of Giant Ape

I am considering allowing a larger version of a Giant Ape, in which some of the stats have been embiggened to achieve the low end of CR8.

I would appreciate a confirmation that I am calculating this correctly.

In particular, it appears that for what the DMG calls Damage Per Round, what is actually assessed is average damage per round, assuming that the maximum number of its most damaging attacks hit.

CR7 Giant Ape
AC12, 157hp (15d12+60), At. 2 @ +9 to hit, 3d10+6 dmg, DPR 45

CR8 Proposed Giant Ape (all other stats as RAW Giant Ape)
AC13, 168hp (16d12+64), At. 2 @ +9 to hit, 3d12+6 dmg, DPR 51
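
For reference, those DPR figures are the average damage assuming both attacks hit:

3d10 + 6 per hit: 3 × 5.5 + 6 = 22.5; two attacks: 2 × 22.5 = 45 DPR
3d12 + 6 per hit: 3 × 6.5 + 6 = 25.5; two attacks: 2 × 25.5 = 51 DPR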

CR7 Giant Ape Calculation
Hp 146-160 = CR6, AC is more than two below 15, adjust DCR -1 to 5
DPR 45-50 = CR7, to hit is more than two above +6, adjust OCR +1 to 8
Average CR 6.5, round to CR7

CR8 Proposed Giant Ape Calculation
Hp 161-175 = CR7, AC is two below 15, adjust DCR -1 to 6
DPR 51-56 = CR8, to hit is more than two above +6, adjust OCR +1 to 9
Average CR 7.5, round to CR8