What contributes to the transaction size on a MySQL UPDATE?

Background

We’re running a MySQL 8 InnoDB Cluster behind some Java services. We received an error from the application because we’d exceeded the group replication transaction size limit (“Error on observer while running replication hook ‘before_commit’”). Our limit was set to 150 MB at the time.

Problem

Looking at the transaction involved, I don’t understand how it could have come to anything like 150 MB.

It involved an update to two tables:

UPDATE my_table mt
INNER JOIN my_table_aud mta ON mt.id = mta.id
   SET mt.boolean_column_1 = TRUE,
       mt.boolean_column_2 = TRUE,
       mt.varchar_column = COALESCE(mt.varchar_column, ?2),
       mta.varchar_column = COALESCE(mta.varchar_column, ?2)
 WHERE mt.boolean_column_1 = FALSE
   AND mta.rev <= ?1

which matched approximately 100 rows in my_table and maybe 200 rows in my_table_aud, plus one other simple insert to a different table. The varchar columns were updated with around 10 bytes of data.

However, the two tables involved in the UPDATE do both have a different longtext column, which wasn’t updated. Each updated row held on average maybe 1 MB of text in those columns.

The only explanation I can think of for exceeding the transaction limit is that the text in the longtext columns contributed to the transaction size, even though they were not referenced in the update.
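
To sanity-check that hypothesis, here is my back-of-envelope arithmetic. The assumption (mine, not confirmed) is that with row-based replication and the default binlog_row_image=FULL, every updated row is logged with complete before and after images, longtext columns included:

# Rough estimate, ASSUMING full before/after row images are logged
# for every updated row (binlog_row_image=FULL, the MySQL default).
rows_my_table = 100       # rows matched in my_table
rows_my_table_aud = 200   # rows matched in my_table_aud
longtext_mb_per_row = 1   # average longtext payload per row, in MB
images_per_row = 2        # one before image + one after image

estimate_mb = (rows_my_table + rows_my_table_aud) * longtext_mb_per_row * images_per_row
print(estimate_mb)        # ~600 MB, comfortably over the 150 MB limit

If that assumption holds, the numbers would easily explain the error.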

I searched for documentation on what contributes to a transaction’s size in MySQL and haven’t been able to find anything useful.

Can someone help me understand how the transaction size limit might have been exceeded in this scenario?

optimization – Given consumer-grade hardware, what is a reasonable upper bound for the size of a search space?

There’s a gacha RPG I’m trying to get better at.

I estimate there are about 10^15 states across the three opening rounds of a match, for which I am trying to evaluate damage output.

The equations themselves aren’t complicated: mostly linear ones, with the odd division and factorial of small integers (less than n = 10) thrown in.

I’m trying to differentiate between “easy”, “tough but doable if you know what you’re doing” and “don’t even think about it”.

I suspect I’ll have to simplify the problem further, but is there a way to know at which point brute force becomes unreasonable before committing to code? I only have access to consumer-grade hardware (and Google Colab).
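
For what it’s worth, here is my own rough attempt at such an estimate. The throughput figures are order-of-magnitude guesses for consumer hardware, not measurements:

# Rough feasibility estimate for brute-forcing ~10^15 states.
# The states/second figures are GUESSES, not benchmarks.
states = 10**15
for label, rate in (("pure Python", 1e6),
                    ("NumPy / compiled", 1e8),
                    ("tuned SIMD / GPU", 1e9)):
    days = states / rate / 86400
    print(f"{label:>16}: ~{days:,.0f} days")
# pure Python:      ~11,574 days (~32 years) -> don't even think about it
# NumPy / compiled: ~116 days                -> tough but doable
# tuned SIMD / GPU: ~12 days                 -> doable if parallelised

Even under these guesses, the verdict swings from hopeless to feasible purely on implementation quality, which is part of what I’m trying to pin down.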

Best practice for comma-separated input size for the search field

So to reiterate:

  • Users have .csvs or other files where large numbers of IMEIs are listed
  • They need to be able to search for these IMEIs in your system

Ideally you’d have access to analytics or user interviews that could help you define the upper limit on how many IMEIs users search for at once. It sounds like you don’t have access to either, so in the meantime we can make a few assumptions.

As you said, the most likely scenario is that they’ll be copying and pasting these numbers, as they are difficult to input correctly due to their length. They likely won’t be checking their work, again due to length, so displaying the full pasted content is mostly irrelevant – you can display “1234567890abcde, 1234567890abcde, and 498 more”, which should give them enough information about their search to complete their task.
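
A minimal sketch of that truncated-display idea (the function name, cutoff, and example IMEI value are all illustrative, not from any particular framework):

# Minimal sketch of the truncated-display idea; names are illustrative.
def summarize_imeis(imeis, shown=2):
    """Render a pasted IMEI list as 'a, b, and N more'."""
    if len(imeis) <= shown:
        return ", ".join(imeis)
    return f"{', '.join(imeis[:shown])}, and {len(imeis) - shown} more"

pasted = [str(490154203237518 + i) for i in range(500)]
print(summarize_imeis(pasted))
# -> 490154203237518, 490154203237519, and 498 more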

The main bottleneck will likely be your backend system, not the UI. If you paste 500 IMEIs, how fast does the system respond? If it slows at any point and affects the UX, you’ve found your limit. If it responds adequately for 10,000 IMEIs, there’s little reason to limit it at all.

Search field comma-separated input size best practice

Suppose you are working with a large number of entries in a web UI, such as an IoT device list keyed by IMEI, and you need to make quick multiple selections by IMEI.
Would it be good practice to provide a search field that accepts comma-separated IMEIs?
In most cases those IMEIs will be pasted into the search field.

What would be a sane maximum limit for such input?
50 IMEIs (850 symbols), 100 IMEIs (1700 symbols), or even more?
What is the main bottleneck for such a solution?

I understand a CSV file import could be used, but we want a quicker solution.

magento2 – Increase product image size in Amasty’s One Step Checkout Magento 2

Amasty set their product image size to 75px in the ‘Your Order’ / ‘Order Summary’ area, and mine is displaying at half that due to some CSS I can’t locate.

To reproduce, put an item into the cart on their demo site at https://one-step-checkout-m2.magento-demo.amasty.com/checkout/

Is it possible to modify the image size via CSS, or do I need to override the extension file? I can’t seem to locate that either.

matrices – Eigenvalues of a large “identity” matrix

In the context of the AR(1) model, the following $n \times n$ matrix plays an important role:

$$
V(\rho) = \left( \rho^{|i-j|} \right)_{1 \leq i, j \leq n}, \qquad \rho \in (0, 1).
$$

I am interested in asymptotic properties of the following:

$$
\hat I_n := V(\rho)^{1/2} \, V(\hat\rho)^{-1} \, V(\rho)^{1/2}
$$

where $\hat\rho$ is an estimator of $\rho$.
Intuitively, this matrix is close to the $n \times n$ identity matrix, but the problem is that the size $n$ grows, so we cannot simply write $\hat I_n \to I_n$.
Still, I believe that $\hat I_n$ is close to the identity matrix in some sense; for instance, all the eigenvalues go to one.

Let $0 \leq \lambda^{(n)}_1 \leq \cdots \leq \lambda^{(n)}_n$ be the eigenvalues of $\hat I_n$.
My conjectures are

(1) for fixed $i$, $\lambda^{(n)}_i \overset{p}{\to} 1$;

(2) $\lambda^{(n)}_n \overset{p}{\to} 1$ and $\lambda^{(n)}_1 \overset{p}{\to} 1$;

(3) (hopefully) $\sqrt{n}\,(\lambda^{(n)}_n - 1) \overset{d}{\to} \text{some distribution}$ and $\sqrt{n}\,(\lambda^{(n)}_1 - 1) \overset{d}{\to} \text{some distribution}$.

Are these correct under some conditions? Thanks!
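
For intuition only, here is a small numerical sanity check I ran (not a proof). Note that $\hat\rho$ is simulated here as $\rho$ plus $O(n^{-1/2})$ Gaussian noise, which is purely an illustrative assumption on my part, and $\rho = 0.5$ and the values of $n$ are arbitrary:

# Numerical sanity check of the conjectures (illustrative only).
import numpy as np

def V(rho, n):
    # AR(1) correlation matrix: V[i, j] = rho**|i - j|
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

rng = np.random.default_rng(0)
rho = 0.5
for n in (50, 200, 800):
    rho_hat = rho + rng.normal(scale=n ** -0.5)  # stand-in estimator
    w, Q = np.linalg.eigh(V(rho, n))             # V(rho) = Q diag(w) Q^T
    V_half = Q @ np.diag(np.sqrt(w)) @ Q.T       # symmetric square root
    I_hat = V_half @ np.linalg.inv(V(rho_hat, n)) @ V_half
    lam = np.linalg.eigvalsh(I_hat)
    print(n, lam.min(), lam.max())               # extreme eigenvalues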

statistics – Finding the sample size, given the sample proportion and confidence interval.

A survey is to be carried out with the aim of having the $95\%$ confidence interval for the population proportion equal to, or within, $0.60$ to $0.70$. Taking the sample proportion as $0.65$, find the sample size.

What I have tried so far:

The critical value for a $95\%$ confidence interval is $1.960$, so:

margin of error $= 1.960 \times \sqrt{\frac{0.65(1-0.65)}{n}}$

However, since the margin of error is not given, I’m not sure how to find $n$, the sample size. Am I supposed to set up simultaneous equations given the info about the population proportion being equal to, or within $0.60$ to $0.70$? Any help would be appreciated.
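
Reading the interval as centred on $0.65$, my best guess is that the implied margin of error is half its width, i.e. $(0.70 - 0.60)/2 = 0.05$. Under that assumption the arithmetic would be:

# Assumes the margin of error is half the width of the stated interval.
from math import ceil

p_hat = 0.65
z = 1.960                 # critical value for 95% confidence
E = (0.70 - 0.60) / 2     # assumed margin of error: 0.05

# Solve E = z * sqrt(p_hat * (1 - p_hat) / n) for n
n = (z / E) ** 2 * p_hat * (1 - p_hat)
print(n, ceil(n))         # ~349.6 -> n = 350

Is that the intended reading, or am I missing something?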

statistics – When aiming to roll for a 50/50, does the die size matter?

I noticed that D&D 5e’s Hexblade Warlock subclass feature Armor of Hexes imposes a chance to miss regardless of the attacker’s roll. That chance is based on a d6: on a 4 or higher the attack misses, and on anything else it hits, provided it would otherwise have hit. To my understanding, this is simply a 50/50 roll on the d6 (success on 4, 5, or 6; failure on 1, 2, or 3).

Out of curiosity, does it matter whether the die is a d4, a d8, or even a d100, as long as it has an even number of sides and the split is still 50/50? (On a d4 it would be a success on 3 and 4, on a d10 a success on 6 and up, and so on.)
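
(Writing out the arithmetic behind my intuition: on a fair die with $2k$ faces, succeeding on the top $k$ faces has probability $\frac{k}{2k} = \frac{1}{2}$ for every $k$, so I’d expect the answer to be no, but I may be missing something.)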

Compress The File Size of Your Video for $25


I Will Compress/Reduce The File Size of Your Video!

Have a video with a huge file size and need it reduced?

I’ll compress it and make the file significantly smaller, with virtually no visible loss in quality!

The compressed video will be in MP4 format, which is the most widely compatible format across platforms and devices.

If you need it in any other format, please let me know.

Features:

-Any format

-Up to 80% compression, depending on the source video

-Up to 2 GB file size (for bigger videos, please check my gig extras)


google sheets – Can you sort a column of an imported HTML table in size order? I have tried using QUERY with no success

Using the IMPORTHTML below, I’m hoping to order the numerical data of the third column by size, but it seems I have lost my way from the model I was following. Any help greatly appreciated.

=importhtml("https://fbref.com/en/share/SxTNE","table",0)

=QUERY(IMPORTHTML("https://fbref.com/en/share/SxTNE","table",2),"select * where Col3='Crdy'")