## performance – Advice on Database Design

My Attempts:
For simplicity, the models are shown with only their minimum attributes.

• 1st:

- My concern with this approach is the number of FKs in the Numeral and Step models: since only one choice is possible, I would have `choices - 1` unused columns per record.

• 2nd:

- I programmatically check the choice field in the Numeral or the Step, and then use the proper table to fill in the data. This second attempt started as a solution to fix the empty columns of the 1st approach.

- My concern with this approach is that the number of tables grows as 2*n, since for each choice I need to create a corresponding table for Step or Numeral.
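A common middle ground between the two attempts is "class table inheritance": each Step row carries a single FK plus a type tag, and each choice gets a small table that shares the parent's primary key. This avoids both the `choices - 1` empty FK columns and the per-choice duplication of Step/Numeral. A minimal sketch with `sqlite3` (all table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detail (
    id INTEGER PRIMARY KEY,
    type TEXT NOT NULL               -- discriminator: which choice this row is
);
-- one small table per choice, sharing the parent's primary key
CREATE TABLE choice_a (id INTEGER PRIMARY KEY REFERENCES detail(id), payload TEXT);
CREATE TABLE choice_b (id INTEGER PRIMARY KEY REFERENCES detail(id), amount REAL);
CREATE TABLE step (
    id INTEGER PRIMARY KEY,
    detail_id INTEGER NOT NULL REFERENCES detail(id)  -- single FK, never NULL
);
""")

# insert a step whose detail is of type 'a'
conn.execute("INSERT INTO detail (id, type) VALUES (1, 'a')")
conn.execute("INSERT INTO choice_a (id, payload) VALUES (1, 'hello')")
conn.execute("INSERT INTO step (id, detail_id) VALUES (1, 1)")

# resolve a step: read the discriminator, then query the matching choice table
row = conn.execute(
    "SELECT d.type FROM step s JOIN detail d ON d.id = s.detail_id WHERE s.id = 1"
).fetchone()
print(row[0])  # prints: a
```

The number of tables still grows with the number of choices, but Step and Numeral themselves stay fixed, and every row has exactly one FK.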

## performance – Reducing allocations in arbitrary precision calculations with Julia

I am new to Julia and have only written Python before, which is most likely also reflected in my coding style.
Unfortunately, my applications require very high numerical precision which is why I am resorting to `BigFloat` and `BigInt` in Julia. For some reason, this leads to a major increase in allocations besides the obvious increase in computational time.
I am wondering where my code can be improved.

I have attached the current version of my code below with example parameters that allow for execution under 1s.

``````using Combinatorics   # provides doublefactorial
using BenchmarkTools  # provides @btime

function gamma_mpnq(m::Int, p::Int, n::Int, q::Int)::BigFloat
    return sqrt(factorial(big(m)) * factorial(big(n))) / (big(2)^(p + q) * factorial(big(m - 2 * p)) * factorial(big(n - 2 * q)) * factorial(big(q)) * factorial(big(p)))
end

"""
Implementation of the generalized binomial.
"""
function general_binomial(alpha::Float64, k::Int)
    _prod = BigFloat(1.0)
    for kk in 1:k
        _prod *= (alpha - kk + 1.)
    end
    return _prod / factorial(big(k))
end

"""
Construct the matrix M containing all moments <C^n * C*^m> up to nmax and mmax.
"""
function construct_M(nmax::Int, mmax::Int)::Array
    _M = zeros(BigFloat, nmax, mmax)
    for mm in 1:mmax
        for nn in 1:nmax
            if nn == 1
                _M[nn, mm] = 1. / doublefactorial(BigInt(2*mm - 1))
            elseif mm == 1
                _M[nn, mm] = 1. / doublefactorial(BigInt(2*nn - 1))
            else
                _M[nn, mm] = (((nn-1) * _M[nn-1, mm] + (mm-1) * _M[nn, mm-1]) / (2. * (nn-mm)^2 + (nn-1) + (mm-1)))
            end
        end
    end
    return _M
end

"""
Calculate the expectation value of ((1+C)/(1+C*))^((n-m)/2) * C^p * C*^q
using the Maclaurin expansion and the matrix containing all moments M.
"""
function maclaurin_exp(n::Int, m::Int, p::Int, q::Int, M::Array, cutoff::Int)::BigFloat
    inner::BigFloat = 0.0
    @views for nn in 1:cutoff
        for mm in 1:cutoff
            inner += (general_binomial((n-m)/2., mm-1) * general_binomial((m-n)/2., nn-1) * M[q+nn, p+mm])
        end
    end
    return inner
end

"""
Calculates a single entry of the Hmn matrix.
"""
function Hmn(m::Int, n::Int, M::Array, cutoff::Int)::Float64
    ppmax = div(m, 2)
    qqmax = div(n, 2)
    outer = 0.0
    for pp in 0:ppmax
        for qq in 0:qqmax
            outer += (gamma_mpnq(m, pp, n, qq) * maclaurin_exp(n, m, pp, qq, M, cutoff))
            # println(pp, " ", qq, " ", outer)
        end
    end
    return outer
end

dim = 128
cutoff = 64
M = construct_M(dim+cutoff, dim+cutoff)
@btime Hmn(12, 2, M, cutoff)
``````
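The moment recurrence itself can be cross-checked in exact arithmetic. Here is a hedged sketch of `construct_M` in Python using `fractions.Fraction` (not the author's Julia, just a verification of the same recurrence; `doublefactorial` is defined inline):

```python
from fractions import Fraction

def doublefactorial(n):
    """n!! = n * (n - 2) * (n - 4) * ... down to 1 or 2."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def construct_M(nmax, mmax):
    """Exact-arithmetic version of the moment recurrence (1-based nn, mm)."""
    M = [[Fraction(0)] * mmax for _ in range(nmax)]
    for mm in range(1, mmax + 1):
        for nn in range(1, nmax + 1):
            if nn == 1:
                M[nn - 1][mm - 1] = Fraction(1, doublefactorial(2 * mm - 1))
            elif mm == 1:
                M[nn - 1][mm - 1] = Fraction(1, doublefactorial(2 * nn - 1))
            else:
                M[nn - 1][mm - 1] = (
                    ((nn - 1) * M[nn - 2][mm - 1] + (mm - 1) * M[nn - 1][mm - 2])
                    / (2 * (nn - mm) ** 2 + (nn - 1) + (mm - 1))
                )
    return M

M = construct_M(4, 4)
print(M[0][0], M[0][1], M[1][1])  # → 1 1/3 1/3
```

Comparing a few exact entries against the `BigFloat` output is a cheap way to confirm that precision, and not the recurrence, is the source of any discrepancy.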

## performance – Python pynput keyboard + mouse input printer

I needed to have input shown for some things in a game I play on Linux, but Linux doesn’t seem to have any good input display programs. I decided it would be good enough to have Python print to a pair of xterm windows (one for keyboard and one for mouse) using pynput. I use two different Python programs because the only way I found to have one Python program deal with two terminals is to constantly write to a file and use `tail -f`, and I didn’t want to do that.

Because both programs ended up being so small and simple, I tried to write them so that the key/mouse listeners would run their functions as fast as possible. But I might have missed some things that could be done. I’m wondering if anyone knows of any other ways to optimize this for response time. I know it won’t make anything close to a noticeable difference; I’m just asking out of curiosity.

*Edited because I thought of ways to make it faster.*

mouse_listener.py

``````from pynput import mouse

def main():

    print_count = 0

    def on_click(x,
                 y,
                 button,
                 pressed,
                 local_print=print,
                 dashes="--- ",
                 no_dashes="    ",
                 pressed_str=" mouse pressed",
                 released_str=" mouse released"
                 ):
        nonlocal print_count
        if print_count >= 10:
            if pressed is True:
                local_print(dashes + button.name + pressed_str)
                print_count = 0
            else:
                local_print(dashes + button.name + released_str)
                print_count = 0
        else:
            if pressed is True:
                local_print(no_dashes + button.name + pressed_str)
                print_count = print_count + 1
            else:
                local_print(no_dashes + button.name + released_str)
                print_count = print_count + 1

    def on_scroll(x,
                  y,
                  dx,
                  dy,
                  local_print=print,
                  dashes="--- ",
                  no_dashes="    ",
                  up_str="scroll up",
                  down_str="scroll down"
                  ):
        nonlocal print_count
        if print_count >= 10:
            if dy == 1:
                local_print(dashes + up_str)
                print_count = 0
            else:
                local_print(dashes + down_str)
                print_count = 0
        else:
            if dy == 1:
                local_print(no_dashes + up_str)
                print_count = print_count + 1
            else:
                local_print(no_dashes + down_str)
                print_count = print_count + 1

    try:
        listener = mouse.Listener(on_click=on_click, on_scroll=on_scroll)
        listener.start()
        listener.join()
    except KeyboardInterrupt:
        listener.stop()

main()
``````

key_listener.py

``````from pynput import keyboard

def main():

    print_count = 0
    held = set()

    def on_press(key,
                 local_print=print,
                 local_hasattr=hasattr,
                 held_local=held,
                 hold=held.add,  # was missing: hold(key) below needs this binding
                 str_lower=str.lower,
                 dashes="--- ",
                 no_dashes="    ",
                 end_text=" pressed",
                 str_for_hasattr="name"
                 ):
        nonlocal print_count
        if key not in held_local:
            if local_hasattr(key, str_for_hasattr):
                if print_count >= 10:
                    local_print(dashes + key.name + end_text)
                    print_count = 0
                else:
                    local_print(no_dashes + key.name + end_text)
                    print_count = print_count + 1
            else:
                if print_count >= 10:
                    local_print(dashes + str_lower(key.char) + end_text)
                    print_count = 0
                else:
                    local_print(no_dashes + str_lower(key.char) + end_text)
                    print_count = print_count + 1
            hold(key)

    def on_release(key,
                   local_print=print,
                   local_hasattr=hasattr,
                   unhold=held.remove,
                   str_lower=str.lower,
                   dashes="--- ",
                   no_dashes="    ",
                   end_text=" released",
                   str_for_hasattr="name"
                   ):
        nonlocal print_count
        if local_hasattr(key, str_for_hasattr):
            if print_count >= 10:
                local_print(dashes + key.name + end_text)
                print_count = 0
            else:
                local_print(no_dashes + key.name + end_text)
                print_count = print_count + 1
        else:
            if print_count >= 10:
                local_print(dashes + str_lower(key.char) + end_text)
                print_count = 0
            else:
                local_print(no_dashes + str_lower(key.char) + end_text)
                print_count = print_count + 1
        try:  # I don't trust this part
            unhold(key)
        except KeyError:
            pass

    try:  # TypeError is possible because numpad 5's char attribute is None
        listener = keyboard.Listener(on_press=on_press, on_release=on_release)
        listener.start()
        listener.join()
    except (KeyboardInterrupt, TypeError):
        listener.stop()

main()
``````

xterm_opener.sh

``````xterm -xrm 'XTerm.vt100.allowTitleOps: false' -T "keyboard" -geometry 28x16 -e python3 '/home/USERNAME/input_listener/key_listener.py' &
xterm -xrm 'XTerm.vt100.allowTitleOps: false' -T "mouse" -geometry 28x16 -e python3 '/home/USERNAME/input_listener/mouse_listener.py'
``````
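The default-argument trick used above (binding `print` and the format strings as parameter defaults) works because defaults become fast local variables instead of names looked up in globals/builtins on every call. A rough, hedged way to see the effect with only the standard library (absolute numbers vary by machine, and the difference is tiny):

```python
import timeit

def with_global():
    # looks up `len` in globals, then builtins, on every call
    return len("abc")

def with_default(local_len=len):
    # `local_len` is a local variable, bound once at definition time
    return local_len("abc")

t_global = timeit.timeit(with_global, number=200_000)
t_default = timeit.timeit(with_default, number=200_000)
print(f"global lookup: {t_global:.4f}s")
print(f"default-arg:   {t_default:.4f}s")
```

For callbacks fired at human input rates, this is micro-optimization for curiosity's sake, exactly as the post says.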

## time complexity – How to determine which sorting algorithm will yield the optimum performance for an array?

I am presented with the following arrays:

1. ['F', 'E', 'D', 'C', 'B', 'A']
2. ['C', 'A', 'B', 'D', 'E']

And for each array, I am asked to select which sorting algorithm would yield the optimal performance:

1. Insertion Sort
2. Quick Sort
3. Merge Sort

I am somewhat confused as to how to go about solving this problem and, more generally, figuring out which sorting algorithm is optimal for a specific case. I know that insertion sort works well on small arrays and that radix sort is optimized for sorting strings, but how do I know which sorting algorithm is best?
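One way to build intuition (a sketch, not the only criterion) is to instrument a candidate and count comparisons on the exact inputs. Insertion sort on the fully reversed array (1) hits its worst case of n(n-1)/2 comparisons, while on the nearly sorted array (2) it does close to one comparison per element:

```python
def insertion_sort_comparisons(arr):
    """Sort a copy of arr and return (sorted_list, number_of_comparisons)."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one element comparison against key
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

print(insertion_sort_comparisons(['F', 'E', 'D', 'C', 'B', 'A']))  # 15 comparisons (worst case for n=6)
print(insertion_sort_comparisons(['C', 'A', 'B', 'D', 'E']))       # 5 comparisons (nearly sorted)
```

So for array (2) insertion sort is hard to beat, while for array (1) its quadratic behaviour shows even at n=6; at these sizes, though, any of the three algorithms finishes instantly, and the question is really about recognizing input patterns.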

## performance – under 100 line Python pynput keyboard + mouse input printer

I needed to have input shown for some things in a game I play on Linux, but Linux doesn’t seem to have any good input display programs. I decided it would be good enough to have Python print to a pair of xterm windows (one for keyboard and one for mouse) using pynput. I use two different Python programs because the only way I found to have one Python program deal with two terminals is to constantly write to a file and use `tail -f`, and I didn’t want to do that.

Because both programs ended up being so small and simple, I tried to write them so that the key/mouse listeners would run their functions as fast as possible. But I might have missed some things that could be done. I’m wondering if anyone knows of any other ways to optimize this for response time. I know it won’t make anything close to a noticeable difference; I’m just asking out of curiosity.

mouse_listener.py

``````from pynput import mouse

def main():

    print_count = 0

    def on_click(x, y, button, pressed, local_print=print):
        nonlocal print_count
        try:
            if print_count >= 10:
                local_print(f"--- {button.name} mouse {'pressed' if pressed else 'released'}")
                print_count = 0
            else:
                local_print(f"    {button.name} mouse {'pressed' if pressed else 'released'}")
                print_count += 1
        except AttributeError:
            pass

    def on_scroll(x, y, dx, dy, local_print=print):
        nonlocal print_count
        try:
            if print_count >= 10:
                local_print(f"--- {'scroll up' if dy == 1 else 'scroll down'}")
                print_count = 0
            else:
                local_print(f"    {'scroll up' if dy == 1 else 'scroll down'}")
                print_count += 1
        except AttributeError:
            pass

    try:
        listener = mouse.Listener(on_click=on_click, on_scroll=on_scroll)
        listener.start()
        listener.join()
    except KeyboardInterrupt:
        listener.stop()

main()
``````

key_listener.py

``````from pynput import keyboard

def main():

    print_count = 0
    held = set()  # this is needed because it keeps calling on_press when you hold a key

    def on_press(key, local_print=print, held_local=held, hold=held.add, str_lower=str.lower):
        nonlocal print_count
        if key not in held_local:
            try:
                if print_count >= 10:
                    local_print(f"--- {key.name} pressed")
                    print_count = 0
                else:
                    local_print(f"    {key.name} pressed")
                    print_count += 1
            except AttributeError:
                if print_count >= 10:
                    local_print(f"--- {str_lower(key.char) if key.char is not None else 5} pressed")
                    print_count = 0
                else:
                    local_print(f"    {str_lower(key.char) if key.char is not None else 5} pressed")
                    print_count += 1
            hold(key)

    def on_release(key, local_print=print, unhold=held.remove, str_lower=str.lower):
        nonlocal print_count
        try:
            if print_count >= 10:
                local_print(f"--- {key.name} released")
                print_count = 0
            else:
                local_print(f"    {key.name} released")
                print_count += 1
        except AttributeError:
            if print_count >= 10:
                local_print(f"--- {str_lower(key.char) if key.char is not None else 5} released")
                print_count = 0
            else:
                local_print(f"    {str_lower(key.char) if key.char is not None else 5} released")
                print_count += 1
        try:
            unhold(key)
        except KeyError:
            pass

    try:
        listener = keyboard.Listener(on_press=on_press, on_release=on_release)
        listener.start()
        listener.join()
    except KeyboardInterrupt:
        listener.stop()

main()
``````

xterm_opener.sh

``````xterm -xrm 'XTerm.vt100.allowTitleOps: false' -T "keyboard" -geometry 28x16 -e python3 '/home/USERNAME/input_listener/key_listener.py' &
xterm -xrm 'XTerm.vt100.allowTitleOps: false' -T "mouse" -geometry 28x16 -e python3 '/home/USERNAME/input_listener/mouse_listener.py'
``````
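This revision swaps the first version's `hasattr` pre-checks for `try`/`except AttributeError` (EAFP style). As a rough, hedged comparison with only the standard library: when the attribute usually exists, the `try` form tends to be cheaper because no pre-check runs; the class below is just a stand-in for a pynput key object, and exact numbers vary by machine:

```python
import timeit

class Key:
    name = "shift"   # stand-in for a pynput special key with a .name attribute

k = Key()

def lbyl(key=k):
    # "look before you leap": explicit hasattr pre-check, as in the first version
    if hasattr(key, "name"):
        return key.name
    return None

def eafp(key=k):
    # "easier to ask forgiveness": just try, catch the rare failure
    try:
        return key.name
    except AttributeError:
        return None

t_lbyl = timeit.timeit(lbyl, number=200_000)
t_eafp = timeit.timeit(eafp, number=200_000)
print(f"hasattr check: {t_lbyl:.4f}s")
print(f"try/except:    {t_eafp:.4f}s")
```

The trade-off reverses if the exception path is taken often, since raising and catching is far more expensive than a failed `hasattr`.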

## python – What kind of performance analysis tasks can be applied to different database technologies?

I have a data mining algorithm running to find similarity in an information network, and I have the data stored in different implementations, i.e. a relational DB, a graph DB, MongoDB, and TileDB. I load the data by querying each DB and feed it to my algorithm to process the results. Now I want to know which DB is better in my case, in terms of pros and cons and any additional factors.

My question is: how can I carry out the performance analysis on this, other than measuring how long each one takes to produce the results? That is pretty basic.

I want to know: is there any specific method or “framework” I can follow to assess the pros and cons of these DBs, and to be able to back up the results, perhaps through performance analysis graphs?

The development environment is Python!

Please don’t hesitate to go into details, I would really appreciate your help!
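A minimal starting point is a harness that records wall-clock statistics per backend for the same workload, so the numbers can later be charted and compared fairly. This is a sketch: the `load_from_*` functions are hypothetical stand-ins for the real queries against each backend.

```python
import time
import statistics

def benchmark(name, load_fn, repeats=5):
    """Run load_fn several times and collect simple wall-clock statistics."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        load_fn()
        durations.append(time.perf_counter() - start)
    return {
        "backend": name,
        "mean_s": statistics.mean(durations),
        "stdev_s": statistics.stdev(durations),
        "runs": repeats,
    }

# hypothetical stand-ins for the real queries against each backend
def load_from_relational():
    return [i for i in range(10_000)]

def load_from_mongo():
    return [{"id": i} for i in range(10_000)]

results = [
    benchmark("relational", load_from_relational),
    benchmark("mongodb", load_from_mongo),
]
for r in results:
    print(r)
```

Beyond latency, the same loop can be repeated at several data sizes to chart scalability, and paired with `tracemalloc` for peak memory; qualitative factors (query expressiveness, operational overhead) still have to be assessed separately.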

## mysql – Does a huge key length value for a multibyte column affect the index performance?

When I look at the `EXPLAIN` results, the `key_len` value is always calculated from the declared column length multiplied by the maximum number of bytes per character for the chosen encoding. Say, for a `varchar(64)` using `utf8` encoding, the `key_len` is 192.

Does this number affect performance in any way, and should I reduce it when possible? I mean, does it make MySQL reserve some space somewhere that remains unused, or is it just a maximum possible value while the used space is based on the exact data length?

So the actual question is: if I have a column that contains only Latin letters and numbers, should I change its encoding from `utf8` to `latin1` with regard to the space occupied by the index / overall index performance?
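The distinction between declared and actual size is easy to see at the encoding level: for ASCII-only data, `utf8` stores one byte per character, the same as `latin1`. The factor of 3 is the per-character worst case that `key_len` reports for `utf8` (utf8mb3), not what ASCII rows actually occupy. A hedged sketch of the arithmetic:

```python
text = "Product42"  # Latin letters and digits only

utf8_bytes = text.encode("utf-8")
latin1_bytes = text.encode("latin-1")

# for pure ASCII content, both encodings use one byte per character
print(len(text), len(utf8_bytes), len(latin1_bytes))  # → 9 9 9

# the worst case per character differs: utf8mb3 assumes up to 3 bytes/char,
# which matches EXPLAIN's key_len of 64 * 3 = 192 for varchar(64)
print(64 * 3)  # → 192
```

Whether the index on disk reserves the worst case or only the actual bytes depends on the storage engine and index internals, so the byte arithmetic above answers the data-size part of the question, not the MySQL-internal one.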

## performance tuning – Pair-wise equality over large sets of large vectors

I’ve got an interesting performance tuning/algorithmic problem that I’m running into in an optimization context.

I’ve got a set of ~16-50 lists of integers (usually in `Range[0, 5]` but not restricted to that).
The data might look like this (although obviously not random):

``````maxQ = 5;
ndim = 16;
nstates = 100000;
braVecs = RandomInteger[{0, maxQ}, {ndim, nstates}];
ketVecs = braVecs + RandomInteger[{0, 1}, {ndim, nstates}];
``````

Now, for every element `q ∈ Subsets[Range[ndim], 4]`, I need to determine where each pair of `braVecs` and `ketVecs` columns is the same except for the indices in `q`; i.e., for every possible `q` I need to calculate this:

``````qComp = Complement[Range[ndim], q];
diffs = braVecs[[qComp]] - ketVecs[[qComp]];
Pick[Range[nstates], Times @@ (1 - Unitize[diffs]), 1]
``````

Just as an example, this is the kind of thing I expect to get out

``````q = RandomSample[Range[ndim], 4];
qComp = Complement[Range[ndim], q];
diffs = braVecs[[qComp]] - ketVecs[[qComp]];
{q, Pick[Range[nstates], Times @@ (1 - Unitize[diffs]), 1]}

{{2, 9, 6, 4}, {825, 1993, 5577, 5666, 9690, 9856, 11502, 13515, 15680, 18570,
19207, 23131, 26986, 27269, 31889, 39396, 39942, 51688, 52520, 54905, 55360,
60180, 61682, 66258, 66458, 68742, 71871, 78489, 80906, 90275, 91520, 93184}}
``````

This can obviously be done just by looping, but I’m sure there is an algorithmically more efficient way to do this, maybe using a trie or some Boolean algebra to reuse prior calculations. This is important because my `ndim` can get up to ~50 and there is a huge number of elements in `Subsets[Range[50], 4]`… I just don’t know what the best way to approach this kind of thing is.
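One way to reuse work across all choices of `q` (a Python sketch of the idea, not Mathematica) is to precompute, once per dimension, a bitmask of the states where bra and ket agree in that dimension. Then each `q` only needs a bitwise AND of the `ndim - 4` complement masks, instead of re-comparing the raw vectors; the toy data and sizes below are made up for illustration:

```python
from itertools import combinations

# toy data: 6 dimensions, 8 states (columns), entries 0..5
ndim, nstates = 6, 8
bra = [[0, 1, 2, 3, 4, 5, 0, 1],
       [1, 1, 1, 1, 1, 1, 1, 1],
       [2, 0, 2, 0, 2, 0, 2, 0],
       [3, 3, 3, 3, 0, 0, 0, 0],
       [4, 4, 0, 0, 4, 4, 0, 0],
       [5, 0, 5, 0, 5, 0, 5, 0]]
ket = [row[:] for row in bra]
ket[0][2] = 9          # state 2 now differs from bra in dimension 0 only

# precompute, once, a bitmask per dimension: bit s is set iff
# bra and ket agree in dimension i at state s
eq = [sum(1 << s for s in range(nstates) if bra[i][s] == ket[i][s])
      for i in range(ndim)]

def matches(q):
    """States equal in every dimension outside q, via bitwise AND of masks."""
    mask = (1 << nstates) - 1
    for i in range(ndim):
        if i not in q:
            mask &= eq[i]
    return [s for s in range(nstates) if mask >> s & 1]

for q in list(combinations(range(ndim), 4))[:1]:
    print(q, matches(q))  # → (0, 1, 2, 3) [0, 1, 2, 3, 4, 5, 6, 7]
```

The per-dimension comparisons are shared by every `q`, and the AND over packed words is cheap even for large `nstates`; grouping the subsets so that consecutive `q` share most of their complement would allow further reuse of partial ANDs.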

## magento2 – This is not implemented, as it is not possible to implement Argon2i with acceptable performance in pure-PHP

This error occurs with Magento 2.4.2 when PHP does not include the `sodium` PHP extension.

You can check if this extension is installed with

``````php -i | grep sodium
sodium support => enabled
``````

To fix this problem, install the `sodium` PHP extension.

NOTE: the supported PHP version for Magento 2.4 is PHP 7.4.

## performance tuning – all rooted subgraphs of size \$k\$ in the grid graph

I would like to compute all rooted subgraphs of size $$k$$ in the grid graph. I use the following approach: I start from the root, then traverse the graph along edges in all possible ways until I have reached $$k-1$$ other vertices.

``````g = GridGraph[{10, 10}, VertexLabels -> "Name"];

nextStep[curVer_, verl_] := Module[{},
  DeleteCases[VertexList[NeighborhoodGraph[g, verl, 1]],
   Alternatives @@ {verl, curVer}]
  ]

root = 36;
k = 4;
y = {{root, nextStep[{}, root]}}; (* start the frontier from the root's neighbours *)
resul = y;
Table[
 y2 = Table[{#, nextStep[y[[i]][[1]], #]} & /@ y[[i]][[2]], {i, 1,
   Length[y]}];
 resul = AppendTo[resul, y2];
 y = Flatten[y2, 1];, {k - 1}]
``````

I have two questions:

1. Any suggestions on how to calculate all possible walks?
2. How can I efficiently calculate the walks from my `resul`?
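For comparison, here is a hedged Python sketch of a standard recursive enumeration of the connected (induced) size-$$k$$ subgraphs containing a fixed root in a grid graph. Rather than collecting walks (which revisit the same vertex set many times), it branches on whether each frontier vertex is included or permanently excluded, so every vertex set is produced exactly once:

```python
def grid_neighbors(v, rows, cols):
    """Neighbors of vertex v in a rows x cols grid graph (vertices 0..rows*cols-1)."""
    r, c = divmod(v, cols)
    nbrs = []
    if r > 0:        nbrs.append(v - cols)
    if r < rows - 1: nbrs.append(v + cols)
    if c > 0:        nbrs.append(v - 1)
    if c < cols - 1: nbrs.append(v + 1)
    return nbrs

def rooted_subgraphs(root, k, rows, cols):
    """Enumerate every connected vertex set of size k containing root, once each."""
    results = []

    def rec(sub, ext, forbidden):
        if len(sub) == k:
            results.append(frozenset(sub))
            return
        if not ext:
            return
        w = next(iter(ext))
        # branch 1: include w, extending the frontier by w's unseen neighbors
        new_ext = (ext - {w}) | {u for u in grid_neighbors(w, rows, cols)
                                 if u not in sub and u not in forbidden}
        rec(sub | {w}, new_ext - sub - {w}, forbidden)
        # branch 2: exclude w from this entire subtree
        rec(sub, ext - {w}, forbidden | {w})

    rec({root}, set(grid_neighbors(root, rows, cols)), {root})
    return results

subs = rooted_subgraphs(0, 3, 2, 2)  # 2x2 grid, rooted at a corner
print(sorted(sorted(s) for s in subs))  # → [[0, 1, 2], [0, 1, 3], [0, 2, 3]]
```

The include/exclude branching is what keeps the output duplicate-free, which is the main inefficiency of accumulating raw walks in `resul` and deduplicating afterwards.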