algorithms – Maximum sum path in a matrix

As Pal has commented, this is a dynamic-programming question. I will now present a dynamic-programming solution, but I highly recommend trying it on your own first and checking this answer afterwards.

The algorithm:

  1. Create a new empty matrix $V$ of size $N \times N$
  2. for $1 \le i \le N$:
    1. for $1 \le j \le N$:
      1. Set $V(i, j) = M(i,j) + \max(V(i-1, j), V(i, j-1))$ (note that $V(0,j) = V(i,0) = 0$)
  3. Find $m = \max_{i,j} V(i,j)$
  4. Count the number of times $m$ appears in $V$
  5. Return the maximum sum $m$ and the number of times it occurs (as you have counted it)
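
A minimal Python sketch of this procedure (my own illustration; M is assumed to be given as a list of N lists of numbers):

def max_path_sum_and_count(M):
    N = len(M)
    # V carries an extra row and column of zeros, standing in for the
    # boundary values V(0, j) = V(i, 0) = 0 in the algorithm above.
    V = [[0] * (N + 1) for _ in range(N + 1)]
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            # Best path sum ending at (i, j): arrive from above or from the left.
            V[i][j] = M[i - 1][j - 1] + max(V[i - 1][j], V[i][j - 1])
    cells = [V[i][j] for i in range(1, N + 1) for j in range(1, N + 1)]
    m = max(cells)               # step 3: the maximum sum
    return m, cells.count(m)     # step 4: how many times it occurs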

Complexity:

The algorithm takes $O(N^2)$ time (which is $O(k)$ if we let $k$ be the input size), since it fills a new $N \times N$ matrix, doing $O(1)$ work per cell.

Notice that the algorithm's space complexity is also $O(N^2)$, as we were required to create a whole new matrix.

Asymptotic behavior of maximum of Bessel function

Let $J_n$ be the Bessel function of the first kind. Let $J_n^{(\max)} = \max_{x>0} J_n(x)$. What is known about the asymptotic behavior of $J_n^{(\max)}$ at large $n$? Specifically, I am looking for a lower bound. It is OK if the result only holds for integer and half-integer $n$.

(It is potentially helpful to note that $J_n^{(\max)} = J_n(j'_n)$, where $j'_n$ is the smallest positive zero of $J_n'$, i.e. the global maximum of $J_n$ occurs at the "first" local maximum.)
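
One possibly useful observation (my own note, not part of the original question): the maximum over $x > 0$ is in particular at least the value at $x = n$, and the classical transition-region asymptotic for $J_n(n)$ then already gives a lower bound of order $n^{-1/3}$:

$$
J_n^{(\max)} \ge J_n(n) \sim \frac{2^{1/3}}{3^{2/3}\,\Gamma(2/3)}\, n^{-1/3} \approx 0.4473\, n^{-1/3} \qquad (n \to \infty).
$$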

sql server – Maximum size transaction log and impact

I was doing some research on the recommended maximum size for the transaction log, but I cannot find any substantiated or documented answer. I learned that a log size of about 25-50 percent of the database is not uncommon. I also know that you want to keep the number of VLFs within recommended limits and must tune autogrowth to prevent VLF fragmentation, which can slow down your queries and log backups.

The only thing I cannot find is what a recommended maximum size would be in relation to your database size, or possibly your workload. Of course, you probably don't want your log file to grow beyond the database size, but what would be the exact impact? Obviously, log backups will take longer if the log file is huge and, say, 80 percent full.

But what if I have a database of 1 TB and a log file of (hypothetically 🙂) 2 TB, which is 10 percent full? I can imagine that log backups will take longer, perhaps because the log backup has to scan the complete log? And if your VLFs are not fragmented, then there is not really a problem, right?

Log backups are critical to SQL Server, so let's say you create a 2 TB log disk and make the log file 500 GB. Now the log backup fails for a few hours, and the log fills to 1.5 TB. Your next log backup will take longer, much longer, sure. We take a performance hit from the autogrowth events, yes, but we made sure the database does not stop. In such a scenario, once the backup is fixed and the first log backup has run, the log fill is low again, but the log file is unusually large. Now, I cannot think of any critical reason to shrink the log file back to its normal size (let's say that 500 GB). Of course, you don't want to do that during production hours due to locking. However, I cannot imagine that there would be absolutely no impact from such a huge log file (which would be mainly empty).

Why does NMaximize miss this global maximum?

I am having trouble maximizing a function which appears as a curvature of a planar curve.

{tmin, tmax} = {0, 2 Pi}

f = -((6 - 3 Cos[t] - Cos[3 t])/((-11 + 6 Cos[t] + 8 Cos[2 t] - 6 Cos[3 t] + Cos[4 t])
  Sqrt[Cos[t]^2 + 9 Sin[t]^2 - 12 Cos[t] Sin[t]^2 + 4 Cos[t]^2 Sin[t]^2]));

NMaximize[{f, tmin <= t <= tmax}, t]

says that the maximum of $f$ is attained at

{1.37888, {t -> 5.78352}}

But,

Plot[f, {t, tmin, tmax}, PlotRange -> Full]

plot of f

indicates that the true maximum is attained at $t=\pi$.

Why is this happening?
I’m using Mathematica version 12.0.0 for Microsoft Windows (64-bit).

visas – Taiwan maximum stay rules with reentry

As a US passport holder, you’re “visa-exempt” and will generally be granted 90 days on arrival, no questions asked:

The nationals of the following countries are eligible for the visa
exemption program, which permits a duration of stay up to 90 days: …
U.S.A. …

Now, making a quick visit to another country for the sole purpose of renewing your visa is known as a “visa run”. Most countries will get suspicious if you do this too often, but fortunately, anecdotal evidence for Taiwan seems to indicate that they don’t care:

According to experiences on this forum, you can continue to do this as
long as you like. Of course, like anything, Immigration has the final
say, but as long as you keep your nose clean (ie. don’t overstay your
visa, don’t commit any felonies, etc.) you should be fine.

(courtesy “Steve4nLanguage” on Forumosa, the definitive Taiwan forum)

So the answers to your questions appear to be:

  1. No
  2. No

All that said, I wouldn't rely on this for more than a few renewals. If you're planning on staying in Taiwan for a longer time, you'd definitely be best off working out some sort of "real" visa that actually allows you to work legally.

algorithms – Maximum Circular Subarray sum

This is a question from the book Algorithms by Jeff Erickson.

Suppose A is a circular array. In this setting, a "contiguous subarray" can be either an interval A[i .. j] or a suffix followed by a prefix A[i .. n] · A[1 .. j]. Describe and analyze an algorithm that finds a contiguous subarray of A with the largest sum.
The input array consists of real numbers.

My approach:

There are two possibilities:

1.) No wrapping around (we can use Kadane's algorithm).

2.) Wrapping around, i.e., the starting index of the subarray is greater than the ending index. (I have a doubt in this case.)

Now return the maximum of the two cases as the result.

For the second case, searching the internet turned up an approach, but without a proof.

The approach is to find the subarray with minimum sum and subtract it from the sum of the entire array (i.e., sum(A) - min_subarray_sum(A)). How is this solution correct?

Link for the method used in the second case: https://www.geeksforgeeks.org/maximum-contiguous-circular-sum/
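
For concreteness, here is a minimal Python sketch of the two-case approach described above (my own illustration, not from the book or the linked page):

def max_subarray(a):
    # Kadane: maximum sum of a non-empty contiguous subarray.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_circular_subarray(a):
    # Case 1: no wrapping around -- plain Kadane.
    straight = max_subarray(a)
    # Case 2: wrapping around. The elements NOT taken form a contiguous
    # "middle" subarray, so maximizing the wrapped sum is the same as
    # minimizing that middle: sum(a) - min_subarray_sum(a).
    min_sum = -max_subarray([-x for x in a])
    wrapped = sum(a) - min_sum
    # If every element is negative, the minimum-sum subarray is the whole
    # array and "wrapped" would describe an empty subarray; fall back.
    if wrapped == 0 and straight < 0:
        return straight
    return max(straight, wrapped)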

algorithms – Path with maximum coins in directed graph

You can solve your problem in linear time with respect to the size of your graph $G$.

Start by computing, in $O(n + m)$ time, the strongly connected components $C_1, C_2, \dots, C_h$ of $G$.
Next, construct a new graph $G' = (V', E')$ from $G$ by contracting each strongly connected component $C_i$ into a single vertex $v$ having $k(v) = \sum_{j \in C_i} k_j$ coins. Then, compute a topological order $v_1, v_2, \dots, v_h$ of the vertices in $G'$. This requires time $O(h + |E'|) = O(n + m)$.

Consider the vertices in reverse topological order and compute, for each vertex $v_i$, the maximum number of coins $\eta(v_i)$ that can be collected starting from $v_i$. This can be done as follows: if $v_i$ has no outgoing edges in $G'$ then $\eta(v_i) = k(v_i)$, otherwise:
$$
\eta(v_i) = k(v_i) + \max_{(v_i, v_j) \in E'} \eta(v_j).
$$

This requires time proportional to the number $\delta(v_i)$ of outgoing edges of $v_i$. The overall time required to compute all values $\eta(v_i)$ is then $O\left(h + \sum_{i=1}^{h} \delta(v_i)\right) = O(h + |E'|) = O(n + m)$.

Finally, return $\max_{i = 1, \dots, h} \eta(v_i)$.
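
A compact Python sketch of the whole procedure, assuming the networkx library for the SCC and topological-order steps (the 'coins' node attribute is my own convention, standing in for the $k_j$ values):

import networkx as nx

def max_coins(G):
    # Contract each strongly connected component into one vertex of the
    # condensation DAG C; C.nodes[c]['members'] lists the original vertices.
    C = nx.condensation(G)
    coins = {c: sum(G.nodes[v]['coins'] for v in C.nodes[c]['members'])
             for c in C}
    # Process components in reverse topological order, so every successor's
    # value eta[d] is already known when it is needed.
    eta = {}
    for c in reversed(list(nx.topological_sort(C))):
        succ = [eta[d] for d in C.successors(c)]
        eta[c] = coins[c] + (max(succ) if succ else 0)
    return max(eta.values())

For example, if vertices 1 and 2 (worth 2 and 3 coins) form a cycle with an edge to vertex 3 (worth 1 coin), the components are {1, 2} with 5 coins and {3} with 1 coin, and the sketch returns 6.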

Increase Maximum In-Call Volume – Earpiece and Speaker

I would like to increase the maximum in-call volume for the speaker and earpiece during calls.

The Huawei P40 Pro is capable of much higher volumes during media playback, but the volume seems to be capped at a lower level during calls.

Using the SetEdit Android app, I can see the following settings-database entries:

  • “volume_voice” “12”
  • “volume_voice_earpiece” “13”
  • “volume_voice_speaker” “15”

If I edit these entries, will that increase the maximum in-call volume? If so, to what value?

I have tried ‘volume booster’ apps but they do not affect in-call volumes.

Graphics card – Upgraded the power supply by 1.25× and the PC still shuts off at maximum load

Tasks that put the CPU and GPU under serious pressure, such as VR gaming, cause my computer to shut down abruptly. For a while I lived with the CPU capped at 80% usage. That worked around the problem, but of course I didn't pay for that CPU to have it throttled. I decided the problem was that my power supply (HEC600TC5WK) was triggering its overcurrent protection, so I upgraded to a Corsair RM750x (750 W). It didn't fix anything: during stress tests I get the same shutdowns just as easily (the power goes out with a loud relay clang in the power supply). So I'm pretty desperate now. I have tried different wall sockets, etc. I've heard that Asus motherboards have buggy surge protection, but it seems impossible to turn off.

  • ASUS PRIME Z370-P
  • Intel Core i7-8700K (not overclocked)
  • NVIDIA GeForce GTX 1080 Ti
  • Corsair RM750x

Windows – Firefox memory leak? Memory in Task Manager rises far higher than Firefox itself reports (about:performance) until memory is exhausted / the machine freezes. 2 GB vs. 8-14 GB+

I've had this issue on multiple Windows machines with 16 GB to 32 GB of RAM: Firefox's memory usage as reported in Task Manager keeps increasing, regardless of what Firefox's own memory reporting shows. The total memory usage of all tabs and add-ons in about:performance never exceeds 2-3 GB, while my entire computer freezes every few days and forces me to end the Firefox process, which sometimes suddenly frees up 12 to 14 GB of memory.

I know add-ons can be a culprit, but Firefox's memory reporting doesn't show anything strange about the add-ons. There are 5 to 10 Firefox processes in Windows Task Manager at times, and the process that uses the most memory corresponds to the total reported in about:performance. For example, the primary process can report 2-3 GB, while the 9 other processes use between 200 and 700 MB each. When I have to resort to force-killing processes, terminating some Firefox processes often doesn't seem to close or affect the browser in any way, while terminating other processes closes all of them.

For what it's worth, I use script and ad blockers by default to prevent scripts from running on malicious websites. I speculate that this might have something to do with YouTube buffering, as memory sometimes increases after loading YouTube videos and seems to remain elevated even after the tabs are closed, but this is just speculation.

Edit: I also tried triggering Firefox's garbage collection / cycle collection (about:memory), and it doesn't seem to affect the memory usage reported in Task Manager. So far, the only fix I've found for the excess memory usage is to shut down all my Firefox processes every day or two.