## replacement – Repeated ReplacePart On Each Element of a Square Matrix for Eigenvalue Difference

I have a large $$n\times n$$ square matrix whose elements are all either 0 or 1. I want to see by how much the single largest eigenvalue of the matrix (which Mathematica gives as the first element of the list returned by `Eigenvalues`) changes when I change each element of the matrix individually, so that I can produce a list of this eigenvalue difference for each element. A $$10\times 10$$ matrix would thus give a list of 100 eigenvalue differences. By “changing each element”, I mean switching the element to 1 if it’s a 0, or to 0 if it’s a 1 in the original.

For instance, if the matrix is `{{0, 1, 0}, {1, 0, 1}, {1, 0, 0}}`, I would find the largest eigenvalue of `{{1, 1, 0}, {1, 0, 1}, {1, 0, 0}}`, then that of `{{0, 0, 0}, {1, 0, 1}, {1, 0, 0}}`, then that of `{{0, 1, 1}, {1, 0, 1}, {1, 0, 0}}`, and so on.

I experimented with loops, but since $$i$$ and $$j$$ need to increase independently I couldn’t resolve that issue. I also thought of combining `ReplacePart` and `If`, but I can’t find a neat way of doing this for very large matrices.
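For concreteness, here is the intended computation sketched in Python with NumPy (not the requested Mathematica solution; the function name `flip_differences` is my own, and I use the largest eigenvalue magnitude, matching Mathematica's sorting of `Eigenvalues` output by absolute value):

```python
import numpy as np

def flip_differences(m):
    """For each entry of the 0/1 matrix m, flip it (0 <-> 1) and record
    how much the largest-magnitude eigenvalue changes."""
    base = max(abs(np.linalg.eigvals(m)))
    diffs = []
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            flipped = m.copy()
            flipped[i, j] = 1 - flipped[i, j]   # switch 0 <-> 1
            diffs.append(max(abs(np.linalg.eigvals(flipped))) - base)
    return diffs

m = np.array([[0, 1, 0], [1, 0, 1], [1, 0, 0]])
print(flip_differences(m))  # 9 differences for a 3x3 matrix
```

The double loop makes the spec explicit: one flipped copy per entry, one difference per flipped copy, row-major order.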

## algorithm – Given a positive integer N, what is the minimum positive integer K such that K! is a multiple of the square of N?

I’m pretty new to programming; I’ve been doing some fairly basic stuff, but now I’ve come across a more complicated problem (in C) and I’m not sure how to start with it. To be frank, I don’t understand the definition “K! is a multiple of the square of N”, so I can’t really tell what is expected of me here. If anyone could explain the point to me, I would really appreciate it.

Given a positive integer N, what is the minimum positive integer K such that K! is a multiple of the square of N?

Note that a is a multiple of b if a=b*k for some integer k.
Moreover, note that for any positive integer M, M! is the product of all positive integers whose value is at most M.

Input:

The first line of input contains T, the number of test cases. The following lines describe the test cases.
Each test case consists of one line containing a single integer N.

Output:

For each test case, print a single integer which is the answer for that test case.

Constraints:
1 <= T <= 200000
1 <= N <= 200000

Sample Input:

```
5
4
5
7
11
24
```

Sample Output:

```
8
10
14
22
48
```

Thanks a lot in advance for any tips!
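One thing that may be tripping you up: the sample outputs do not match the literal reading “K! is a multiple of N²” (for N = 4, already 6! = 720 is a multiple of 16, yet the expected answer is 8). They do match K! being a multiple of (N!)², the square of N factorial. A brute-force Python sketch under that interpretation (my own reading of the statement; far too slow for the stated constraints, but useful for checking the samples):

```python
from math import factorial

def min_k(n):
    """Smallest positive K such that K! is a multiple of (N!)^2."""
    target = factorial(n) ** 2
    f, k = 1, 1            # f holds k! as k grows
    while f % target != 0:
        k += 1
        f *= k
    return k

for n in (4, 5, 7, 11, 24):
    print(min_k(n))        # 8, 10, 14, 22, 48, matching the sample
```

A competitive solution would instead compare prime-factorization exponents of K! and (N!)² using Legendre's formula, but this sketch pins down what is being asked.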

## Which “square filter” brand does this hood belong to?

I bought this hood at a flea market some time ago, thinking it was a Cokin one, but it’s actually not compatible. Notice the different notches. So I wonder what it is, so that maybe I can pass it on once I know…

## geometry – Can an n-by-n square lattice grid be a linear projection of the vertices of some high-dimensional convex polytope?

Given an $$n \times n$$ square lattice grid, can it arise as the linear projection of the vertex set of some higher-dimensional convex polytope?

E.g. for $$n=2$$ it is obviously possible: take a cube projected onto one of its faces. But is this true in general, for all $$n \ge 2$$?
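For what it’s worth, here is a sketch of one standard construction (my own addition, offered as a sketch rather than a verified answer) suggesting the answer is yes for every $$n$$: project the vertices of a product of two simplices. Let $$\Delta_{n-1}\subset\mathbb{R}^n$$ be the standard simplex with vertices $$e_1,\dots,e_n$$, and consider

```latex
\[
  P \;=\; \Delta_{n-1}\times\Delta_{n-1}\subset\mathbb{R}^{2n},
  \qquad
  \pi(x,y) \;=\; \Bigl(\sum_{i=1}^{n} i\,x_i,\ \sum_{j=1}^{n} j\,y_j\Bigr),
\]
\[
  \pi(e_i,e_j) \;=\; (i,j)\qquad\text{for } 1\le i,j\le n.
\]
```

Every vertex of $$P$$ is a pair $$(e_i, e_j)$$ of standard basis vectors, and $$\pi$$ is linear, so the vertices of $$P$$ map exactly onto the $$n\times n$$ grid.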

## What do the blue square and the yellow equal sign mean?

I’ve seen this symbol a few times on people’s cars, and since it looks like my favicon I’m curious what it actually stands for. It’s just a blue square with a gold equal sign in the middle; do you know what it is?

## performance tuning – Finding whether a number is a perfect square

I used the following code to find whether a specific number is a perfect square:

```mathematica
Clear["Global`*"];
ParallelTable[
  If[IntegerQ[Sqrt[...]], ..., Nothing], {..., ..., ...}] //. {} -> Nothing
```

Mathematica can do this for machine numbers up to:

```mathematica
$MaxMachineNumber
```

```
1.79769*10^308
```

But my code is way, way too slow. Is there another way to write the code in Mathematica that will be much faster?

I would also accept a recommendation to use another programming language to test whether a number is a perfect square for large values (like $$10^{12}$$ and bigger). I know that `ULLONG_MAX` in C++ can hold values up to $$18446744073709551615$$, but C++ code is also slow for larger values. I also thought about using properties of square numbers in my program, but that means I would still need to compute the values.
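For arbitrarily large exact integers, an integer square root avoids floating-point limits like `$MaxMachineNumber` entirely. A minimal Python sketch (Python's `math.isqrt` operates on unbounded integers, so there is no overflow ceiling):

```python
from math import isqrt

def is_perfect_square(n):
    """Exact test: works for arbitrarily large non-negative integers."""
    if n < 0:
        return False
    r = isqrt(n)          # floor of the exact square root
    return r * r == n

print(is_perfect_square(10**24))        # True  (it is (10^12)^2)
print(is_perfect_square(10**24 + 1))    # False
```

The equivalent exact test in Mathematica would use `IntegerQ[Sqrt[n]]` on exact (non-machine) integers; the point is that the test should stay in integer arithmetic rather than rounding through floats.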

## Is Exit (no square brackets) equivalent to Quit[] for refreshing the Kernel from within an Evaluation Notebook?

I prefer to use `Exit`, as it conveniently requires fewer key presses than `Quit[]`. But before I use it regularly, I need to know whether there are any subtle differences between `Quit[]` and `Exit`. The Wolfram documentation pages for `Quit` and `Exit` appear to be very similar and even call these two functions synonymous, but I just need to be sure.

Thanks.

## nt.number theory – Counting odd entries of Catalan numbers in a square array

The number of odd binomial coefficients in the $$n^{th}$$ row of Pascal’s triangle equals $$2^{s(n)}$$, where $$s(n)$$ denotes the number of $$1$$’s in the $$2$$-ary (binary) expansion of $$n$$.

Let $$C_k=\frac{1}{k+1}\binom{2k}{k}$$ be the familiar Catalan numbers. I’m interested in the enumeration of odd terms inside a square arrangement. In detail,

QUESTION. What is the number $$O_n$$ of odd Catalan numbers $$C_{i+j}$$ with $$0\le i,j\le n-1$$? It amounts to asking: how many entries of the $$n\times n$$ matrix $$M_n=\left(C_{i+j}\right)_{i,j=0}^{n-1}$$ are odd? I believe it equals $$2n-1$$. Can you provide a proof?

NOTE. It’s known that $$\det(M_n)=1$$.
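A quick computational check of the $$2n-1$$ conjecture (a brute-force Python sketch; exact integer arithmetic, so the parities are reliable):

```python
from math import comb

def catalan(k):
    """Exact Catalan number C_k = binomial(2k, k) / (k + 1)."""
    return comb(2 * k, k) // (k + 1)

def odd_count(n):
    """Number of odd entries of the n x n matrix M_n = (C_{i+j})."""
    return sum(catalan(i + j) % 2
               for i in range(n) for j in range(n))

for n in range(1, 9):
    print(n, odd_count(n), 2 * n - 1)   # the last two columns agree
```

This is of course only evidence, not a proof; one natural route to a proof is the classical fact that $$C_k$$ is odd exactly when $$k = 2^m - 1$$, and then counting antidiagonals of $$M_n$$ with such index sums.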

## What is the length of a square in D&D? Like, how many feet is one square?

How far can I move with a speed of 30?

How far can I move with a speed of 25?

How many in-game feet is 1 square?

How do I determine how far I can move?

## approximation – What algorithm do computers use to compute the square root of a number?

If you are not a hardware designer, then you likely use the Newton iteration method. Given an equation $$f(x) = 0$$ and an approximate solution $$x_0$$, Newton iteration calculates a (hopefully) better approximation as $$x_{n+1} = x_n - f(x_n) / f'(x_n)$$. If we rewrite $$x = a^{1/2}$$ as $$x^2 = a$$, i.e. $$x^2 - a = 0$$, then this gives the simple formula $$x_{n+1} = x_n - (x_n^2 - a) / (2x_n) = (x_n + a/x_n)/2$$.
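As a concrete illustration (a plain Python sketch, not production code), the division-based iteration converges in a handful of steps:

```python
def heron_sqrt(a, iterations=6):
    """Newton's method on f(x) = x^2 - a: x <- (x + a/x) / 2."""
    x = a if a >= 1 else 1.0   # crude but safe initial guess
    for _ in range(iterations):
        x = (x + a / x) / 2    # one division per step
    return x

print(heron_sqrt(2.0))   # converges toward 1.41421356...
```

Convergence is quadratic, so the number of correct digits roughly doubles per step once the guess is close.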

This works well with a good initial approximation, but each iteration round includes a division, and divisions are slow. We therefore use a different formula: instead of solving $$x = a^{1/2}$$, we solve $$x = a^{-1/2}$$ and multiply the result by $$a$$, which gives $$a^{1/2}$$ as requested.

We rearrange $$x = a^{-1/2}$$ as $$x^2 = 1/a$$, then $$1/x^2 = a$$, then $$1/x^2 - a = 0$$. Now $$f'(x) = -2/x^3$$, so Newton iteration gives $$x_{n+1} = x_n - (1/x_n^2 - a) / (-2/x_n^3) = x_n + (x_n - a x_n^3)/2 = 1.5\,x_n - (a/2)\,x_n^3$$. This can be calculated using multiplications only, which is much faster than division. On a modern processor that can calculate multiplications in parallel and has a fused multiply-add operation, you calculate $$r = 1.5\,x_n$$, $$s = (a/2)\,x_n$$, $$t = x_n^2$$, then $$x_{n+1} = r - s \cdot t$$.
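The multiplication-only iteration looks like this in code (a Python sketch; here the initial guess `x0` is supplied by the caller, standing in for the hardware approximation a real implementation would use):

```python
def rsqrt_newton(a, x0, iterations=4):
    """Newton's method on f(x) = 1/x^2 - a:
    x <- 1.5*x - (a/2)*x^3, using multiplications only."""
    x = x0
    half_a = 0.5 * a
    for _ in range(iterations):
        x = 1.5 * x - half_a * x * x * x   # no division anywhere
    return x

a = 2.0
inv_root = rsqrt_newton(a, x0=0.7)   # rough guess for 1/sqrt(2) ~ 0.707
print(a * inv_root)                  # a * a^(-1/2) = sqrt(a) ~ 1.4142
```

Note that the iteration only converges when the initial guess is reasonably close; that is exactly why a table-driven hardware approximation is used as the starting point.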

To support this, x86 processors, for example, have a very fast hardware instruction that calculates an approximation to $$a^{-1/2}$$ with a relative error of less than 1/1024, implemented using a lookup table.