nt.number theory – Explanation of the dependence of the solvability of a system of linear equations on a number-theoretic property

The origin of this question is that I found a way of "peeling off" vertex weights from a weighted $K_n$; i.e., if one assumes that the weight $w_{ij}$ of the edge $e_{ij}$ can be expressed as $\pi_i + \omega_{ij} + \pi_j$ with vertex potentials $\pi_i$ and $\pi_j$, is it then possible to recover $\omega_{ij}$?
I was then curious whether the vertex potentials themselves can be recovered, and to that end I set up a system of linear equations obtained from the edges of a Hamiltonian cycle.
After a suitable renaming of the vertices, the equations have the form
$$\pi_i + \pi_{i+1} = w_{i,i+1} - \omega_{i,i+1}, \quad 0 \le i \le n-1, \quad \text{indices taken modulo } n.$$

To my great surprise, it turned out that the determinant of the associated matrix satisfies
$$\begin{vmatrix}
1 & 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & \cdots & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & \cdots & 0 & 0 & 0 & 0 & 1
\end{vmatrix} = 0 \iff n \equiv 0 \pmod{2}$$
This means that the unknown vertex potentials can only be determined if their number is odd.

After some reflection on this observation, it became clear that the determinant of the analogous matrices with $m$ consecutive $1$s instead of two, cyclically shifted one position to the right from one row to the next, appears to be nonzero exactly when $m$ and $n$ are relatively prime.
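For what it's worth, the observation is easy to probe numerically. Below is a minimal Mathematica check (the helper name circulantOnes is mine): it builds the $n \times n$ circulant matrix whose rows contain $m$ consecutive $1$s, each row shifted one position to the right, and compares the vanishing of the determinant with $\gcd(m, n) > 1$.

circulantOnes[n_, m_] := Table[If[Mod[j - i, n] < m, 1, 0], {i, 0, n - 1}, {j, 0, n - 1}];

(* each entry lists {n, m, determinant vanishes?, gcd(m, n) > 1?}; the last two should always agree *)
Flatten[Table[{n, m, Det[circulantOnes[n, m]] == 0, GCD[m, n] > 1}, {n, 2, 9}, {m, 2, n}], 1]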

Question:

Is there a non-trivial explanation for the phenomenon that the solvability of closely related problems, whose only noticeable difference is the number $n$ of equations, depends on number-theoretic properties of the relation between $n$ and a fixed set of parameters that are independent of $n$?
The arguments I have in mind would be based only on the "original" formulation of the problem and not on its "translation" into a system of linear equations.
Ideally, the arguments would be understandable without any background in linear algebra.

Linear Algebra – How can the Newell method for determining a plane equation be used to test for degenerate inputs?

The Newell method for obtaining a plane equation $$ax + by + cz + d = 0$$ through $n$ points determines the coefficients as

$$a = \sum_{i=0}^{n} (y_i - y_{i+1})(z_i + z_{i+1})$$
$$b = \sum_{i=0}^{n} (z_i - z_{i+1})(x_i + x_{i+1})$$
$$c = \sum_{i=0}^{n} (x_i - x_{i+1})(y_i + y_{i+1})$$
$$d = -\frac{1}{n} \sum_{i=0}^{n} V_i \cdot [a, b, c]^T$$

where the point-index addition is performed modulo $n$.

See the full description here.

How can this method be used (without further extensive calculations) to check for the following degenerate inputs? (A small sketch follows the list.)

  • All points are collinear,
  • All points are identical,
  • Some points do not lie on a common plane. (I realize the method is designed to cope with this, but it would be nice to have a metric that quantifies how well the points fit a plane.)
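For reference, here is a minimal Mathematica sketch of the method as described above (newellPlane is my own helper name; pts is assumed to be an $n \times 3$ list of points):

newellPlane[pts_] := Module[{nxt = RotateLeft[pts], a, b, c},
  (* the three sums above; RotateLeft supplies point i+1 with indices mod n *)
  a = Total[(pts[[All, 2]] - nxt[[All, 2]]) (pts[[All, 3]] + nxt[[All, 3]])];
  b = Total[(pts[[All, 3]] - nxt[[All, 3]]) (pts[[All, 1]] + nxt[[All, 1]])];
  c = Total[(pts[[All, 1]] - nxt[[All, 1]]) (pts[[All, 2]] + nxt[[All, 2]])];
  (* d = -(centroid of the points) . {a, b, c} *)
  {a, b, c, -Mean[pts] . {a, b, c}}]

One observation that may help: $\{a, b, c\}$ is (twice) the area-weighted normal of the polygon, so a norm of $\{a, b, c\}$ near zero flags the collinear and identical-points cases, while the residuals of the points against the resulting plane give a coplanarity metric. I would treat this only as a starting point, not a definitive test.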

Are double cosets of cyclic subgroups separable in a special linear group?

Let $A, B \in \mathrm{SL}_3(\mathbb{Z})$ and set
$$S = \langle A \rangle \cdot \langle B \rangle = \{A^m B^n : m, n \in \mathbb{Z}\}.$$

Is $S$ closed in the profinite topology on
$\mathrm{SL}_3(\mathbb{Z})$?

Equivalently (using the congruence subgroup property): I ask whether, for each $C \in \mathrm{SL}_3(\mathbb{Z})$ for which $$C \equiv A^{m_k} B^{n_k} \pmod{k}$$ holds for every $k$, we necessarily have $C = A^m B^n$ for some $m, n \in \mathbb{Z}$.

Complexity Theory – Determine whether a system of $n$ linear equations has solutions in $\{0, 1\}^n$ in polynomial time

I am trying to determine whether it is possible to decide in polynomial time if a system of $n$ linear equations with integer coefficients and $n$ variables has a solution in $\{0, 1\}^n$.

In addition, all coefficients of $A$ are in $\{-1, 0, 1\}$, but I could not find a way to use that.

The trivial case is when the matrix $A$ is invertible: the system then has exactly one solution, and it is easy to check whether all of its entries belong to $\{0, 1\}$.

But if the system has infinitely many solutions, with $k$ free variables, I cannot find anything better than checking all $2^k$ possibilities.
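To make the two cases concrete, here is a minimal Mathematica sketch (solve01 is a made-up name; exact integer arithmetic assumed). Note that the fallback branch is exactly the exponential search I would like to avoid:

solve01[A_, b_] := Module[{n = Length[b], x},
  If[Det[A] != 0,
   (* invertible case: check the unique solution entrywise *)
   x = LinearSolve[A, b];
   If[AllTrue[x, MemberQ[{0, 1}, #] &], x, None],
   (* singular case: brute force over all 2^n candidate vectors *)
   SelectFirst[Tuples[{0, 1}, n], A . # == b &, None]]]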

  • Do you know an algorithm to do this in polynomial time?

I also tried a reduction from SAT (or a variant with $n$ clauses and $n$ variables in each clause, to show that the problem is NP-complete), but because we have $Ax = b$ and not $Ax \geq b$, I could not make that work either.

  • Do you have a reduction showing that this problem is NP-complete?

Linear algebra – A new generalization of dimension?

During my research, I came across the following thoughts:

Definition 1: A structure $S$ is a pair $(X, T)$, where $X$ is a set and $T$ is a set of subsets of $X$ that is stable under arbitrary intersections.


Definition 2: For $(u_1, \ldots, u_n) \in X^n$ we denote
$\langle u_1, \ldots, u_n \rangle_S = \bigcap\limits_{F \in T,\ \{u_1, \ldots, u_n\} \subset F} F$


Definition 3: We say that a structure $S = (X, T)$ has a dimension if:

$\forall n \in \mathbb{N}^*$, $\forall (u_1, \ldots, u_n) \in X^n$, with $A = \langle u_1, \ldots, u_n \rangle_S$,

for all $(v_1, \ldots, v_{n+1}) \in A^{n+1}$ there is $i \in \{1, \ldots, n+1\}$ such that $v_i \in \langle v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_{n+1} \rangle_S$.


Question: Does this generalization of dimension already exist?

PS: With this definition, a set $E$ equipped with the structure $(E, \mathcal{P}(E))$ has a dimension.

Check if an exam question was answered correctly (lin. alg. – linear maps)

Let $L$ be a linear mapping such that $\langle L(x), L(y) \rangle = \langle x, y \rangle$, where $\langle \cdot , \cdot \rangle$ is an inner product.

Prove that L is an isomorphism.

To prove that it is one-to-one, I wrote that if $x \in \ker(L)$, then $L(x) = \vec{0}$ and

$\langle x, x \rangle = \langle L(x), L(x) \rangle = \langle \vec{0}, \vec{0} \rangle = 0$

So $x$ is the zero vector, but I think I may have just assumed that $L$ was one-to-one in this proof.

Can someone confirm this? Many thanks!

linear algebra – accelerate tensor contraction (matrix product states)

I have recently been trying to implement some tensor contractions in Mathematica, for use in matrix product state algorithms.

Here is the operation I want to perform

$$
M^{\sigma_i \sigma_i'}_{(b_{i-1}, a_{i-1}), (b_i, a_i)} = \sum_{\sigma_i''} T^{\sigma_i \sigma_i''}_{b_{i-1} b_i} W^{\sigma_i'' \sigma_i'}_{a_{i-1} a_i}
$$

Here is what I coded up, which correctly produces what I want:

n = 10;

(* W: one 2 x 2 x (bond) x (bond) tensor per site; bond dimensions taper at the ends *)
W = RandomComplex[1 + I, {n, 2, 2, 64, 64}];

W[[1]] = RandomComplex[1 + I, {2, 2, 1, 4}];
W[[2]] = RandomComplex[1 + I, {2, 2, 4, 16}];
W[[3]] = RandomComplex[1 + I, {2, 2, 16, 64}];
W[[n - 2]] = RandomComplex[1 + I, {2, 2, 64, 16}];
W[[n - 1]] = RandomComplex[1 + I, {2, 2, 16, 4}];
W[[n]] = RandomComplex[1 + I, {2, 2, 4, 1}];

T = RandomComplex[1 + I, {n, 2, 2, 2, 2}];
T[[1]] = RandomComplex[1 + I, {2, 2, 1, 2}];
T[[n]] = RandomComplex[1 + I, {2, 2, 2, 1}];

M = Table[0, {k, 1, n}, {i, 1, 2}, {j, 1, 2}];

Do[
   Do[
    temp[k, i, j] =
     Sum[KroneckerProduct[T[[k, i, l]], W[[k, l, j]]], {l, 1, 2}];
    , {i, 1, 2}, {j, 1, 2}];
   Do[
    M[[k, i, j]] = temp[k, i, j];
    , {i, 1, 2}, {j, 1, 2}];
   , {k, 1, n}] // RepeatedTiming // First

Out[47]= 0.0041

I have two questions. The first is: is there a more compact way to code this operation than what I did? The second is: is there a more efficient method for this calculation?

Regarding the second question, I tried to use TensorContract and Dot, as suggested in some other posts, but I did not see any speedup. That could be due to a bad implementation on my part.
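For the first question, the whole double loop above can be collapsed into a single Table (a sketch; it should produce the same M, though I would not expect it to be significantly faster without further restructuring):

M = Table[Sum[KroneckerProduct[T[[k, i, l]], W[[k, l, j]]], {l, 1, 2}],
   {k, 1, n}, {i, 1, 2}, {j, 1, 2}];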

nt.number theory – A $\mathbb{C}$-linear map from $M(p-1, \mathbb{C})$ to $\mathbb{C}^{\hat{G}}$, where $p$ is an odd prime and $G = (\mathbb{Z}/(p))^\times$

Let $p$ be an odd prime and $G = (\mathbb{Z}/(p))^\times = \{1, 2, \ldots, p-1\}$, i.e. $G$ is a cyclic group of order $p-1$. Let $\hat{G} := \{\chi : G \to \mathbb{C}^\times : \chi \text{ is a group homomorphism}\}$. For every set $X$, let $\mathbb{C}^X$ denote the set of all functions from $X$ to $\mathbb{C}$, and note that this carries the usual $\mathbb{C}$-algebra structure: $(f+g)(x) := f(x) + g(x)$ for all $x \in X$; $(f \cdot g)(x) := f(x)g(x)$ for all $x \in X$; and $(k \cdot f)(x) := k f(x)$ for all $x \in X$.

Let $n = p-1$, let $\omega = e^{2\pi i/p}$, and define a function

$f : M(n, \mathbb{C}) \to \mathbb{C}^{\hat{G}}$ by $f(A)(\chi) = \begin{pmatrix} \chi(1) & \cdots & \chi(p-1) \end{pmatrix} A \begin{pmatrix} \omega \\ \omega^2 \\ \vdots \\ \omega^n \end{pmatrix}$ for all $A \in M(n, \mathbb{C})$ and all $\chi \in \hat{G}$.

It follows easily that $f$ is a $\mathbb{C}$-linear function.

Moreover, $f(A) = 0$ implies $A \begin{pmatrix} \omega \\ \omega^2 \\ \vdots \\ \omega^n \end{pmatrix} = 0$. Since the minimal polynomial of $\omega$ over $\mathbb{Q}$ has degree $p-1 = n$, we get that $A \in M(n, \mathbb{Q})$ together with $A \begin{pmatrix} \omega \\ \omega^2 \\ \vdots \\ \omega^n \end{pmatrix} = 0$ implies $A = O$; so for $A \in M(n, \mathbb{Q})$, $f(A) = 0$ implies $A = O$.

Now my questions are the following:

(1) For every $A, B \in M(n, \mathbb{Q})$, is there a $C \in M(n, \mathbb{Q})$ such that $f(A) \cdot f(B) = f(C)$? (Note that such a $C$, if it exists, must be unique.)

(2) How can one show that there are Hermitian matrices $A_1, \ldots, A_n$ of rank $1$ such that $f(I) = f(A_1) + \ldots + f(A_n)$ and $f(A_j) f(A_k) = 0$ for all $j \ne k$? (Maybe this has something to do with the orthogonality of characters?)
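Since both questions are concrete, they can at least be probed numerically. Here is a Mathematica sketch for $p = 5$ (chiRow and fMap are my own names; chiRow[t] lists $\chi_t(1), \ldots, \chi_t(p-1)$ for the character sending a fixed generator $g$ to $e^{2\pi i t/n}$):

p = 5; n = p - 1; w = Exp[2 Pi I/p];
g = 2;  (* a generator of (Z/5Z)^x *)
chiRow[t_] := Normal[SparseArray[
    Table[PowerMod[g, s, p] -> Exp[2 Pi I t s/n], {s, 0, n - 1}], n]];
fMap[A_][t_] := chiRow[t] . A . Table[w^j, {j, 1, n}];

(* probe question (1): tabulate f(A).f(B) on all characters for random integer A, B *)
A = RandomInteger[{-2, 2}, {n, n}]; B = RandomInteger[{-2, 2}, {n, n}];
Table[Chop[fMap[A][t] fMap[B][t]], {t, 0, n - 1}]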

Conceptual explanation for a strange linear algebra fact in characteristic $2$

All matrices and vectors in this post have entries in the field $\mathbb{F}_2$.

Fix some $n \geq 1$. For an $n \times n$ matrix $X$, write $X_0$ for the column vector whose entries are the diagonal entries of $X$. The following strange fact arose in a paper I am writing:

Fact: Let $X$ be a symmetric $n \times n$ matrix and let $A$ be an arbitrary $n \times n$ matrix. Then $(AXA^t)_0 = A(X_0)$. Here $A(X_0)$ means the column vector $X_0$ multiplied by $A$.

This is easy enough to prove by direct computation, but to me it basically comes out of nowhere. Does anyone know a conceptual reason for it, or maybe a bigger picture in which it sits?
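As stated above (with the symmetric matrix in the middle), the fact is easy to stress-test numerically. A minimal randomized Mathematica check (checkFact is my own name):

checkFact[n_] := Module[{A, X},
  (* random symmetric X: symmetrized random matrix plus a random diagonal *)
  X = Mod[(# + Transpose[#]) &[RandomInteger[1, {n, n}]] +
     DiagonalMatrix[RandomInteger[1, n]], 2];
  A = RandomInteger[1, {n, n}];  (* arbitrary A *)
  Diagonal[Mod[A . X . Transpose[A], 2]] === Mod[A . Diagonal[X], 2]];

And @@ Table[checkFact[6], {100}]  (* expected: True *)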

linear algebra – quadratic form and determinant

I have seen an assertion that I can neither prove nor disprove.

Consider a quadratic form $y := x^T A^{-1} x$ for a column vector $x$ and a symmetric, positive definite matrix $A$.

The claim is that $\det(A) \to \infty \Rightarrow y \to 0$.

I do not understand why this is true, nor why it would be obvious if it is true.

Any help would be appreciated. Many thanks!