Getting a sparse solution for a poorly conditioned linear system with either-or constraints

What is the best way to get a sparse solution for a linear system $\mathbf{A}\vec{x} = \vec{b}$ with $x_n \in \mathbb{R}$? The linear system is special because I know that:

  • some columns $\vec{c}_n$ of $\mathbf{A}$ are nearly parallel, e.g. $\vec{c}_1 \approx \vec{c}_2$ or $\vec{c}_1 + \vec{c}_2 \approx \vec{c}_3$; other combinations are also possible and could be identified in a preprocessing step
  • either-or constraints exist for some elements $x_n$, e.g. either $x_1$ or $x_2$ should be $0$
  • the linear system can be either under- or over-determined
  • $n \leq 500$
  • about 30-40% of the entries of $\mathbf{A}$ are $0$

The solutions known to me have certain disadvantages:

  • Lasso-type regularization (the L1 norm $\left\vert \vec{x} \right\vert_1$ in combination with the L2 norm $\left\vert \vec{x} \right\vert_2$) does not address the poor conditioning of the matrix or the either-or constraints.
  • Either-or constraints can be handled with mixed-integer linear programming, but that does not help with the poor conditioning of the matrix (see the sketch after this list).
  • I could transform $\mathbf{A}$ into an orthogonal matrix with PCA, but then enforcing a sparse solution seems impossible to me.
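
A minimal sketch (my own, not from the question) of one way to combine the two ingredients above: an L1-regularised least-squares fit for sparsity plus big-M indicator variables for the either-or constraint on $x_1, x_2$. It assumes cvxpy together with a mixed-integer-capable conic solver (e.g. SCIP, MOSEK, GUROBI) is installed; $A$, $b$, the bound `M` and the weight `lam` are placeholders.

    # Sketch only: sparse solution of A x = b with an either-or constraint on x_1, x_2.
    # Requires a mixed-integer conic solver to be installed for cvxpy to dispatch to.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    A = rng.standard_normal((15, 20))        # placeholder system (under-determined here)
    b = rng.standard_normal(15)

    x = cp.Variable(20)
    z = cp.Variable(2, boolean=True)         # indicators for the (x_1, x_2) pair
    M = 100.0                                # big-M bound on |x_i| (assumed known)
    lam = 0.1                                # sparsity weight (tuning parameter)

    constraints = [
        cp.abs(x[0]) <= M * z[0],
        cp.abs(x[1]) <= M * z[1],
        z[0] + z[1] <= 1,                    # at most one of x_1, x_2 is nonzero
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x)),
                      constraints)
    prob.solve()
    print(np.round(x.value, 3))

Note that this only handles sparsity and the either-or logic; the ill-conditioning caused by nearly parallel columns would still have to be addressed separately, e.g. by an additional ridge term or by merging near-parallel columns in the preprocessing step already mentioned.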

linear algebra – Flat extensions of group rings

Let $R$ be a commutative ring, $f: H \to G$ a surjective group homomorphism, and consider $RG$ as an $(RG, RH)$-bimodule via $g \cdot h := g \cdot f(h)$ as usual. Now assume that $RG$ is flat over $H$, meaning that
$$RG \otimes - : H\text{-}\mathbf{mod} \to G\text{-}\mathbf{mod}$$
is exact. What can we say about $f$?

If $RG$ were even projective over $H$, we would get a section $s: RG \to RH$ telling us that $\#\mathrm{ker}(f) < \infty$ is a unit in $R$, but I suppose that does not work if $RG$ is only assumed to be flat?

Linear Algebra – How do I change coordinates to make everything perpendicular?

We have a smooth function $f(x_1, x_2)$ of two variables. We know that:

1) $x_1 = y_1$ and $f(x) = f(y)$ imply that $\exists v \in \mathbb{R}^2$ such that $\forall a \in \mathbb{R}$, $f(x + av) = f(y + av)$;

2) $x_2 = z_2$ and $f(x) = f(z)$ imply that $\exists u \in \mathbb{R}^2$ such that $\forall a \in \mathbb{R}$, $f(x + au) = f(z + au)$.

We are asked to show whether there is a one-to-one change of coordinates $w = h(x)$

such that $g(w) = g(h(x)) = f(x)$ and, for all $w_1, w'_1, w_2, w'_2 \in \mathbb{R}$, we have $g(w_1, w_2) = g(w_1, w'_2)$ and $g(w_1, w_2) = g(w'_1, w_2)$?


At first I thought it was very simple: we just have to make $v$ and $u$ perpendicular to each other. Now, however, I think it is impossible, since changing the coordinates would also change the directions of $(x_1, x_2)$ and $(x_1, y_2)$, and the line passing through these two points would no longer be parallel to an axis after the change of coordinates. For example, if we directly use $(u, v)$ as basis vectors, the required conditions are not met.

Linear Algebra – Bernoulli random matrix: at least one column is zero

Suppose $M_n$ is an $n \times n$ matrix whose entries $M_n(i, j)$ are independent $\{0,1\}$-valued Bernoulli random variables, each with $$\mathbb{P}(M_n(i, j) = 1) = \dfrac{\ln n}{n}.$$

How do we show that as $n \to \infty$, the probability that at least one column is zero is bounded away from $0$ (i.e. remains positive)?

We can use the inclusion-exclusion principle to obtain a series, and using Mathematica the first term of the series satisfies $$n\left(1 - \dfrac{\ln n}{n}\right)^n \to 1.$$ We can probably use the alternating series test to draw the conclusion. However, it is difficult for me to compute the above limit (and the limits of the other terms in the series) by hand.
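
For what it is worth, the first limit can be checked by hand with the expansion $\ln(1 - t) = -t - \frac{t^2}{2} - O(t^3)$ applied to $t = \frac{\ln n}{n} \to 0$:

$$\ln\left( n\left(1 - \frac{\ln n}{n}\right)^{n} \right) = \ln n + n \ln\left(1 - \frac{\ln n}{n}\right) = \ln n - \ln n - \frac{(\ln n)^2}{2n} - O\!\left(\frac{(\ln n)^3}{n^2}\right) \longrightarrow 0,$$

so $n\left(1 - \frac{\ln n}{n}\right)^{n} \to e^0 = 1$. The same expansion handles the later inclusion-exclusion terms: for fixed $k$, $\binom{n}{k}\left(1 - \frac{\ln n}{n}\right)^{kn} \to \frac{1}{k!}$, which is consistent with the probability tending to $1 - e^{-1} > 0$.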

Any help is appreciated.

Assignment Problem – Explain this graph matching solved in linear time

I'm looking for a better explanation of the article Computing optimal assignments in linear time for approximate graph matching.

The graph edit distance is approximated by assignments in linear time.

In short, there is an embedding of the optimal assignment cost into a Manhattan metric: $\varphi_c(A) = \left(A_{uv}^{\leftarrow}\, w(uv)\right)_{uv \in E(T)}$. The Manhattan distance between these vectors corresponds to the optimal assignment cost between sets.

The problem is that it is not explained exactly how to determine $A_{uv}^{\leftarrow}$ and how Weisfeiler-Lehman is used to label the vertices of a tree, as in the following illustration:

[figure: the tree to be labelled]

Please explain what $A_{uv}^{\leftarrow}$ means and how I should label this tree.
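
For reference, a generic Weisfeiler-Lehman refinement step looks like the sketch below (this is the standard relabelling, which may differ in details from the paper's scheme; the adjacency list and the uniform initial labels are placeholders):

    # One round of standard Weisfeiler-Lehman label refinement on a tree.
    def wl_step(adj, labels):
        # signature of v = (own label, sorted multiset of neighbour labels)
        signatures = {
            v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            for v in adj
        }
        # compress each distinct signature to a small integer label
        compress = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        return {v: compress[signatures[v]] for v in adj}

    adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # placeholder tree
    labels = {v: 0 for v in adj}                    # uniform initial labels
    for _ in range(2):                              # two refinement rounds
        labels = wl_step(adj, labels)
    print(labels)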

Intersection of given linear ideals of $K[[X_1,\ldots,X_{np}]]$ for $\mathrm{ch}(K) = p > 0$

Suppose $\mathrm{ch}(K) = p > 0$ and consider the formal power series ring $K[[X_1, \ldots, X_{np}]]$ over $K$ in the $np$ variables $X_1, \ldots, X_{np}$. Let $\Lambda$ be the set defined as follows:
\begin{align*}
&\Lambda := \{\text{all sets } \{(i_1, \ldots, i_p), (i_{p+1}, \ldots, i_{2p}), \ldots, (i_{(n-1)p+1}, \ldots, i_{np})\}\},\\
&\text{where } \{1, 2, 3, 4, \ldots\} = \{i_1, i_2, i_3, i_4, \ldots\} \text{ such that } i_k \neq i_l \text{ for } k \neq l.
\end{align*}

Namely, $\Lambda$ is the set of partitions of $(1, \ldots, np)$ into $n$ '$p$-tuples'.

To $\lambda = \{(i_1, \ldots, i_p), (i_{p+1}, \ldots, i_{2p}), \ldots, (i_{(n-1)p+1}, \ldots, i_{np})\} \in \Lambda$ we associate the following ideal $I_{\lambda}$ of $A_{\infty}$:
\begin{equation*}
I_{\lambda} := (X_{i_1} + \ldots + X_{i_p},\ X_{i_{p+1}} + \ldots + X_{i_{2p}},\ \ldots,\ X_{i_{(n-1)p+1}} + \ldots + X_{i_{np}}).
\end{equation*}

We define the ideal $S_n$ of the ring $K[[X_1, \ldots, X_{np}]]$ by the following:
\begin{equation*}
S_n := \bigcap_{\lambda \in \Lambda} I_{\lambda}.
\end{equation*}

Next, we write the generators of $S_n$ as follows:
\begin{equation*}
S_n = (\theta, s_2, \ldots, s_{m(n)}),
\end{equation*}

where $\theta := X_1 + \ldots + X_{np}$.

Conjecture. The degrees $\mathrm{deg}(s_2), \ldots, \mathrm{deg}(s_{m(n)})$ diverge as $n \to \infty$.

Linear Algebra – How do I find a set of matrices from a subset of vectors that solve a given eigenvalue problem?

I have a finite universe of $m$ vectors $V$, each of length $k$. For example:

$$V_{0} = [V_{0,0}, V_{0,1}, \ldots, V_{0,k}], \\
V_{1} = [V_{1,0}, V_{1,1}, \ldots, V_{1,k}], \\
\vdots \\
V_{m} = [V_{m,0}, V_{m,1}, \ldots, V_{m,k}].$$

A subset of these vectors can be used to build a $k \times j$ matrix, $A_{k,j}$ (please excuse my notation). Here is an example matrix:

$$
\boldsymbol{A} = \begin{bmatrix}
V_{1,0} & \ldots & V_{1,k} \\
V_{3,0} & \ldots & V_{3,k} \\
V_{4,0} & \ldots & V_{4,k}
\end{bmatrix}.$$

If I have a target vector $b$ such that

$$b = \begin{pmatrix}
b_{0} \\
\vdots \\
b_{k}
\end{pmatrix},$$

how do I find, from the universe of possible matrices $A$, those for which there is a solution to the eigenvalue problem

$$\boldsymbol{A}x = b?$$
Is there an elegant solution to this kind of eigenvalue problem? If not, is there a method to find the smallest value of $k$ that would solve the problem?
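
In case a concrete formulation helps, here is a brute-force sketch (my reading of the setup, treating the chosen vectors as columns of $A$ so that $A$ is $k \times j$; only feasible for a small universe $m$) that tests which subsets admit an exact solution of $Ax = b$:

    # Try every size-j subset of the vector universe as columns of A and keep
    # those for which A x = b has an exact solution (checked via least squares).
    import itertools
    import numpy as np

    def subsets_solving(V, b, j, tol=1e-9):
        """V: (m, k) array of candidate vectors, b: (k,) target, j: subset size."""
        hits = []
        for cols in itertools.combinations(range(V.shape[0]), j):
            A = V[list(cols)].T                    # k x j, chosen vectors as columns
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            if np.linalg.norm(A @ x - b) < tol:    # exact up to tolerance
                hits.append((cols, x))
        return hits

    rng = np.random.default_rng(0)
    V = rng.standard_normal((6, 4))                # toy universe: m = 6, k = 4
    b = 2.0 * V[1] - 0.5 * V[4]                    # a target that is reachable
    print(subsets_solving(V, b, j=2))

If the question is about the smallest number of vectors needed, one could loop over $j = 1, 2, \ldots$ and stop at the first size that returns a hit.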

linear algebra – the inverse of the sum of an identity and a Kronecker product after adding a column or removing a row

Let $Q = \alpha \mathbb{I} + (S \otimes S)^T (S \otimes S) = \alpha \mathbb{I} + S^T S \otimes S^T S$, where $\mathbb{I}$ is the $n^2 \times n^2$ identity matrix, $S$ is an $m \times n$ binary matrix, and $\otimes$ is the Kronecker product. I have some questions:

First, since we can write $S^T S = \sum_{i=1}^m s_i^T s_i$, where $s_i$ denotes the $i$-th row of $S$, can we use rank-one update properties, and what would the update of $Q^{-1}$ be if we remove the $i$-th row of $S$? Can we extend the Sherman-Morrison formula to this case? For example, if we have $M = (\alpha \mathbb{I} + S^T S)^{-1}$, then the updated inverse after removing the $i$-th row would be
$$M_{-i} = M - \frac{M s_i^T s_i M}{s_i M s_i^T - 1}$$
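
As a quick numerical sanity check of that downdate formula (my own sketch, for the simpler $M = (\alpha\mathbb{I} + S^T S)^{-1}$ rather than the full Kronecker-product matrix $Q$):

    # Verify: (alpha*I + S^T S - s_i^T s_i)^{-1} = M - M s_i^T s_i M / (s_i M s_i^T - 1)
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, alpha, i = 8, 5, 0.5, 3
    S = rng.integers(0, 2, size=(m, n)).astype(float)    # random binary matrix
    M = np.linalg.inv(alpha * np.eye(n) + S.T @ S)

    s = S[i:i + 1]                                       # the i-th row, shape (1, n)
    M_down = M - (M @ s.T @ s @ M) / (s @ M @ s.T - 1.0) # Sherman-Morrison downdate

    S_removed = np.delete(S, i, axis=0)
    M_direct = np.linalg.inv(alpha * np.eye(n) + S_removed.T @ S_removed.T.T)
    print(np.allclose(M_down, M_direct))                 # expected: True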

Second, what about adding a column to the matrix $S$? How would the inverse of $Q$ change? Can we again use similar properties of the partitioned matrix?

Linear Algebra – Isotropic, symmetric tensor functions map antisymmetric tensors to zero

I am going through Chorin and Marsden's derivation of the Navier-Stokes equations in A Mathematical Introduction to Fluid Mechanics. There are three assumptions about the Cauchy stress tensor $\pmb\sigma$ (which I paraphrase):

1) $\pmb\sigma$ depends only on the velocity gradient $\nabla \mathbf{u}$, so it can be written as a linear transformation $\pmb\sigma = \pmb\sigma(\nabla \mathbf{u})$.

2) $\pmb\sigma$ is an isotropic tensor function (rotationally invariant), so that $$\pmb\sigma(U \nabla\mathbf{u}\, U^T) = U \pmb\sigma(\nabla\mathbf{u})\, U^T$$ for any orthogonal matrix $U$.

3) $\pmb\sigma$ is symmetric.

Having stated these assumptions, they conclude that 3) and 2) show that $\pmb\sigma$ can only depend on the symmetric part of $\nabla\mathbf{u}$. In other words, writing $$\nabla\mathbf{u} = D + W$$ where $D = \frac{1}{2}(\nabla\mathbf{u} + \nabla\mathbf{u}^T)$ (the symmetric part of $\nabla\mathbf{u}$) and $W = \frac{1}{2}(\nabla\mathbf{u} - \nabla\mathbf{u}^T)$ (the antisymmetric part of $\nabla\mathbf{u}$), we have $\pmb\sigma(W) = \mathbf{0}$, i.e. $\pmb\sigma$ maps antisymmetric second-order tensors to the zero tensor.

I'm not sure how the above argument works, and I would appreciate some help in seeing it.

Note: I have seen arguments that use the fact that, in component form, $\pmb\sigma$ must satisfy

$$\sigma_{ij} = -p\,\delta_{ij} + C_{ijrs}\, u_{r,s}$$

where $p$ is the pressure, and it turns out that the fourth-order tensor $\mathbf{C}$ must take a special form (since it is an isotropic tensor, symmetric in the first two indices and in the last two indices). From this special form one can derive that $\pmb\sigma$ maps any antisymmetric second-order tensor to the zero tensor.

I am interested in seeing an argument that does not rely on componentwise analysis and does not immediately jump to the specific representation of isotropic second-order tensor functions in terms of their principal invariants.

linear algebra – What does the notation $P_{ij}([i,j],[i,j])$ mean for matrices?

I am looking at my notes from class. We are given a matrix $A$, and in connection with Givens rotations I see the notation
$$P_{ij}((i,j),(i,j)) = \begin{bmatrix} c & s \\ -s & c \end{bmatrix}.$$

So I suppose it means "the $2 \times 2$ matrix whose entry $(1,1)$ sits at $a_{ii}$ of $A$, whose entry $(1,2)$ sits at $a_{ij}$ of $A$, etc."

Is that really what the notation means?

Thank you very much.
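
For what it is worth, here is a small numpy illustration of that reading (my interpretation, not something from the notes): $P_{ij}((i,j),(i,j))$ as the $2 \times 2$ block of the Givens rotation $P_{ij}$ picked out by rows $i, j$ and columns $i, j$.

    # Build an n x n Givens rotation P_ij and extract its rows/columns (i, j) block.
    import numpy as np

    def givens(n, i, j, theta):
        c, s = np.cos(theta), np.sin(theta)
        P = np.eye(n)
        P[i, i], P[i, j] = c, s
        P[j, i], P[j, j] = -s, c
        return P

    P = givens(5, 1, 3, 0.3)
    block = P[np.ix_([1, 3], [1, 3])]   # the [[c, s], [-s, c]] submatrix
    print(block)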