Matrix – How do I perform LU decomposition without pivoting?

You can do it like this:

A = RandomReal[{-1, 1}, {4, 4}];
{B, p, c} = LUDecomposition[A];
L = (LowerTriangularize[B, -1] + IdentityMatrix[Length[B]])[[
    InversePermutation[p]]];
U = UpperTriangularize[B];
Max[Abs[L.U - A]]

8.88178*10^-16

This performs the LU decomposition with pivoting and rearranges the matrix L accordingly. As a result, L is no longer necessarily lower triangular. This may not be what you are looking for …
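For readers outside Mathematica, the same permuted-L reconstruction can be sketched with NumPy/SciPy (my own illustration, not part of the original answer):

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, (4, 4))

# scipy.linalg.lu returns P, L, U with A = P @ L @ U; folding the
# permutation into L mirrors the trick above, so Lp is in general
# no longer lower triangular.
P, L, U = lu(A)
Lp = P @ L
print(np.max(np.abs(Lp @ U - A)))  # on the order of 1e-16
```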

Numerics – Efficient creation of an interpolation matrix

I would like to know if there is a quick way to build the following matrix (note that the matrix is defined with arbitrary precision):

$MinPrecision = 100;
t1 = AbsoluteTime[];
If[ny == nyt,
  Miy = IdentityMatrix[ny + 1]
  ,
  ct[j_] := If[j == 0 || j == nyt, 2, 1];
  Cm = Table[
    N[2/(nyt ct[j] ct[i]) Cos[(i j Pi)/nyt], $MinPrecision], {i, 0,
      nyt}, {j, 0, nyt}];
  Cm = If[ny - nyt == 0, Cm,
    Flatten[Join[{Cm, ConstantArray[0, {ny - nyt, nyt + 1}]}], 1]];
  CIm = Table[
    N[Cos[(i j Pi)/ny], $MinPrecision], {i, 0, ny}, {j, 0, ny}];
  Miy = CIm.Cm;
  Clear[ct, Cm, CIm];
  Print[" Delta t = ", AbsoluteTime[] - t1];
  Clear[t1]
  ];

This matrix is required to interpolate a function f(x) (via matrix multiplication), defined on a Chebyshev grid with nyt points, onto a Chebyshev grid with ny points, with nyt > ny.
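A double-precision NumPy sketch of the same construction (my own translation; it necessarily drops the 100-digit arithmetic, which is the point of the original, but it makes the structure easy to check; like the posted code, it zero-pads the coefficient matrix when ny > nyt):

```python
import numpy as np

def cheb_interp_matrix(nyt, ny):
    """Interpolation matrix from a Chebyshev grid with nyt+1 points
    to one with ny+1 points, mirroring the Mathematica code above."""
    k = np.arange(nyt + 1)
    ct = np.where((k == 0) | (k == nyt), 2, 1)   # endpoint weights
    # analysis step: grid values -> Chebyshev coefficients
    Cm = 2.0 / (nyt * np.outer(ct, ct)) * np.cos(np.outer(k, k) * np.pi / nyt)
    if ny > nyt:                                  # zero-pad the coefficients
        Cm = np.vstack([Cm, np.zeros((ny - nyt, nyt + 1))])
    m = np.arange(ny + 1)
    # synthesis step: coefficients -> values on the target grid
    CIm = np.cos(np.outer(m, m) * np.pi / ny)
    return CIm @ Cm
```

For ny == nyt the product reduces to (numerically) the identity, and interpolating a polynomial of degree at most nyt between grids is exact, which is a convenient correctness check.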

Matrix – convert the Do expression to ParallelDo

I am trying to use ParallelDo to speed up my code, but one problem confuses me. In the code below I use Do and get the expected result: it builds the matrix CC that I want:

CC = Range[Nmax];
Do[
  A =
    I/3.5 (n - m)!/(n + m)! SphericalHankelH2[n, 0.01] SphericalBesselJ[n, 0.016]*
      LegendreP[n, m, Cos[π/4]];
  B =
    1/(I 0.036) (n - m)!/(n + m)! SphericalHankelH2[n, 0.01]*
      (D[SphericalBesselJ[n, x], x] /. x -> 0.05) LegendreP[n, m, Cos[π/4]];
  Kn[t_, p_] =
   Sum[
     (A Sin[t] D[LegendreP[n, m, Cos[t]], {t, 2}] +
       (A + B) Cos[t] D[LegendreP[n, m, Cos[t]], t] +
       (A - B) Sin[t] LegendreP[n, m, Cos[t]])*
     Cos[m (p - π/2)],
   {m, -n, n}];
  CC[[n]] =
    Integrate[
      Kn[t, p] Conjugate[SphericalHarmonicY[1, 0, t, p]] Sin[t],
      {t, 0, π}, {p, 0, 2 π}],
  {n, 1, Nmax}];

CC // MatrixForm

But if I replace Do with ParallelDo, the matrix CC does not get filled.
What should I do to make ParallelDo give the right result, the same as when using Do?
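The underlying issue, illustrated here in Python rather than Mathematica (my own sketch): parallel workers operate on their own copies of state, so in-place assignments made inside a worker are lost. The robust pattern is to return one value per index and collect them in the parent, the analogue of using ParallelTable instead of ParallelDo (or of calling SetSharedVariable on CC first).

```python
from multiprocessing import Pool

def compute_entry(n):
    # stand-in for the per-n integral in the question
    return n * n

def fill_parallel(Nmax):
    # Wrong pattern: having each worker assign CC[n] itself would only
    # modify that worker's private copy -- the ParallelDo symptom above.
    # Right pattern: map n -> value and collect in the parent process,
    # the analogue of CC = ParallelTable[..., {n, 1, Nmax}].
    with Pool(2) as pool:
        return pool.map(compute_entry, range(1, Nmax + 1))

if __name__ == "__main__":
    print(fill_parallel(8))  # [1, 4, 9, 16, 25, 36, 49, 64]
```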

linear algebra – matrix PSD inequality under addition

Given four matrices $A, \widetilde{A}, B, \widetilde{B} \in \mathbb{R}^{n \times d}$, if
$A^{\top} A \approx_{\epsilon} \widetilde{A}^{\top} \widetilde{A}$ and $B^{\top} B \approx_{\epsilon} \widetilde{B}^{\top} \widetilde{B}$, do we have
\begin{align*}
(A + B)^{\top} (A + B) \approx_{10\epsilon} (\widetilde{A} + \widetilde{B})^{\top} (\widetilde{A} + \widetilde{B})?
\end{align*}

For square matrices $C, \widetilde{C}$, we say $C \approx_{\epsilon} \widetilde{C}$ if
\begin{align*}
(1-\epsilon)\, \widetilde{C} \preceq C \preceq (1+\epsilon)\, \widetilde{C}.
\end{align*}
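Not an answer, but a quick numerical sanity check of the $1 \times 1$ case (the example is my own, not from the question): with $A = [1]$, $B = [-1]$, $\widetilde{A} = \widetilde{B} = [1]$ both hypotheses hold exactly (even with $\epsilon = 0$), yet the left side is $0$ while the right side is $4$, so the lower inequality fails. Some extra assumption seems necessary.

```python
import numpy as np

# 1x1 instances: A = [1], B = [-1], At = Bt = [1]
A, B = np.array([[1.0]]), np.array([[-1.0]])
At, Bt = np.array([[1.0]]), np.array([[1.0]])

eps = 0.0  # both hypotheses hold with equality
assert np.allclose(A.T @ A, At.T @ At)
assert np.allclose(B.T @ B, Bt.T @ Bt)

lhs = (A + B).T @ (A + B)        # [[0.]]
rhs = (At + Bt).T @ (At + Bt)    # [[4.]]
# the required (1 - 10*eps) * rhs <= lhs would need 4 <= 0
print((1 - 10 * eps) * rhs <= lhs)  # [[False]]
```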

linear algebra – upper bound on the condition number of the product of a random sparse matrix and a semi-orthogonal matrix

Let $G \in \mathbb{R}^{n \times m}$ ($m > n$, $m = O(n)$) have all entries i.i.d. distributed as $\mathcal{N}(0, 1) \cdot \text{Ber}(p)$. Let $V \in \mathbb{R}^{m \times n}$ be a fixed semi-orthogonal matrix, i.e. the columns of $V$ are orthonormal vectors. Define $A = GV$. For what $p$ can we give a polynomial bound on the condition number of $A$, i.e. $\kappa(A) \leq \text{poly}(n)$?

Interesting cases / related problems:

  1. Let $V$ be defined by $V_{i,j} = 1$ if $i = j$ and $V_{i,j} = 0$ otherwise. Let $G = (g_1, g_2, \ldots, g_m)$; in this case $A = GV = (g_1, g_2, \ldots, g_n)$. Hence $A$ has the same distribution as $G$ except that $m = n$. This has been investigated by Basak and Rudelson, who proved that $\kappa(A) \leq \text{poly}(n)$ for $p = \Omega(\log n)/n$.

  2. For $p = 1$, $G$ is just a random Gaussian matrix, and $A = GV$ can also be regarded as a random Gaussian matrix since Gaussian vectors are isotropic. This is just a special case of 1.
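Not an answer, but a small experiment for building intuition (the setup choices n = 50, m = 100, p = 0.5, and V from a QR factorization are my own):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 50, 100, 0.5

# entries distributed as N(0,1) * Ber(p): a Bernoulli-masked Gaussian
G = rng.standard_normal((n, m)) * (rng.random((n, m)) < p)

# a fixed semi-orthogonal V: orthonormal columns from a QR factorization
V, _ = np.linalg.qr(rng.standard_normal((m, n)))  # m x n, V.T @ V = I_n

A = G @ V
print(np.linalg.cond(A))  # empirical condition number for one draw
```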

linear algebra – left / right inverse matrix question

For what values of $a, b, c$ does a left and/or right inverse of $A = \begin{bmatrix}
1 & a \\
2 & b \\
3 & c
\end{bmatrix}$
exist?

We know that a left inverse is a matrix $X$ such that $XA = I_2$, where $I_2$ is the $2 \times 2$ identity matrix, so $X$ is a $2 \times 3$ matrix. What do we do next? Thanks a lot.
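As a concrete check (my own illustration): a left inverse exists exactly when the two columns of $A$ are linearly independent, i.e. $(a, b, c)$ is not a scalar multiple of $(1, 2, 3)$, and the Moore–Penrose pseudoinverse then supplies one; no right inverse can exist since $\operatorname{rank}(A) \leq 2 < 3$.

```python
import numpy as np

a, b, c = 1.0, 0.0, 0.0          # (a, b, c) not proportional to (1, 2, 3)
A = np.array([[1, a], [2, b], [3, c]])

X = np.linalg.pinv(A)            # a 2x3 left inverse when rank(A) = 2
print(np.allclose(X @ A, np.eye(2)))  # True

# A Y = I_3 is impossible: the columns of A span at most a 2-dim subspace.
```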

linear algebra – find the square root of a 7×7 matrix with real entries

Suppose I have the following symbolic 7×7 matrix

   mat = {{(a^2 b^2)/c^2, 0, 0, 0, 0, 0, (a^2 b^2 e11)/c^2},
      {0, (a^2 b^2)/c^2, 0, 0, 0, 0, (a^2 b^2 e22)/c^2},
      {0, 0, (a^2 b^2)/c^2, 0, 0, 0, (a^2 b^2 e33)/c^2},
      {0, 0, 0, (2 a^2 b^2)/c^2, 0, 0, (2 a^2 b^2 e12)/c^2},
      {0, 0, 0, 0, (2 a^2 b^2)/c^2, 0, (2 a^2 b^2 e13)/c^2},
      {0, 0, 0, 0, 0, (2 a^2 b^2)/c^2, (2 a^2 b^2 e23)/c^2},
      {(a^2 b^2 e11)/c^2, (a^2 b^2 e22)/c^2, (a^2 b^2 e33)/c^2,
       (2 a^2 b^2 e12)/c^2, (2 a^2 b^2 e13)/c^2, (2 a^2 b^2 e23)/c^2,
       b^2 (1 + (a^2 (e11^2 + 2 e12^2 + 2 e13^2 + e22^2 + 2 e23^2 + e33^2))/c^2)}}

for which I have the following information

a > 0, b > 0, c > 0, and {e11, e22, e33, e12, e13, e23} are real.

How is it possible to symbolically get the square root of this matrix?

What I have tried so far is to diagonalize the matrix myself, but it does not give me a closed-form solution even though I declare the assumptions above, i.e.

 eigen = Assuming[
   Element[{e11, e22, e33, e12, e13, e23}, Reals] && a > 0 && b > 0 && c > 0,
   Eigenvalues[mat]];

Are there other ways I can find the square root of this symbolic matrix?
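A numerical cross-check (my own sketch, with arbitrary sample values substituted for the symbols) using SciPy's sqrtm; it does not give the closed form, but it confirms that a real square root exists for such parameter values:

```python
import numpy as np
from scipy.linalg import sqrtm

# sample values for the symbols (my own choice, just for a numeric check)
a = b = c = 1.0
e11 = e22 = e33 = e12 = e13 = e23 = 0.1

f = a**2 * b**2 / c**2
mat = np.array([
    [f, 0, 0, 0, 0, 0, f*e11],
    [0, f, 0, 0, 0, 0, f*e22],
    [0, 0, f, 0, 0, 0, f*e33],
    [0, 0, 0, 2*f, 0, 0, 2*f*e12],
    [0, 0, 0, 0, 2*f, 0, 2*f*e13],
    [0, 0, 0, 0, 0, 2*f, 2*f*e23],
    [f*e11, f*e22, f*e33, 2*f*e12, 2*f*e13, 2*f*e23,
     b**2 * (1 + a**2*(e11**2 + 2*e12**2 + 2*e13**2
                       + e22**2 + 2*e23**2 + e33**2)/c**2)],
])

root = sqrtm(mat)
# root @ root reproduces mat up to rounding error
print(np.max(np.abs(root @ root - mat)))
```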

How do I read a 2D matrix in C++ when there is no spacing between neighboring elements in the input?

Thanks for taking a look. I am trying to read an n×n matrix as input; the input has the following format.
Example:

4
1123
3442
5632
2444

As you can see, the elements are not separated, which is my problem: C++ reads each line as if it were a single number, so cin reads only n values, whereas I want to read all n×n elements separately. Forgive me if the question does not meet the requirements, as this is my first question.
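The usual fix in C++ is to read each row into a std::string and convert each character with ch - '0' instead of streaming into ints. The same parsing idea, sketched in Python (my own illustration):

```python
def read_matrix(lines):
    """Parse rows of contiguous digits, e.g. "1123" -> [1, 1, 2, 3].
    The C++ equivalent: read a std::string per row, use ch - '0' per char."""
    n = int(lines[0])
    return [[int(ch) for ch in lines[i + 1].strip()] for i in range(n)]

data = ["4", "1123", "3442", "5632", "2444"]
print(read_matrix(data))
# [[1, 1, 2, 3], [3, 4, 4, 2], [5, 6, 3, 2], [2, 4, 4, 4]]
```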

Plotting – Is it possible to use "ContourPlot" for a certain eigenvalue of a large matrix Mn (x, y)?

I have a large matrix of size En×En, depending on two variables x and y, so it has En eigenvalues. I want to use ContourPlot to plot a certain eigenvalue, say the (En/2)+1-th, as a function of x and y. For small En this is possible, but it does not work for large En. For example, En=8 gives the desired result, but for larger En it does not.

M1 = {{0, I Sin[x] + Sin[y], 3 - Cos[x] - Cos[y], -1},
  {-I Sin[x] + Sin[y], 0, -1, 3 - Cos[x] - Cos[y]},
  {3 - Cos[x] - Cos[y], -1, 0, -I Sin[x] - Sin[y]},
  {-1, 3 - Cos[x] - Cos[y], I Sin[x] - Sin[y], 0}};

tc = {{0, 0, 0, 0}, {0, 0, 0, 0}, {-1, 0, 0, 0}, {0, -1, 0, 0}};

Mn[n_] :=
 SparseArray[{Band[{1, 1}, {4 n, 4 n}] -> {M1},
   Band[{1, 5}, {4 n, 4 n}] -> {tc},
   Band[{5, 1}, {4 n, 4 n}] -> {ConjugateTranspose[tc]}}]

w = 2; En = 4 w; (* w must be > 1 *)
ContourPlot[
 Evaluate[Eigenvalues[N@Mn[w]][[En/2 + 1]] == 0.4], {x, -2, 2},
 {y, -2, 2}, ContourShading -> None, ContourStyle -> Blue]

Below is the result with w=2 (i.e. En=8). How can I do the same for w=40 (En=160)?
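Since the expensive part is evaluating one eigenvalue on a grid, a workaround (my own translation of the construction to Python/NumPy, not the poster's code) is to build the band matrix numerically at each grid point, extract the desired eigenvalue, and contour the resulting array:

```python
import numpy as np

def Mn(x, y, n):
    """4n x 4n Hermitian block-tridiagonal matrix from the question."""
    s = 3 - np.cos(x) - np.cos(y)
    M1 = np.array([
        [0, 1j*np.sin(x) + np.sin(y), s, -1],
        [-1j*np.sin(x) + np.sin(y), 0, -1, s],
        [s, -1, 0, -1j*np.sin(x) - np.sin(y)],
        [-1, s, 1j*np.sin(x) - np.sin(y), 0],
    ])
    tc = np.zeros((4, 4)); tc[2, 0] = tc[3, 1] = -1
    M = np.zeros((4*n, 4*n), dtype=complex)
    for b in range(n):
        M[4*b:4*b+4, 4*b:4*b+4] = M1
        if b + 1 < n:
            M[4*b:4*b+4, 4*b+4:4*b+8] = tc            # superdiagonal band
            M[4*b+4:4*b+8, 4*b:4*b+4] = tc.conj().T   # subdiagonal band
    return M

def eig_field(w, k, xs, ys):
    """k-th smallest eigenvalue of Mn(x, y, w) on a grid.
    Note: Mathematica's numeric Eigenvalues orders by decreasing absolute
    value, while eigvalsh sorts ascending, so adjust k to match the plot."""
    return np.array([[np.linalg.eigvalsh(Mn(x, y, w))[k] for x in xs]
                     for y in ys])

xs = ys = np.linspace(-2, 2, 21)
w = 2; En = 4 * w
Z = eig_field(w, En // 2, xs, ys)   # 0-based index En/2 = (En/2 + 1)-th
# with matplotlib: plt.contour(xs, ys, Z, levels=[0.4])
```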

Statistics – Number of classes of the $D$-th power of the transition matrix of an irreducible Markov chain with period $D$

Suppose a Markov chain with transition matrix $P$ is irreducible and positive recurrent with period $D$ (greater than $1$). Consider a new Markov chain with transition matrix $P^D$. Must it fail to be irreducible?

I think there will be exactly $D$ communicating classes, so that it is not irreducible. We can split our original Markov chain into $D$ sets $T_i$ for $i = 0, 1, 2, \ldots, D - 1$, so that every state in $T_i$ must go to $T_{i+1}$, with $T_D = T_0$. For our new Markov chain these are our classes (I am not sure how to prove it). But at least $T_0$ and $T_1$ are not in the same class, because states in $T_0$ can only reach states in $T_0$ in a multiple of $D$ steps. Is that correct? And why do we not need the positive recurrence assumption?

I have also forgotten the construction needed for this cyclic decomposition, so it would be nice if someone could link to a proof.
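A tiny numerical illustration of the expected answer (the example chain is my own): an irreducible 4-state chain with period $D = 2$, whose square decomposes into exactly $2$ communicating classes, the two halves of the cyclic decomposition.

```python
import numpy as np

# irreducible chain with period D = 2: the halves {0,1} and {2,3} alternate
P = np.array([[0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])

P2 = P @ P
print(P2)
# P^2 is block diagonal: {0,1} and {2,3} are the D = 2 communicating
# classes T_0 and T_1, so the chain with matrix P^2 is not irreducible.
```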