probability – Skunk Redux

Another version of Skunk redux uses a spinner with three equally likely outcomes, 0, 1, and 2, each coming up with probability 1/3. If you are standing when a 1 or 2 comes up, you add it to your current score; if a 0 comes up, the game is over and your score is 0. If you leave the game before a 0 is thrown, your payoff is your score at that point.
(a) Find the optimal strategy.
(b) Calculate the average payoff obtained using the strategy in (a). (The game is simple enough that you can make a list of all possible trajectories and take the weighted average of their payoffs. It’s useful to lump together, without listing them, all trajectories that give you a zero payoff.)
(c) Calculate the average payoff for the strategy “sit after k spins if the game is not already over” for the values k = 2, 3, and 4, and compare with the answer to (b). You might do k = 2, 3 by listing all possible trajectories, but for k = 4 it’s worth trying to find a more general argument, particularly in anticipation of part (d) below (see the sketch after this list).
(d) Calculate the average payoff for the strategy “sit after k spins if the game is not already over” for a general value of k.
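For parts (b) and (c), a brute-force check is straightforward to code. The following is a minimal Python sketch (not part of the original exercise; the name expected_payoff is made up here) that enumerates all 3^k equally likely spin sequences for the fixed strategy “sit after k spins”, zeroes out every trajectory containing a 0, and averages the payoffs.

from itertools import product
from fractions import Fraction

def expected_payoff(k):
    # Average payoff of "sit after k spins if the game is not already over".
    # All 3^k spin sequences are equally likely; a sequence containing a 0
    # pays nothing, otherwise the payoff is the sum of the spins.
    total = Fraction(0)
    for spins in product((0, 1, 2), repeat=k):
        payoff = 0 if 0 in spins else sum(spins)
        total += Fraction(payoff, 3**k)
    return total

for k in range(1, 6):
    print(k, expected_payoff(k))  # e.g. k = 2 and k = 3 both give 4/3

The survival probability (2/3)^k, together with the average contribution of a surviving spin, is the kind of “more general argument” that part (c) hints at.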

linear algebra – Graph Transformation – reflection and right shift

If I have a function $f(x)$ and I then want to move it to the left by 2, I could represent this as $f(x + 2)$. Similarly, if I want to move the function to the right by 2, I could represent this as $f(x-2)$.

However, if I have $f(2-x) = f(-x+2)$, I would think that this is a reflection across the y-axis followed by a left shift of 2 (because of the +2). However, it seems that it is actually a reflection across the y-axis followed by a right shift of 2.

Why is this?

Thank you
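For reference, the standard way to resolve this is to factor the argument so the shift applies to $x$ itself:

\begin{equation}
f(2 - x) = f\bigl(-(x - 2)\bigr),
\end{equation}

which is the reflection $g(x) = f(-x)$ evaluated at $x - 2$, i.e. the reflected graph shifted right by 2. The “$+2$” gives a left shift only if you shift first: $f(x+2)$ reflected across the y-axis is $f(-x+2) = f(2-x)$ as well, so “shift left by 2, then reflect” and “reflect, then shift right by 2” produce the same graph.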

linear algebra – Multiplication of matrix-represented polynomials

Suppose we have a multivariate polynomial $f(\mathbf{x})$ represented by some matrix $A$, i.e. $f(\mathbf{x}) = b^T A b$, where $b = (1, x_1, x_2, \dots, x_{m-1}x_m^{n-1}, x_m^n)^T$ is a monomial basis.

Now consider the product $f(\mathbf{x}) \, g(\mathbf{x})$, where $g(\mathbf{x})$ is a known polynomial. The product also admits a matrix representation: suppose $f(\mathbf{x}) \, g(\mathbf{x}) = b^T B b$ (possibly after expanding the basis $b$).

How can the matrix $B$ be expressed in terms of the original matrix $A$ and the polynomial $g(\mathbf{x})$ using matrix operations (such as multiplication, inversion, or the Kronecker product)?
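To make the representation concrete, here is a small univariate NumPy sketch (illustration only; the helper name gram_eval is made up): $f(x) = 1 + 2x + 3x^2$ can be written as $b^T A b$ with $b = (1, x)^T$, e.g. with $A = \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$.

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 3.0]])

def gram_eval(M, x, deg):
    # Evaluate b^T M b with the monomial basis b = (1, x, ..., x^deg)^T.
    b = np.array([x**i for i in range(deg + 1)])
    return b @ M @ b

for x in (0.0, 1.0, 2.0):
    assert np.isclose(gram_eval(A, x, 1), 1 + 2*x + 3*x**2)

Note that the representation is not unique (different entries of $A$ can feed the same monomial), which is part of what makes expressing $B$ in terms of $A$ and $g$ nontrivial.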

linear algebra – Eigenvector help when Solving Many DOF System with Symmetry

I’m working on the simple harmonic oscillations of a benzene molecule. We’re meant to solve it using symmetry. I can solve it the longer way via the Euler–Lagrange equations and find the eigenvectors that way, but I want to learn this new method.

I’ve solved for the eigenvalues: $\lambda = \pm 1, e^{\frac{\pm i\pi}{2}}$. The issue that I’m having is solving for the eigenvectors of the latter $\lambda$s.

$\pm 1 \rightarrow \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}$

When solving
$\bar{\bar{S}}\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \lambda \begin{pmatrix} d \\ a \\ b \\ c \end{pmatrix}$,
I end up with $a = \lambda d$, $b = \lambda a$, $c = \lambda b$, $d = \lambda c$. I tried substituting and got $b = \lambda^2 d$, $c = \lambda^2 a$, and going from there arrived at $\begin{pmatrix} 1 \\ \lambda^2 \\ 1 \\ \lambda^2 \end{pmatrix}$, but this isn’t correct. I’m not the greatest when it comes to finding eigenvectors this way: I get confused when solving for variables in terms of other variables and then building a vector out of that. 2D is fine, but I can’t handle more than three dimensions well.
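For reference, chaining the relations quoted above in one direction gives the standard cyclic-shift eigenvector. From $b = \lambda a$, $c = \lambda b$, $d = \lambda c$, and $a = \lambda d$:

\begin{equation}
b = \lambda a, \qquad c = \lambda^2 a, \qquad d = \lambda^3 a, \qquad a = \lambda^4 a,
\end{equation}

so $\lambda^4 = 1$ (consistent with the four eigenvalues found), and taking $a = 1$ the eigenvector is $(1, \lambda, \lambda^2, \lambda^3)^T$. For $\lambda = e^{i\pi/2} = i$ this is $(1, i, -1, -i)^T$, and for $\lambda = \pm 1$ it reproduces the two real vectors above.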

linear algebra – Hermitian decomposition via Trace norm

I am doing some quantum information, and I face some problems related to linear algebra. The problem is somewhat long, so I will provide the reference and the detailed problem that I am dealing with.
First, let us fix the setting: all matrices discussed are finite-dimensional.

First we can define the trace norm (a Schatten norm) of a Hermitian matrix (see https://en.wikipedia.org/wiki/Matrix_norm):
\begin{equation}
\|\rho\| = Tr\sqrt{\rho^{\dagger}\rho}.
\end{equation}

Notice that if $A$ is positive semidefinite, then $\|A\| = Tr(A)$.
Now in this paper (https://arxiv.org/pdf/quant-ph/0102117.pdf, Lemma 2), the author claims that given a Hermitian matrix $A$, there exist Hermitian matrices $A^{+}$ and $A^{-}$, both positive semidefinite (all eigenvalues $\geq 0$), such that the following holds:
\begin{equation}
A = A^{+} - A^{-}. \tag{1}
\end{equation}

Among all decompositions of the form (1), consider the one for which $Tr(A^{+}) + Tr(A^{-})$ is minimal. For this decomposition we have
\begin{equation}
\|A\| = \|A^{+}\| + \|A^{-}\| = Tr(A^{+}) + Tr(A^{-}).
\end{equation}

Now the first question is: where can I find math literature discussing this? I have searched a lot, but I have not seen this fact proved in the math literature. The second question is: given a Hermitian matrix $A$, can I compute $A^{\pm}$ concretely? If not, can I get some inequalities on the entries of $A^{\pm}$?
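On the second question, one standard candidate is the spectral (Jordan) decomposition, which splits the eigenvalues of $A$ by sign; a minimal NumPy sketch (the function name jordan_decomposition is made up here):

import numpy as np

def jordan_decomposition(A):
    # Split Hermitian A into PSD parts A = A_plus - A_minus
    # using its spectral decomposition (eigenvalues split by sign).
    w, V = np.linalg.eigh(A)                      # A = V diag(w) V^dagger
    A_plus = (V * np.clip(w, 0, None)) @ V.conj().T
    A_minus = (V * np.clip(-w, 0, None)) @ V.conj().T
    return A_plus, A_minus

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                          # random Hermitian test matrix

Ap, Am = jordan_decomposition(A)
trace_norm = np.abs(np.linalg.eigvalsh(A)).sum()  # ||A|| for Hermitian A
assert np.allclose(A, Ap - Am)
assert np.isclose(trace_norm, np.trace(Ap).real + np.trace(Am).real)

For this particular decomposition $A^{+}$ and $A^{-}$ have orthogonal supports, and $Tr(A^{+}) + Tr(A^{-})$ equals $\|A\|$, matching the lemma.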

I also have another question related to this. Suppose we have a Hermitian matrix $\rho$. Can I get an upper bound (or lower bound) on $\|B \rho B^{\dagger}\|$ or $Tr(B \rho B^{\dagger})$ in terms of properties of $B$ and of $\rho$? For example, if I assume $\|B\| < 1$ or $Tr(B) \leq 1$, can I get some bound on $\|B \rho B^{\dagger}\|$? Note that $B$ is a general complex matrix and need not be Hermitian.

linear algebra – A matrix Riccati differential equation with constant coefficients: is there a closed-form solution?

The following is a matrix Riccati differential equation with constant coefficient matrices.

$$D\frac{\partial C(t)}{\partial t}S + \frac{1}{n}C(t)QDC(t)S - EC(t)Q = 0$$
or
$$D\dot{C}(t)S + \frac{1}{n}C(t)QDC(t)S - EC(t)Q = 0,$$
given the initial condition $C(0) = C_0$.

I stumbled upon this while working on another problem, and I don’t have any background in matrix differential equations, so I’d like to know whether there is any way to solve this equation. I have read that it can be reduced to an algebraic Riccati equation. Is there a closed-form expression for the solution? Or anything that comes closest to solving it?

Matrix dimensions:

  • $C(t)$: $(m+1) \times n$
  • $S$: $n \times 1$
  • $Q$: $n \times (m+1)$
  • $D$: $(m+1) \times (m+1)$, diagonal (and singular, since one diagonal entry is 0)
  • $E$: $1 \times (m+1)$

If it’s useful to know, $n \gg m$ and $m \ge 3$.
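This is not the poster’s exact equation, but for the standard continuous-time algebraic Riccati equation $A^T X + X A - X B R^{-1} B^T X + Q = 0$ that such reductions typically target, SciPy has a direct solver. A minimal sketch with made-up illustrative matrices (not the $C, Q, D, S, E$ above):

import numpy as np
from scipy.linalg import solve_continuous_are

# Standard CARE: A^T X + X A - X B R^{-1} B^T X + Q = 0
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
print(np.max(np.abs(residual)))  # should be ~ 0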

linear algebra – Which of the following sets are linearly independent?

Which of the following sets are linearly independent?

1. { $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ }

2. { $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ }

3. { $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ }

4. { $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$ }

5. { $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$ }

I chose the answers 2, 3, and 5, but I am not sure, as I might have missed some counterexamples showing why some of the vectors are linearly dependent. Are my answers correct? Thank you.
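One quick way to double-check each choice numerically (a sketch, not a substitute for the by-hand argument): a set is linearly independent exactly when the matrix whose columns are the vectors has rank equal to the number of vectors.

import numpy as np

sets = {
    1: [[1, 2], [1, 1]],
    2: [[1, 1], [0, 0]],
    3: [[1, 1]],
    4: [[1, 1, 0], [1, 2, 1]],
    5: [[1, 1, 0], [1, 2, 1], [1, 1, 1], [1, 2, 3]],
}

for label, vectors in sets.items():
    M = np.array(vectors).T  # one vector per column
    independent = np.linalg.matrix_rank(M) == len(vectors)
    print(label, independent)

Two shortcuts worth remembering: any set containing the zero vector is dependent, and more than three vectors in $\mathbb{R}^3$ are always dependent.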

linear algebra – Reducing generalized eigenvalue problem to regular eigenvalue problem

I expected the generalized eigenvalue problem on $(A, B)$ to reduce to an ordinary eigenvalue problem on $B^{-1} A$ when $A$ and $B$ are full rank (Section 7.1 of the generalized eigenvalue tutorial), but I get different results out of Mathematica; any idea why? Using Mathematica 12.1.0.0 on Mac.

A = DiagonalMatrix[{4, 10, 4, 10}];
B = DiagonalMatrix[{1, 2, 1, 2}];
Eigenvalues[{A, B}] (* 4, 4, 4, 4 *)
Eigenvalues[Inverse[B].A] (* 5, 5, 4, 4 *)
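As an independent cross-check (a Python/SciPy sketch, not part of the original question): solving $\det(A - \lambda B) = 0$ entry by entry for these diagonal matrices gives $\lambda \in \{4, 5, 4, 5\}$, and the two formulations should agree whenever $B$ is invertible.

import numpy as np
from scipy.linalg import eig

A = np.diag([4.0, 10.0, 4.0, 10.0])
B = np.diag([1.0, 2.0, 1.0, 2.0])

w_gen, _ = eig(A, B)                  # generalized eigenvalues of (A, B)
w_red, _ = eig(np.linalg.inv(B) @ A)  # ordinary eigenvalues of B^{-1} A

print(np.sort(w_gen.real))  # expect [4, 4, 5, 5]
print(np.sort(w_red.real))  # expect [4, 4, 5, 5]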

linear algebra – Problem with a rank of symmetric matrix
