Rearranging linear system containing a symmetric matrix

I’m trying to rearrange the following equation to get only Q on the LHS:

$$
A = -Q + t(QM + M^T Q - kQ)
$$

$Q$ is a symmetric matrix. $A$ is a known matrix, and $t$, $k$ are known constants.

This seemed straightforward to me at first, since every term shares a common $Q$, but the non-commutativity of the terms is tripping me up. I could insert identity matrices and vectorize both sides, but I’d rather keep this in matrix form if possible.
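
For what it’s worth, collecting terms gives $tM^T Q + Q(tM) - (1+tk)Q = A$, so with $B = tM - \frac{1+tk}{2}I$ the equation becomes the Lyapunov-type Sylvester equation $B^T Q + QB = A$, which standard solvers handle without explicit vectorization. A minimal sketch (the matrices and constants below are placeholders, not data from the original problem):

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((n, n))
    t, k = 0.7, 1.3
    S = rng.standard_normal((n, n))
    A = S + S.T                      # A should be symmetric for Q to come out symmetric

    # A = -Q + t(QM + M^T Q - kQ)  rewritten as  B^T Q + Q B = A
    B = t * M - 0.5 * (1 + t * k) * np.eye(n)
    Q = solve_sylvester(B.T, B, A)   # solves B^T Q + Q B = A

    print(np.allclose(-Q + t * (Q @ M + M.T @ Q - k * Q), A))  # True
    print(np.allclose(Q, Q.T))       # True: transposing the equation shows Q^T is
                                     # also a solution, so Q = Q^T when unique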

teaching – What do we learn from the Wronskian in the theory of linear ODEs?

For a real interval $I$ and a continuous function $A: I \to \mathbb{R}^{d\times d}$, let $(x_1, \dots, x_d)$ denote a basis of the solution space of the non-autonomous ODE
$$
\dot x(t) = A(t) x(t) \quad \text{for} \quad t \in I.
$$

The mapping
$$
\varphi: I \ni t \mapsto \det(x_1(t), \dots, x_d(t)) \in \mathbb{R}
$$

is usually called the Wronskian of the basis $(x_1,\dots,x_d)$, and it seems to be an obligatory topic in every ODE course or book that I’ve seen.

So in an ODE course that I am currently teaching, I’m facing the following problem:

(1) Despite its prevalence in courses and textbooks, I’ve rarely (not to say never) encountered any situation where the Wronskian of an ODE is used in a way that yields non-trivial insight into the problem at hand – in particular not in any of the books where I’ve read about it. (Of course, I have also searched the internet for it, but without any success.)

(2) I feel quite uneasy teaching a concept which I am unable to motivate properly.

(3) I’d feel even more uneasy just omitting it from the course, since chances are that my not knowing an application of the Wronskian is merely due to my ignorance.

Well, what I did was merely mention the Wronskian in a remark – but of course (and fortunately) I did not get away with it, because quite soon a student asked what the Wronskian is good for.

So this is the

Question: What is the Wronskian (in the context of linear ODEs) good for?

Remarks.

  • One can show that $\varphi$ satisfies the differential equation
    $$
    \dot \varphi(t) = \operatorname{tr}(A(t)) \, \varphi(t),
    $$

    and since this is a one-dimensional equation we have the solution formula
    $$
    (*) \qquad \varphi(t) = e^{\int_{t_0}^t \operatorname{tr}(A(s)) \, ds} \, \varphi(t_0)
    $$

    for it (for any fixed time $t_0$ and all $t \in I$); a numerical illustration is given after these remarks. This is nice – but still I can’t see how to explain to my students that it is useful.

  • I’ve often seen discussions to the effect that $(*)$ implies that “the Wronskian is non-zero at a time $t_0$ if and only if it is non-zero at every time $t$” – but I find this somewhat straw man-ish: the fact that $(x_1(t), \dots, x_d(t))$ is linearly independent at one time $t_0$ if and only if it is linearly independent at every time $t$ is an immediate consequence of the uniqueness theorem for ODEs, without any reference to the Wronskian.

  • One can give a geometrical interpretation of $(*)$: for instance, if all the matrices $A(t)$ have trace $0$, then the (non-autonomous) flow associated with our differential equation is volume preserving. However, I’m not convinced that this serves as sufficient motivation to give the mapping $t \mapsto \det(x_1(t), \dots, x_d(t))$ its own name and to discuss it in some detail.

  • Maybe a word on the notion “good for” that occurs in the question: I’m pretty comfortable with studying and teaching mathematical objects just in order to better understand them, or for the sake of their intrinsic beauty. However, whenever we do so, this usually happens within a certain theoretical context – i.e., we build a theory, introduce terminology, and this terminology somehow contributes to the development (or to our understanding) of the theory.

    So my question could be rephrased as:

    “I’m looking either (i) for applications of the Wronskian of ODEs to concrete problems (within or without mathematics) or (ii) for ways in which the concept ‘Wronskian’ facilitates our understanding of the theory of ODEs (or of any other theory).”

  • The term ‘Wronskian’ also seems to be used with a more general meaning (see for instance this Wikipedia entry). However, I am specifically interested in the Wronskian for the solutions of a linear ODE.
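
As mentioned in the first remark above, here is a minimal numerical check of the solution formula $(*)$ (the particular $2\times 2$ matrix $A(t)$ is an arbitrary choice for illustration):

    import numpy as np
    from scipy.integrate import solve_ivp, quad

    # Illustrative 2x2 non-autonomous system x'(t) = A(t) x(t).
    def A(t):
        return np.array([[np.sin(t), 1.0], [0.5, np.cos(t)]])

    def rhs(t, y):
        X = y.reshape(2, 2)          # columns are the basis solutions x_1, x_2
        return (A(t) @ X).ravel()

    t0, t1 = 0.0, 3.0
    sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    X1 = sol.y[:, -1].reshape(2, 2)

    # (*): det(x_1(t1), x_2(t1)) = exp(int_{t0}^{t1} tr A(s) ds) * det at t0 (= 1 here).
    integral, _ = quad(lambda s: np.trace(A(s)), t0, t1)
    print(np.linalg.det(X1), np.exp(integral))   # the two numbers agree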

linear algebra – Matrix multiplication commutativity

We know that if $A$ is a 2×2 square matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ that commutes under multiplication with any 2×2 matrix, then $A$ is a scalar matrix.
To prove this I tried to rely on the definition of matrix multiplication, and I got four equations to solve: I computed the product of $A$ and $B=\begin{bmatrix}e&f\\g&h\end{bmatrix}$ in both orders.
So $ce + dg = ga + hc$ and $af + bh = eb + fd$,
and for the diagonal elements we get $bg = fc$ and $cf = gb$, i.e., the same condition from both diagonal positions. That’s all I could get.

I just need to prove that the diagonal elements of $A$ are equal and the rest are zeros, but I faced a dead end, because I kept going in circles.
Can you help?!
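
A standard shortcut here (a sketch; the SymPy check below just does the bookkeeping): it is enough to test commutation against the two elementary matrices $E_{11}$ and $E_{12}$, which already force $b = c = 0$ and $a = d$:

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    A = sp.Matrix([[a, b], [c, d]])

    # Commutation with E11 and E12 alone already pins A down.
    E11 = sp.Matrix([[1, 0], [0, 0]])
    E12 = sp.Matrix([[0, 1], [0, 0]])

    print(A*E11 - E11*A)   # Matrix([[0, -b], [c, 0]])      => b = c = 0
    print(A*E12 - E12*A)   # Matrix([[-c, a - d], [0, c]])  => a = d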

linear algebra – Orthonormal columns implies orthonormal rows

I find it non-intuitive that if I impose that all of a square matrix’s columns are normalized and mutually orthogonal, then all its rows are also normalized and mutually orthogonal. Is there any intuitive explanation for this? Also, if I relax the condition to the columns being only mutually orthogonal, without being normalized, is this still true? And why?
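
A quick numerical illustration of both points (a sketch; the size and scaling factors are arbitrary): $Q^T Q = I$ forces $QQ^T = I$ because a square matrix with a one-sided inverse has it as a two-sided inverse, while merely orthogonal columns can be rescaled so that the rows are no longer orthogonal:

    import numpy as np

    Q = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))[0]
    print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(Q @ Q.T, np.eye(3)))  # True True

    # Rescaling the columns keeps B^T B diagonal (orthogonal columns)
    # but makes B B^T non-diagonal (non-orthogonal rows) in general.
    B = Q @ np.diag([1.0, 2.0, 3.0])
    print(np.allclose(B.T @ B, np.diag(np.diag(B.T @ B))))   # True
    print(np.allclose(B @ B.T, np.diag(np.diag(B @ B.T))))   # False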

linear algebra – Finding an eigenvector for a specific eigenvalue of a symbol matrix

The relevant matrix is
$$M=\left( \begin{matrix} p^3+\frac{1}{\sqrt{3}}p^8 & p^1-ip^2 & p^4-ip^5 \\
p^1 +i p^2 & -p^3 + \frac{1}{\sqrt{3}}p^8 & p^6-ip^7\\
p^4+i p^5 & p^6+i p^7 & -\frac{2}{\sqrt{3}}p^8\end{matrix} \right)$$

    {{p3 + p8/Sqrt[3], p1 - I p2, p4 - I p5},
     {p1 + I p2, -p3 + p8/Sqrt[3], p6 - I p7},
     {p4 + I p5, p6 + I p7, -((2 p8)/Sqrt[3])}}

Using the Eigenvectors[] function returns only zeros and the error “unable to find all eigenvectors”. I have separately found the eigenvalues $\mu_i$ by solving the cubic characteristic polynomial; this was surely half of the difficulty of the problem. Finding the corresponding eigenvector should simply be a problem of Gaussian elimination on $(M-\mu_1 I)\vec{v}=0$.

My question is: what is the most streamlined way to now find the eigenvector corresponding to a specific eigenvalue, say $\mu_1$? (I haven’t included the explicit expressions for the $\mu_i$, as they are lengthy, and it seems to me the problem can be solved equally well while keeping them abstract.)
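
One shortcut that keeps everything abstract (a sketch, using SymPy for the bookkeeping): for an eigenvalue $\mu$ of an $n\times n$ matrix $M$, every nonzero column of the adjugate $\operatorname{adj}(M-\mu I)$ is an eigenvector, since $(M-\mu I)\operatorname{adj}(M-\mu I) = \det(M-\mu I)\, I = 0$. Illustrated on a $2\times2$ analogue of the matrix above (the $3\times3$ case works the same way):

    import sympy as sp

    p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
    M = sp.Matrix([[p3, p1 - sp.I*p2], [p1 + sp.I*p2, -p3]])
    mu = sp.sqrt(p1**2 + p2**2 + p3**2)       # an eigenvalue of this 2x2 example

    # Each nonzero column of adj(M - mu*I) is annihilated by (M - mu*I).
    v = (M - mu*sp.eye(2)).adjugate()[:, 0]   # candidate eigenvector
    print(sp.simplify(M*v - mu*v))            # zero vector, so M v = mu v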

real analysis – Special property of the solution of a linear system

Consider the following symmetric matrix:
$$A=(a_{ij})_{1\leq i,j\leq n}$$
such that $a_{ij}=a_{ji}$, $a_{ii}=0$, and $a_{ij}=0$ for $|i-j|\geq k$, where $k\geq3$. We also have $1\leq a_{ij}\leq2$ for $0<|i-j|<k$. Consider the solution of the following linear system:
$$\begin{cases}
(\sum_{j\neq1}a_{1j})x_1-\sum_{j\neq1}a_{1j}x_j=1\\
(\sum_{j\neq2}a_{2j})x_2-\sum_{j\neq2}a_{2j}x_j=-1\\
(\sum_{j\neq3}a_{3j})x_3-\sum_{j\neq3}a_{3j}x_j=0\\
(\sum_{j\neq4}a_{4j})x_4-\sum_{j\neq4}a_{4j}x_j=0\\
\vdots\\
(\sum_{j\neq n}a_{nj})x_n-\sum_{j\neq n}a_{nj}x_j=0
\end{cases}$$

I conjecture that there exists $C>0$, independent of $n$, such that
$$\sum_{i,j}a_{ij}|x_i-x_j|\leq C.$$
The physical meaning of the conjecture is that, if we flow 1 unit of current from node 1 to node 2, then the sum of the currents over the edges of the given electrical network is bounded.
Simulation results indicate that this is indeed the case (a minimal version of the simulation is given below). However, I can so far only prove a bound that involves $n$.
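
For reference, a minimal version of that simulation (the instance size, the random band entries, and the choice to ground node $n$ are all arbitrary):

    import numpy as np

    # Random banded symmetric A with the stated constraints.
    rng = np.random.default_rng(0)
    n, k = 200, 3
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, min(i + k, n)):
            A[i, j] = A[j, i] = rng.uniform(1.0, 2.0)  # 1 <= a_ij <= 2 for 0 < |i-j| < k

    # The system above is L x = e_1 - e_2 with L the graph Laplacian of A.
    L = np.diag(A.sum(axis=1)) - A
    b = np.zeros(n); b[0], b[1] = 1.0, -1.0
    L[-1, :] = 0.0; L[-1, -1] = 1.0                    # ground node n (fix x_n = 0)
    b[-1] = 0.0
    x = np.linalg.solve(L, b)

    print(np.sum(A * np.abs(x[:, None] - x[None, :])))  # conjecturally O(1) in n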

This is related to my previous question:
Voltage potential difference

linear algebra – Product of matrices has real eigenvalues?

Let $A$ be a (symmetric) positive definite matrix and $\hat{n}$ an arbitrary unit vector. Consider arbitrary positive integers $b,c,d$ (we may assume $b\neq d$). I would like to know whether the following matrix has real eigenvalues. (I would like the more general answer, for products of more than three such factors, but even for three I don’t know.)

$$(I - \hat{n}\hat{n}^T)\cdot A^b\cdot(I - \hat{n}\hat{n}^T)\cdot A^c\cdot (I - \hat{n}\hat{n}^T)\cdot A^d\cdot (I-\hat{n}\hat{n}^T).$$
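
A quick random-sampling probe of the claim (not a proof; the dimension, exponents, and sample count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    m, b, c, d = 5, 1, 2, 3
    mp = np.linalg.matrix_power
    for _ in range(100):
        G = rng.standard_normal((m, m))
        A = G @ G.T + m * np.eye(m)           # symmetric positive definite
        nv = rng.standard_normal(m); nv /= np.linalg.norm(nv)
        P = np.eye(m) - np.outer(nv, nv)      # orthogonal projector onto n-perp
        T = P @ mp(A, b) @ P @ mp(A, c) @ P @ mp(A, d) @ P
        ev = np.linalg.eigvals(T)
        if np.max(np.abs(ev.imag)) > 1e-8 * np.max(np.abs(ev)):
            print("complex eigenvalues found:", ev); break
    else:
        print("all sampled spectra were numerically real")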

Thanks

real analysis – Extending a bounded linear functional on $C[-1,1]$ to $L^\infty[-1,1]$

Let $X=L^\infty(-1,1)$, let $Y=C(-1,1)$, and let $\delta$ be the bounded linear functional on $Y$ defined by $\delta(f)=f(0)$. I would like to prove that $\delta$ can be extended to a functional on $X$ with the same norm. Here’s my idea. First, one can show that the norm of $\delta$ is $1$. Next, we consider the sublinear functional $p(x)=\|x\|_{L^\infty}$ and find that $\delta\leq p$ on $Y$. Now use the Hahn–Banach theorem to extend $\delta$ to a linear functional $D$ on $X$. But how can I show that $D$ is bounded with the same norm? Thank you.
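
For the record, the first assertion can be written out: for every $f \in Y$,
$$
|\delta(f)| = |f(0)| \leq \sup_{t \in (-1,1)} |f(t)| = \|f\|_{L^\infty},
$$
with equality for the constant function $f \equiv 1$, so $\|\delta\| = 1$.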

linear algebra – Observable nearly commuting with a complete set of commuting observables

Consider the Hilbert space $H = E^{\otimes n}$ where $E=\mathbb{C}^2$.

On $E$ we have an observable $O$ (i.e. a Hermitian matrix) that is diagonalizable in the standard basis with eigenvalues $1$ and $-1$. By tensoring $O$ with the identity on $E^{\otimes (n-1)}$, and doing so for each of the $n$ possible positions of the factor $E$, we get a complete set of commuting observables $O_i$, $i=1,\dots,n$.

Now if I have an observable $M$ on $H$ such that the operator norm of the commutator $[M,O_i]$ is at most $1$ for all $i$, how far from diagonal can $M$ be in the operator norm? What are explicit examples that are far from diagonal?
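
To make the setup concrete, here is a small numerical sketch (the value $n = 3$ and the random Hermitian $M$ are arbitrary choices for illustration):

    import numpy as np

    # O_i acts as O on the i-th tensor factor and as the identity elsewhere;
    # we then measure the commutator norms ||[M, O_i]|| for a candidate M.
    n = 3
    O = np.diag([1.0, -1.0])                 # observable on E = C^2

    def O_i(i):
        out = np.array([[1.0]])
        for pos in range(n):
            out = np.kron(out, O if pos == i else np.eye(2))
        return out

    rng = np.random.default_rng(0)
    G = rng.standard_normal((2**n, 2**n)) + 1j * rng.standard_normal((2**n, 2**n))
    M = (G + G.conj().T) / 2                 # a random Hermitian observable on H

    for i in range(n):
        C = M @ O_i(i) - O_i(i) @ M
        print(i, np.linalg.norm(C, 2))       # operator (spectral) norm of [M, O_i]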

Any pointers or relevant remarks on related questions are welcome.

Integer Linear Program as a feasibility test

I am a beginner with Integer Linear Programs and I have a question about a problem I am dealing with. The problem tracks a configuration of a graph through unitary transformations of the graph, and I want to minimize the number of transformations needed to reach another configuration. As I allow exactly one transformation per step, minimizing the number of transformations is the same as minimizing the number of steps.

But I run into the following problem: there is no internal property that I can track to check whether one state is closer to or farther from the target configuration. That means I can only check whether a specific sequence of transformations is correct for a number of steps $T$ that is fixed before execution. So what I am thinking of doing is testing a range of values for $T$ in increasing order, since there is a polynomial upper bound on this value (a bound on the number of steps). I then recover the answer for the first $T$ that yields any answer, as I know it will be an optimal one.

My questions are:

  • This is essentially a feasibility test for a fixed $T$: if the polytope is non-empty, any answer is an optimal answer, since all answers use the same number of steps $T$. Is this approach workable, in the sense that it can be computed given enough time? I am not sure how an ILP solver behaves when there is no feasible answer (i.e., no polytope); see the sketch after this list.
  • If so, is there an existing technique to deal with or optimize this type of situation without finding such a property?
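
As mentioned in the first bullet, here is a sketch of the loop I have in mind, using scipy.optimize.milp on a toy stand-in for the real encoding (“reach a total of 7 in exactly $T$ steps, each step contributing 1 or 2”, first feasible at $T=4$); the actual constraints would of course come from the graph problem:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    TARGET, T_MAX = 7, 10                  # T_MAX plays the role of the polynomial bound
    for T in range(1, T_MAX + 1):
        c = np.zeros(T)                    # pure feasibility: no objective to optimize
        eq = LinearConstraint(np.ones((1, T)), TARGET, TARGET)
        res = milp(c=c, constraints=[eq], integrality=np.ones(T), bounds=Bounds(1, 2))
        if res.success:                    # feasible: any feasible point is optimal
            print(f"optimal horizon T = {T}, steps = {res.x}")
            break
        # Otherwise res.status == 2: the solver has *proved* the polytope is
        # empty and returns in finite time, rather than running forever.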