How do I show that the eigenvalues of two square matrices of different dimensions are the same?

I have three matrices over a field $F$:

$A \in F^{a,a}$, $B \in F^{b,b}$, $C \in F^{a,b}$, where $a, b \in \mathbb{N}$, $a \geq b$, and $\operatorname{rank}(C) = b$. The following relation holds between them:

$A \cdot C = C \cdot B$

I want to show that $A$ and $B$ have the same eigenvalues. I have started with the following observation about their determinants:

$\det(AC) = \det(CB) \iff \det(A) = \det(B)$

However, I am not sure whether this is even relevant, or how to go on from there.
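As a sanity check (not a proof), here is a quick numerical experiment; the construction $A = C B C^+$ is just one hypothetical way to manufacture matrices satisfying $AC = CB$:

import numpy as np

rng = np.random.default_rng(0)
a, b = 4, 2

# Pick B and a full-column-rank C, then build an A with A C = C B:
# taking A = C B C^+ (C^+ = pseudoinverse) gives A C = C B C^+ C = C B,
# since C^+ C = I_b when rank(C) = b.
B = rng.standard_normal((b, b))
C = rng.standard_normal((a, b))   # full column rank with probability 1
A = C @ B @ np.linalg.pinv(C)

assert np.allclose(A @ C, C @ B)
print(np.sort_complex(np.linalg.eigvals(B)))   # eigenvalues of B ...
print(np.sort_complex(np.linalg.eigvals(A)))   # ... reappear among those of A

In this run the spectrum of $B$ reappears inside that of $A$ (padded with $a-b$ zeros), which suggests that for $a > b$ the statement to aim for is containment of spectra rather than equality.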

eigenvalues – How to find a graph with a specific order?

I am working on a problem in my research. I have a graph $G$ with $2n$ vertices. It has one connected component of order $2n-1$ and an isolated vertex. Let $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{2n}$ be the eigenvalues of $G$. I have some bounds for them.
$$2n-3\leq \lambda_1<2n-2,\\
0\leq \lambda_2\leq 1,\\
-1\leq \lambda_i\leq \frac{1}{2},\quad 3\leq i \leq n+1,\\
-3\leq \lambda_{n+2}\leq -1,\\
-3\leq \lambda_i\leq \frac{-3}{2},\quad n+3\leq i \leq 2n.$$

Also, the maximum degree of this graph is $2n-2$.
How can I find these graphs for $6 \leq n \leq 10$?
Thanks for your help.
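In case it is useful, a brute-force numerical search is one way to hunt for candidates. This is only a sketch (assuming numpy, scipy and networkx are available; the random sampling is a heuristic, not exhaustive):

import numpy as np
import networkx as nx

def satisfies_bounds(G, n):
    # Check the degree condition and the five eigenvalue bounds above.
    if max(d for _, d in G.degree()) != 2 * n - 2:
        return False
    lam = np.sort(np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray()))[::-1]
    return ((2 * n - 3 <= lam[0] < 2 * n - 2)
            and (0 <= lam[1] <= 1)
            and all(-1 <= lam[i] <= 0.5 for i in range(2, n + 1))
            and (-3 <= lam[n + 1] <= -1)
            and all(-3 <= lam[i] <= -1.5 for i in range(n + 2, 2 * n)))

def search(n, tries=100000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(tries):
        # Dense random graph on 2n-1 vertices, plus one isolated vertex.
        H = nx.gnp_random_graph(2 * n - 1, rng.uniform(0.7, 1.0),
                                seed=int(rng.integers(1 << 30)))
        if not nx.is_connected(H):
            continue
        H.add_node(2 * n - 1)   # the isolated vertex
        if satisfies_bounds(H, n):
            return H
    return None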

matrices – Eigenvalues of large size “identity” matrix

In the context of the AR(1) model, the following $n \times n$ matrix plays an important role:

$$
V(\rho) = \left( \rho^{|i-j|} \right)_{1 \leq i, j \leq n}, \qquad \rho \in (0, 1).
$$

I am interested in asymptotic properties of the following:

$$
\hat{I}_n := V(\rho)^{1/2} \, V(\hat{\rho})^{-1} \, V(\rho)^{1/2},
$$

where $\hat{\rho}$ is an estimator of $\rho$.
Intuitively, this matrix is close to the $n \times n$ identity matrix, but the problem is that the size $n$ grows, so we cannot simply write $\hat{I}_n \to I_n$.
Still, I believe that $\hat{I}_n$ is close to the identity matrix in some sense; for instance, all its eigenvalues go to one.

Let $0 \leq \lambda^{(n)}_1 \leq \cdots \leq \lambda^{(n)}_n$ be the eigenvalues of $\hat{I}_n$.
My conjectures are:

(1) for fixed $i$, $\lambda^{(n)}_i \overset{p}{\to} 1$;

(2) $\lambda^{(n)}_n \overset{p}{\to} 1$ and $\lambda^{(n)}_1 \overset{p}{\to} 1$;

(3) (hopefully) $\sqrt{n} (\lambda^{(n)}_n - 1) \overset{d}{\to} \text{some distribution}$ and $\sqrt{n} (\lambda^{(n)}_1 - 1) \overset{d}{\to} \text{some distribution}$.

Are these correct under some conditions? Thanks!
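A small simulation sketch for eyeballing (1) and (2); the choice of $\hat\rho$ below (lag-one sample autocorrelation of a simulated AR(1) path) is just one assumption, and scipy is used for the matrix square root:

import numpy as np
from scipy.linalg import sqrtm

def V(rho, n):
    # The AR(1) correlation matrix (rho^{|i-j|}).
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def extreme_eigs(rho, n, seed=0):
    rng = np.random.default_rng(seed)
    # Simulate an AR(1) path and estimate rho by the lag-1 autocorrelation.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    rho_hat = x[1:] @ x[:-1] / (x @ x)
    V_half = np.real(sqrtm(V(rho, n)))
    I_hat = V_half @ np.linalg.inv(V(rho_hat, n)) @ V_half
    lam = np.linalg.eigvalsh(I_hat)
    return lam[0], lam[-1]   # smallest and largest eigenvalue of I_hat

for n in (50, 200, 800):
    print(n, extreme_eigs(0.6, n))   # both should drift toward 1

For conjecture (3) one would repeat this over many seeds and look at the empirical distribution of $\sqrt{n}(\lambda^{(n)}_n - 1)$ and $\sqrt{n}(\lambda^{(n)}_1 - 1)$.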

Closed geodesics and eigenvalues in a non-regular graph

Let $\Gamma$ be a graph whose $n$ vertices each have degree $\leq D$, without the degree necessarily being constant. Say we have bounds of the form $\leq \gamma^{2k}$ on the number of closed geodesics of length $2k$ for every large $k$, for some $\gamma$. Can we bound the non-trivial eigenvalues of the adjacency matrix $A$ of $\Gamma$?

(If the degree were constant, this would be easy, via the Ihara zeta function and/or Hashimoto’s operator. When the degree is non-constant, the relation between the Ihara zeta function, on the one hand, and the eigenvalues of $A$, on the other, is less clean.)

If it helps, you can assume $\gamma$ is of size $O(\sqrt{D})$.
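For experimenting with the non-constant-degree case, here is a sketch of Hashimoto's non-backtracking edge operator for a general simple graph (a standard construction; the small irregular matrix at the end is an arbitrary example):

import numpy as np

def hashimoto(adj):
    # Non-backtracking (Hashimoto) operator B on directed edges:
    # B[(u,v),(v,w)] = 1 iff the edge (v,w) follows (u,v) and w != u.
    n = len(adj)
    edges = [(u, v) for u in range(n) for v in range(n) if adj[u][v]]
    m = len(edges)
    B = np.zeros((m, m))
    for i, (u, v) in enumerate(edges):
        for j, (x, w) in enumerate(edges):
            if x == v and w != u:
                B[i, j] = 1.0
    return B

# tr(B^k) counts closed non-backtracking walks of length k, so the
# spectral radius of B governs their exponential growth rate.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
B = hashimoto(A)
print(np.abs(np.linalg.eigvals(B)).max())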

fa.functional analysis – Lower-bounding the eigenvalues of a certain positive-semidefinite kernel matrix, as a function of the norm of the input matrix

Let $\phi : (-1,1) \to \mathbb{R}$ be a function such that

  • $\phi$ is $\mathcal{C}^\infty$ on $(-1,1)$.
  • $\phi$ is continuous at $\pm 1$.

For concreteness, and if it helps, in my specific problem I have $\phi(t) := t \cdot (\pi - \arccos(t)) + \sqrt{1-t^2}$.

Now, given a $k \times d$ matrix $U$ with linearly independent rows $u_1, \ldots, u_k$, consider the $k \times k$ positive-semidefinite matrix $C_U = (c_{i,j})$ defined by $c_{i,j} := K_{\phi}(u_i, u_j)$, where

$$
K_\phi(x,y) := \|x\| \|y\| \, \phi\!\left(\frac{x^\top y}{\|x\| \|y\|}\right).
$$

Question. How can the eigenvalues of $C_U$ be expressed in terms of $U$ and $\phi$?

I'm ultimately interested in lower-bounding $\lambda_{\min}(C_U)$ in terms of some norm of $U$ (e.g. the spectral norm or the Frobenius norm).
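For numerical exploration, here is a sketch that assembles $C_U$ for the specific $\phi$ above and computes $\lambda_{\min}$ (the random $U$ is a hypothetical stand-in):

import numpy as np

def phi(t):
    # The specific phi from the question: t(pi - arccos t) + sqrt(1 - t^2).
    t = np.clip(t, -1.0, 1.0)   # guard against rounding just outside [-1, 1]
    return t * (np.pi - np.arccos(t)) + np.sqrt(1.0 - t * t)

def C_of(U):
    # C_U with entries K_phi(u_i, u_j) = |u_i| |u_j| phi(cos of the angle).
    norms = np.linalg.norm(U, axis=1)
    cos = (U @ U.T) / np.outer(norms, norms)
    return np.outer(norms, norms) * phi(cos)

rng = np.random.default_rng(0)
k, d = 5, 8
U = rng.standard_normal((k, d))   # rows linearly independent a.s.
print(np.linalg.eigvalsh(C_of(U))[0])   # lambda_min(C_U)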


Let $X$ be the $(d-1)$-dimensional unit sphere in $\mathbb{R}^d$, equipped with its uniform measure $\sigma_{d-1}$, and consider the integral operator $T_\phi : L^{2}(X) \to L^2(X)$ defined by
$$
T_{\phi}(f) : x \mapsto \int K_{\phi}(x,y) f(y) \, d\sigma_{d-1}(y).
$$

It is easy to see that $T_\phi$ is a compact positive-definite operator.

Question. Can the eigenvalues of $C_U$ be expressed as a function of the (eigenvalues of the) operator $T_{\phi}$?

eigenvalues – How to make a cross-correlation between 2 Fisher matrices from a purely mathematical point of view?

Firstly, I want to give you as much information and precision as possible about my issue. If I can't manage to get the expected results, I will launch a bounty; maybe some experts, or simply people who have already faced a similar problem, will be able to help me.

1)

I have 2 known covariance matrices, $Cov_1$ and $Cov_2$, that I want to cross-correlate. (A covariance matrix is the inverse of a Fisher matrix.)

Here is my approach to cross-correlating the 2 covariance matrices (the resulting constraints are expected to be better than the constraints inferred from a "simple sum" (element by element) of the 2 Fisher matrices).

  • For this, I have performed a diagonalisation of each Fisher matrix, $F_1$ and $F_2$, associated with the covariance matrices $Cov_1$ and $Cov_2$.

  • So, I have 2 different linear combinations of random variables that are uncorrelated, i.e. related only by the eigenvalues ($1/\sigma_i^2$) of their respective combinations.

The eigenvalues from these diagonalisations are contained in the diagonal matrices $D_1$ and $D_2$.

2) I can't build a "global" Fisher matrix directly by summing the 2 diagonal matrices, since the linear combination of random variables is different for the 2 Fisher matrices.

The eigenvectors are represented by the matrices $P_1$ and $P_2$.

That's why I think I could build a "global" combination of the eigenvectors that respects, for each eigenvalue, the MLE (maximum likelihood estimator) relation:

$$\dfrac{1}{\sigma_{\hat{\tau}}^{2}} = \dfrac{1}{\sigma_1^2} + \dfrac{1}{\sigma_2^2}$$

because $\sigma_{\hat{\tau}}$ corresponds to the best estimator from the MLE method.

So, I thought a convenient linear combination of the eigenvector matrices $P_1$ and $P_2$ that could achieve this would be a new matrix $P$, each column of which represents a new global eigenvector:

$$P = aP_1 + bP_2$$

3) PROBLEM: But here too, I can't sum the eigenvalues in the form $D_1 + D_2$, since the new matrix $P = a P_1 + b P_2$ can't simultaneously have the eigenvalues $D_1$ and the eigenvalues $D_2$, can it?

I mean, I wonder how to build a new diagonal matrix $D'$ such that I could write:

$$P^{-1} \cdot F_{1} \cdot P + P^{-1} \cdot F_{2} \cdot P = D'$$

If $a$ and $b$ were scalars, I could, for example, start from the relations:

$$P^{-1} \cdot F_{1} \cdot P = a^2 D_1 \quad (1)$$

and $$P^{-1} \cdot F_{2} \cdot P = b^2 D_2 \quad (2)$$

with $(1)$ and $(2)$ reflecting the relation $$\operatorname{Var}(aX+bY) = a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y) + 2ab \operatorname{Cov}(X,Y) = a^2 \operatorname{Var}(X) + b^2 \operatorname{Var}(Y),$$ since we are in a new basis $P$ that respects $(1)$ and $(2)$, so the covariance term vanishes.

But the issue is that $a$ and $b$ seem to be matrices, not scalars, so I don't know how to proceed to compute $D'$.

4) CONCLUSION:

Is this approach of building a new basis $P = a \cdot P_1 + b \cdot P_2$ and $D' = a \cdot a \cdot D_1 + b \cdot b \cdot D_2$ correct, assuming $a$ and $b$ are matrices?

The key point is: if I can manage to build this new basis, I can return to the starting space (that of the individual parameters, no longer combinations of them) by simply computing:

$$F_{\text{cross}} = P \cdot D' \cdot P^{-1}$$ and estimating the constraints with the covariance matrix $C_{\text{cross}} = F_{\text{cross}}^{-1}$.

If my approach is correct, the main difficulty will be to determine the $a$ and $b$ parameters (which I think must be in matrix form, since with scalars there would be too many equations compared to the 2 unknowns).

Sorry that there is no code for the moment, but I wanted to set up the problem correctly before trying to implement anything.

Hoping I have been clear enough.

Any help/suggestion/hint/clue to solve this problem is welcome.
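For reference, a minimal numeric sketch of the baseline "simple sum" combination that the approach above is meant to improve on (the matrices $F_1$, $F_2$ are hypothetical examples):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric positive-definite Fisher matrices.
M1 = rng.standard_normal((3, 3)); F1 = M1 @ M1.T + 3 * np.eye(3)
M2 = rng.standard_normal((3, 3)); F2 = M2 @ M2.T + 3 * np.eye(3)

# Baseline: when both experiments constrain the SAME parameters, the
# combined Fisher matrix is the element-wise sum, and the combined
# covariance is its inverse.
F_sum = F1 + F2
Cov_sum = np.linalg.inv(F_sum)

# The sum is diagonalised in ONE common eigenbasis P (columns of P below),
# which in general differs from the individual bases P1 and P2.
D, P = np.linalg.eigh(F_sum)
print("combined 1/sigma_i^2 in the common basis:", D)
print("combined parameter variances:", np.diag(Cov_sum))

Any cross-correlation scheme would have to beat the constraints that come out of this baseline.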

matrix analysis – Growth of eigenvalues for certain sequences of matrices

Suppose we have an aperiodic matrix $A_t$ whose entries are either $0$ or positive integer powers of $t$; i.e. we could have
$$A_t =
\begin{pmatrix}
0 & t & t^2\\
t & t^2 & 0\\
t & 0 & t
\end{pmatrix}$$

for example.

Suppose $t > 0$ and let $\Lambda(t)$ denote the unique, real, simple maximal eigenvalue of $A_t$ guaranteed by the Perron-Frobenius theorem. If we consider the function
$$f(t) = \log \Lambda(e^t)$$
then it is possible to show, using a variational principle and perturbation theory, that $f(t)$ is increasing, convex, and analytic (this is non-trivial!), with first derivative uniformly bounded for $t \in \mathbb{R}$. In particular, the limits
$$\lim_{t\to\infty} \frac{f(t)}{t} = \alpha_1 \quad\text{and}\quad \lim_{t\to -\infty} \frac{f(t)}{t} = \alpha_2$$
both exist and are finite. My question is the following:
can we calculate the error term associated with these limits? That is, can we find $g(t)$ such that
$$f(t) = \alpha_1 t + O(g(t))$$
as $t \to \infty$, for example?

Any thoughts/insights would be greatly appreciated – thanks!
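For what it is worth, a numerical sketch for the example matrix above, watching $f(t)/t$ settle toward its limiting slopes (no claim about the error term, which is the actual question):

import numpy as np

def A_of(t):
    # The example matrix from the question: entries 0 or powers of t.
    return np.array([[0.0, t, t ** 2],
                     [t, t ** 2, 0.0],
                     [t, 0.0, t]])

def f(t):
    # f(t) = log of the Perron eigenvalue of A_{e^t}.
    lam = np.linalg.eigvals(A_of(np.exp(t)))
    return np.log(lam.real.max())   # the Perron root is real and maximal

for t in (2.0, 5.0, 10.0, 20.0):
    print(t, f(t) / t, f(-t) / -t)   # approaches alpha_1 and alpha_2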

eigenvalues – Sorting Eigensystem According to Complicated Rule

I have looked for an answer to this, but the near-duplicates I could find seemed slightly different.

I have a matrix $A$ which has eigenvalues in pairs $\lambda_1, -\lambda_1, \lambda_2, -\lambda_2, \dots$. I would like to sort the eigensystem such that the eigenvectors are in this order, with the eigenvalues having descending real parts. That is, I want to sort in descending order of the function $f = |\Re(\cdot)|$ and break ties by $g = \Re(\cdot)$.

What I was hoping for was something like:

f[z_] := Abs[Re[z]];
g[z_] := Re[z];
{eval, evec} = Transpose[SortBy[Transpose[Eigensystem[N[A]]], {f, g}]];

but this doesn't work. Replacing {f, g} with Abs@*Re does work, but not for the tiebreak (neither does {Abs@*Re, Re}).