reference request – Simple proof of formula related to asymptotics for eigenvalue problem for Laplacian

For the solution of
$$
\begin{cases}
\lambda u^\epsilon - \frac{\epsilon^2}{2} \Delta u^\epsilon = 0 & \text{in } \Omega \\
u^\epsilon = 1 & \text{on } \partial \Omega
\end{cases}
$$

Varadhan proved that
$$\lim_{\epsilon \to 0} -\epsilon \log u^\epsilon(x) = \sqrt{2\lambda}\, \mathrm{dist}(x, \partial \Omega)$$

Is it possible to give a simple and straightforward proof of this result? Maybe relying (only or mostly) on tools like the maximum principle or the Green function of the Laplacian?
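As a sanity check (not a proof), the claimed limit can be verified by hand in one dimension, where the equation is solvable in closed form. On $\Omega = (-R, R)$ the problem $\lambda u^\epsilon - \frac{\epsilon^2}{2}(u^\epsilon)'' = 0$, $u^\epsilon(\pm R) = 1$, has the even solution
$$u^\epsilon(x) = \frac{\cosh\left(\sqrt{2\lambda}\, x/\epsilon\right)}{\cosh\left(\sqrt{2\lambda}\, R/\epsilon\right)},$$
and since $\epsilon \log \cosh(a/\epsilon) \to |a|$ as $\epsilon \to 0$,
$$-\epsilon \log u^\epsilon(x) \to \sqrt{2\lambda}\,(R - |x|) = \sqrt{2\lambda}\, \mathrm{dist}(x, \partial \Omega).$$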

ordinary differential equations – ODE eigenvalue problem with unusual boundary conditions

I am given:

$$y'' + \lambda y = 0, \qquad y(0) = 0, \qquad (1 - \lambda)\, y(1) + y'(1) = 0$$

As usual, we are looking for nontrivial solutions.
It looks like a standard eigenvalue problem, and yet I am totally stuck.
The case $\lambda = 0$ is rather obvious: $A = B = 0$, so only the trivial solution. Not much fun.
But when I try $\lambda$ greater or smaller than zero, I get to this:

  1. $\lambda < 0$: writing $\lambda = -\omega^2$, the boundary condition reduces to
     $$B\left((1+\omega^2)\sinh\omega + \omega\cosh\omega\right) = 0.$$

  2. $\lambda > 0$: writing $\lambda = \omega^2$, the boundary condition reduces to
     $$B\left((1-\omega^2)\sin\omega + \omega\cos\omega\right) = 0.$$

The question states: find the nontrivial stationary paths, stating clearly the eigenfunctions $y$. In case 1 I can't see any nontrivial solutions, but in the second case I can't see any either, and I know there are solutions.
Any help would be highly appreciated.
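A sketch of where the nontrivial solutions hide, assuming $\omega > 0$: in case 1, the bracket $(1+\omega^2)\sinh\omega + \omega\cosh\omega$ is strictly positive, so $B = 0$ and there are no negative eigenvalues. In case 2, nontrivial solutions require
$$(1-\omega^2)\sin\omega + \omega\cos\omega = 0, \qquad \text{i.e.} \qquad \tan\omega = \frac{\omega}{\omega^2 - 1} \quad (\omega \neq 1,\ \cos\omega \neq 0).$$
Plotting both sides (or applying the intermediate value theorem on each branch of $\tan$) shows infinitely many positive roots $\omega_1 < \omega_2 < \cdots$, the first near $\omega \approx 1.2$; the eigenfunctions are $y_n(x) = \sin(\omega_n x)$ with eigenvalues $\lambda_n = \omega_n^2$.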

combinatorics – Largest eigenvalue of a simple graph

Let $\Gamma$ be a simple graph with $n$ vertices, $e$ edges and largest eigenvalue $\lambda_{\max}$.
Show that $\lambda_{\max} = \frac{2e}{n}$ iff $\Gamma$ is regular.

I’ve already shown the if part, which is kinda obvious.

For the converse, I have no idea how to prove it. I've already searched the internet and even here on Math StackExchange but found almost no clue, and the one clue I did find was not useful at all (I will put a link to it below). Can someone give me a really useful hint?

Do I need any deeper linear algebra that I haven't been taught yet?

I'm currently taking a master's-level course in combinatorics, and this is part of an exercise suggested by the professor after a class.

Largest eigenvalue of the adjacency matrix of a graph
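Not a hint toward the proof, but a quick numerical check of the statement may help build intuition. A minimal sketch in Python/NumPy (the two example graphs are arbitrary choices):

    import numpy as np

    def lambda_max_vs_average_degree(A):
        """Return (largest adjacency eigenvalue, 2e/n) for a simple graph."""
        n = A.shape[0]
        e = A.sum() / 2                       # number of edges
        lam_max = np.linalg.eigvalsh(A)[-1]   # A is symmetric, so eigvalsh applies
        return lam_max, 2 * e / n

    # 4-cycle (2-regular): the two quantities coincide (both equal 2)
    C4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
    print(lambda_max_vs_average_degree(C4))

    # Path on 4 vertices (not regular): lambda_max exceeds 2e/n (about 1.618 vs 1.5)
    P4 = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]])
    print(lambda_max_vs_average_degree(P4))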

The Eigenvalue Problem: Perturbation Theory

Let $\mathbf{K}$ be a square matrix and $\rho(\mathbf{K})$ its spectral radius. If $\mathbf{M} = \mathbf{K} + \delta \mathbf{A}$ for very small $\delta$, then I want to:

  1. Prove that
     $$\rho(\mathbf{M}) = \rho(\mathbf{K}) + \delta\, \langle \mathbf{u}, \mathbf{A} \mathbf{v} \rangle + O(\delta^{2}),$$
     where $\mathbf{u}$ and $\mathbf{v}$ are the left and right eigenvectors of $\mathbf{K}$ associated with the eigenvalue $\rho(\mathbf{K})$, normalized so that $\langle \mathbf{u}, \mathbf{v} \rangle = 1$.

  2. Additionally, using 1., can we prove that
     $$\rho(\mathbf{M}) = \rho(\mathbf{K}) + \delta\, \rho'(\mathbf{K}) + O(\delta^{2})?$$
     That is, can we show $\delta\, \rho'(\mathbf{K}) = \delta\, \langle \mathbf{u}, \mathbf{A} \mathbf{v} \rangle$?

Here $'$ denotes the derivative.

I would be very grateful if someone could show me a sketch of the proof. I would also be happy if you could point me to a related reference, so I can read and understand more.

Thank you very much for your cooperation!
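For what it's worth, formula 1. is easy to check numerically once $\mathbf{u}$ and $\mathbf{v}$ are normalized so that $\langle \mathbf{u}, \mathbf{v} \rangle = 1$. A minimal sketch in Python/NumPy (using a random entrywise-positive $\mathbf{K}$ so that, by Perron-Frobenius, the spectral radius is a simple eigenvalue):

    import numpy as np

    rng = np.random.default_rng(0)

    # Entrywise-positive K: its spectral radius is a simple (Perron) eigenvalue
    K = rng.random((5, 5)) + 0.1
    A = rng.standard_normal((5, 5))
    delta = 1e-6

    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))

    # Right eigenvector v of K and left eigenvector u (right eigenvector of K^T)
    # for the dominant eigenvalue, normalized so that <u, v> = 1
    evals, V = np.linalg.eig(K)
    v = V[:, np.argmax(evals.real)].real
    evalsT, U = np.linalg.eig(K.T)
    u = U[:, np.argmax(evalsT.real)].real
    u = u / (u @ v)

    finite_difference = (rho(K + delta * A) - rho(K)) / delta
    first_order_term = u @ A @ v
    print(finite_difference, first_order_term)   # agree up to O(delta)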

differential equations – Multiplicity in Laplace’s Eigenvalue Problem

I am computing the eigenvalues of the Laplacian on a unit square $\textbf{numerically}$.

Consider the eigenvalue problem on $\Omega = (0, 1)^2$ $$-Lu = \lambda u$$ where $$L = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}.$$

The Dirichlet boundary condition is $u = 0$ on $\partial \Omega$.

I have written the following to compute the first $100$ eigenvalues.

{ℒ, ℬ} = {-Laplacian[u[x, y], {x, y}],
   DirichletCondition[u[x, y] == 0, True]};

{vals, funs} =
  DEigensystem[{ℒ, ℬ},
   u[x, y], {x, 0, 1}, {y, 0, 1}, 100];

vals

Now I am interested in computing the multiplicity of the eigenvalues. I already know that Tally can certainly count the occurrences of eigenvalues in the list vals.

I know that $\textbf{analytically}$ the eigenvalues are $\lambda_{mn} = (m^2 + n^2)\pi^2$ where $m, n = 1, 2, 3, \cdots$. Moreover, calculating the multiplicity of a specific eigenvalue is the same as the number-theory problem of counting in how many ways $\frac{\lambda_{mn}}{\pi^2}$ can be written as $m^2 + n^2$. For example, consider the eigenvalue $5\pi^2$: its multiplicity is $2$ since $5 = 1^2 + 2^2 = 2^2 + 1^2$. Similarly, if we consider the eigenvalue $50\pi^2$, we can write $50 = m^2 + n^2$ in three ways, i.e. $50 = 1^2 + 7^2 = 7^2 + 1^2 = 5^2 + 5^2$, so the multiplicity of that eigenvalue is $3$. Therefore, computing the multiplicity (the number of occurrences in the list) of an eigenvalue $\textbf{analytically}$ is equivalent to the number-theory problem above.
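(For illustration only, and independent of Mathematica: the analytic counting described above is easy to script. A minimal sketch in Python, with the eigenvalue passed as $k = \lambda_{mn}/\pi^2$:)

    def multiplicity(k):
        """Number of ordered pairs (m, n) with m, n >= 1 and m^2 + n^2 == k."""
        r = int(k ** 0.5) + 1
        return sum(1 for m in range(1, r) for n in range(1, r) if m * m + n * n == k)

    print(multiplicity(5))    # 2, since 5  = 1^2 + 2^2 = 2^2 + 1^2
    print(multiplicity(50))   # 3, since 50 = 1^2 + 7^2 = 7^2 + 1^2 = 5^2 + 5^2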

But I want to compute the multiplicity without using the analytic solution, since most of the time analytic solutions are unavailable.

Also, the eigenvalue problem has infinitely many eigenvalues. So Tally[vals] in the following

    {ℒ, ℬ} = {-Laplacian[u[x, y], {x, y}],
     DirichletCondition[u[x, y] == 0, True]};

    {vals, funs} =
     DEigensystem[{ℒ, ℬ},
      u[x, y], {x, 0, 1}, {y, 0, 1}, 100];

    Tally[vals]

does not work properly. Since there are infinitely many eigenvalues and we cannot list all of them, I am looking for a way to compute the multiplicity of an eigenvalue (the number of its occurrences) without any cap on the "number of eigenvalues", so that it gives me the complete multiplicity of that particular eigenvalue.

But it may be that I did not phrase the question properly; if you can improve my question, that would be appreciated.

Thanks in advance.

differential equations – Use of “Tally” in eigenvalue problem

$\textbf{Eigenvalue problem on a unit square $\Omega = (0,1)^2$:}$

Consider the eigenvalue problem with the Dirichlet boundary condition, that is, $$-Lu = \lambda u$$ where $$L = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}.$$

The boundary condition is that $u = 0$ on $\partial \Omega$.

I am computing the $\textbf{eigenvalues}$ and $\textbf{eigenfunctions}$ numerically in Mathematica. Specifically, I am interested in the $\textbf{multiplicity}$ of a specific eigenvalue. I already know that Tally or Count can certainly count the occurrences of eigenvalues in the list vals.


Now consider the eigenvalue $50\pi^2$. The $\textbf{multiplicity}$ of $50\pi^2$ is $2$.

But if we add some more eigenvalues to the list (for example, I have made a list of $34$ eigenvalues), the $\textbf{multiplicity}$ of $50\pi^2$ is $3$.


We know that the problem has infinitely many eigenvalues, so we cannot list all of them. I am looking for code that computes the $\textbf{multiplicity}$ of a specific eigenvalue, i.e. counts the total number of its occurrences.

Thanks in advance.

pr.probability – Lower bound for smallest eigenvalue of random $k \times k$ matrix $C(W)$ defined by $C(W)_{i,j} := 2(w_i^\top w_j)^2 + \|w_i\|^2 \|w_j\|^2$

Let $k$ and $d$ be positive integers such that $d/k =: \lambda > 1$. Let $W$ be a $k \times d$ random matrix with rows $w_1, \ldots, w_k \in \mathbb{R}^d$ drawn iid from $N(0, (1/d)I_d)$, and define the $k \times k$ matrix $C(W)$ by setting $C(W)_{i,j} := 2(w_i^\top w_j)^2 + \|w_i\|^2 \|w_j\|^2$.

Question. Is there a good high-probability lower bound for the smallest eigenvalue of $C(W)$?

Related: https://math.stackexchange.com/q/4005530/168758

N.B. I’m familiar with standard RMT.
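Not an answer, but a quick Monte Carlo experiment may help calibrate what a "good" bound should look like. A minimal sketch in Python/NumPy (the values $k = 50$, $\lambda = 2$ and the number of trials are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)

    def smallest_eigenvalue_of_C(k, lam, trials=200):
        """Empirical mean/std of the smallest eigenvalue of C(W), with d = lam * k."""
        d = int(lam * k)
        mins = []
        for _ in range(trials):
            W = rng.standard_normal((k, d)) / np.sqrt(d)   # rows w_i ~ N(0, (1/d) I_d)
            G = W @ W.T                                    # Gram matrix, G_ij = w_i . w_j
            sq = np.diag(G)                                # squared norms |w_i|^2
            C = 2 * G**2 + np.outer(sq, sq)                # C_ij = 2 (w_i . w_j)^2 + |w_i|^2 |w_j|^2
            mins.append(np.linalg.eigvalsh(C).min())
        return np.mean(mins), np.std(mins)

    print(smallest_eigenvalue_of_C(k=50, lam=2.0))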

differential equations – How to solve this problem similar to eigenvalue problem but with sources?

I have two coupled differential equations with this structure:

$$f_1(r)\,\partial_r^2 h_{00}(r) + f_2(r)\,\partial_r h_{00}(r) + f_3(r)\,\partial_r h_{22}(r) + (\omega^2 + V_1(r))\,h_{00}(r) + (\omega^2 + V_2(r))\,h_{22}(r) = S_1(r)$$
$$g_1(r)\,\partial_r^2 h_{22}(r) + g_2(r)\,\partial_r h_{22}(r) + (\omega^2 + V_3(r))\,h_{22}(r) = S_2(r)$$
where all the functions $f_i(r)$, $g_i(r)$, $V_i(r)$ and $S_j(r)$ are known (and nonlinear). I have to solve for the parameter $\omega$ and for the functions $h_{00}(r)$ and $h_{22}(r)$. The boundary conditions are that the functions are bounded at the origin and fall off as $r^{-n}$ at infinity.

Without the sources, this is an eigenvalue problem, but I have sources: how do I treat this kind of problem?

For definiteness consider a system of the form:

r^-2 D[h00[r], {r, 2}] + r^-3 D[h00[r], r] + r D[h22[r], r] + (w^2 + r^-2) h00[r] + (w^2 + 2 r^-2) h22[r] == Exp[-r]
D[h22[r], {r, 2}] + r^-1 D[h22[r], r] + D[h00[r], r] + (w^2 + r^-3) h22[r] == r^2 Exp[-r]

EDIT: I have been experimenting with NDEigenvalues and similar functions: they seem to ignore the source term; I mean, the eigenvalues do not change with the source. Am I missing something?

Efficient eigenvalue computation for Hessian of neural networks

I train a neural network (one of the ResNet variations, $\approx 10^7$ parameters) on the CIFAR-10 dataset, and after each epoch I would like to find the smallest/largest eigenvalues of its Hessian. For that, I can use Hessian-vector products (i.e. $f(v) = Hv$, where $H$ is the Hessian corresponding to the batch I'm currently using; PyTorch has a built-in mechanism for that), so, for example, I can use the power method.

Question: do you know of an efficient algorithm for this task? To be precise, I mean that both eigenvalues can be computed with a reasonable multiplicative error within at most 10 minutes.

Note that I'm asking about an algorithm that you know to be efficient for this (or a similar) problem. I tried the power method, the accelerated power method, Oja's algorithm, a gradient-based algorithm, its accelerated version, and algorithms from https://arxiv.org/abs/1707.02670. All these experiments take a lot of time and so far I haven't had any success, no matter how much engineering I used. When eigenvalues are close to $0$ (e.g. of order $-\frac{1}{2}$), either convergence takes a lot of time or the results are unstable/unreliable.
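For concreteness, here is roughly the setup I have in mind for getting both extremes from Hessian-vector products with plain power iteration; a minimal sketch only (the HVP helper and the shift trick for the opposite end of the spectrum are my own assumptions, not a tested recipe):

    import torch

    def hvp(loss, params, v):
        """Hessian-vector product H v via double backward, flattened over all params."""
        grads = torch.autograd.grad(loss, params, create_graph=True)
        flat = torch.cat([g.reshape(-1) for g in grads])
        hv = torch.autograd.grad(flat @ v, params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv]).detach()

    def power_iteration(matvec, n, device, iters=100):
        """Rayleigh quotient after power iteration: eigenvalue of largest magnitude."""
        v = torch.randn(n, device=device)
        v /= v.norm()
        lam = 0.0
        for _ in range(iters):
            w = matvec(v)
            lam = torch.dot(v, w).item()
            v = w / (w.norm() + 1e-12)
        return lam

    def extreme_hessian_eigenvalues(loss, params, iters=100):
        """Approximate (smallest, largest) Hessian eigenvalues for a scalar loss."""
        n = sum(p.numel() for p in params)
        device = params[0].device
        # 1) eigenvalue of largest magnitude
        mu = power_iteration(lambda v: hvp(loss, params, v), n, device, iters)
        # 2) shift by mu: the dominant eigenvalue of (H - mu I) is the one farthest
        #    from mu, i.e. the opposite end of the spectrum
        nu = mu + power_iteration(lambda v: hvp(loss, params, v) - mu * v, n, device, iters)
        return min(mu, nu), max(mu, nu)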

Concentration of largest eigenvalue of quadratic

Suppose I have a fixed matrix $A \in \mathbb{R}^{a \times b}$ and a random matrix $B \in \mathbb{R}^{b \times c}$ with $c < b$, where $B'B = I_c$.

I am hoping to find a concentration inequality for the spectral norm of $AB$.

Clearly, we can write $B = Z(Z'Z)^{-1/2}$ where $Z$ has iid standard Gaussian entries, so we may consider the concentration of the largest eigenvalue of the matrix $AZ(Z'Z)^{-1}Z'A'$.

Is there a well-known result about such a random quantity? Or any good places where I could read about this problem?
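Not an answer, but a small simulation may give a feel for how tightly $\|AB\|$ concentrates over draws of $B$. A minimal sketch in Python/NumPy (the dimensions and the Gaussian choice of $A$ are arbitrary; QR is used because $B = Z(Z'Z)^{-1/2}$ differs from the Q factor of $Z$ only by a $c \times c$ orthogonal factor, which does not change $\|AB\|$):

    import numpy as np

    rng = np.random.default_rng(0)

    a, b, c = 30, 100, 20
    A = rng.standard_normal((a, b))          # fixed matrix (arbitrary choice here)

    def spectral_norm_AB(A, c, rng):
        Z = rng.standard_normal((A.shape[1], c))
        Q, _ = np.linalg.qr(Z)               # orthonormal columns spanning col(Z)
        return np.linalg.norm(A @ Q, 2)      # spectral norm of A B

    samples = np.array([spectral_norm_AB(A, c, rng) for _ in range(500)])
    print(samples.mean(), samples.std(), np.linalg.norm(A, 2))   # note ||AB|| <= ||A||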