reference request – Stone-Weierstrass theorem for modules of non-self-adjoint subalgebras

In “Weierstrass-Stone, the Theorem” by Joao Prolla, there is a Stone-Weierstrass theorem for modules, stated as follows:

Let $\mathcal{A}$ be a subalgebra of $C(X, \mathbb{R})$ and $(E, \|\cdot\|)$ be a normed space over $\mathbb{R}$. Let $W \subset C(X, E)$ be a vector subspace which is an $\mathcal{A}$-module. For each $f \in C(X, E)$ and $\epsilon > 0$, there exists $g \in W$ such that $\|f - g\| < \epsilon$ if and only if for each $x \in X$ there exists $g_x \in W$ such that $\|f(t) - g_x(t)\| < \epsilon$ for all $t \in (x)_{\mathcal{A}}$, where $(x)_{\mathcal{A}}$ is the equivalence class of $x$ under $\mathcal{A}$.
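
(To fix notation: as I read Prolla, $(x)_{\mathcal{A}}$ denotes the set of points that $\mathcal{A}$ does not separate from $x$, that is, $(x)_{\mathcal{A}} = \{t \in X : a(t) = a(x) \text{ for all } a \in \mathcal{A}\}$.)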

I know that the above theorem can be extended to $\mathcal{A} \subset C(X, \mathbb{C})$ with $\mathcal{A}$ being a self-adjoint subalgebra. I wonder whether there are some similar results for modules of non-self-adjoint algebras.

Namely, let $S \subset C(X, E)$ be any non-empty set and $\mathcal{A} \subset C(X, \mathbb{C})$ be a subalgebra (not necessarily self-adjoint). Then $W := \mathcal{A}S$ is a vector subspace which is an $\mathcal{A}$-module. Can we still claim that $f \in \overline{W}$ if and only if $f\big\vert_{(x)_{\mathcal{A}}} \in \overline{W}\big\vert_{(x)_{\mathcal{A}}}$? Or is there a counterexample to this statement?

differential equations – Tennis Racket theorem

The torque-free Euler equations, as in the experiment seen in the low gravity of a Russian spacecraft, are modelled here with a view to seeing the tumbling motion around the intermediate axis, the $\omega_2$ rotation. However, its reversal is not observed here. The initial conditions do play a role, but varying them did not do much to change the sine-like behaviour into the intermittent periodic flips.

Because it is easy to demonstrate here, I have posted this hopefully interesting problem, although strictly speaking it is a physics problem.

(* principal moments of inertia: I2 is the intermediate axis *)
{I1, I2, I3} = {8, 4, 0.4};
(* torque-free Euler equations, with initial angular velocities and angles *)
Dzhanibekov = {I1 TH1''[t] == (I2 - I3) TH2'[t] TH3'[t], 
   I2 TH2''[t] == (I3 - I1) TH3'[t] TH1'[t], 
   I3 TH3''[t] == (I1 - I2) TH1'[t] TH2'[t], TH1'[0] == -0.4, 
   TH2'[0] == 0.08, TH3'[0] == 0.65, TH1[0] == 0.75, TH2[0] == -0.85, 
   TH3[0] == 0.2};
sol = NDSolve[Dzhanibekov, {TH1, TH2, TH3}, {t, 0, 15.}];
{th1[u_], th2[u_], th3[u_]} = {TH1[u], TH2[u], TH3[u]} /. First[sol];
(* plot the three angular velocity components *)
Plot[Tooltip[{th1'[t], th2'[t], th3'[t]}], {t, 0, 15}, 
 GridLines -> Automatic]
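
As a sanity check on the integration (assuming the code above has run as posted), one can also plot the two conserved quantities of torque-free motion, the kinetic energy and the squared angular momentum; both curves should come out essentially flat:

(* sanity check: kinetic energy and squared angular momentum are conserved
   for torque-free motion, so both curves should be numerically flat *)
Plot[{I1 th1'[t]^2 + I2 th2'[t]^2 + I3 th3'[t]^2,
  (I1 th1'[t])^2 + (I2 th2'[t])^2 + (I3 th3'[t])^2}, {t, 0, 15},
 PlotRange -> All, GridLines -> Automatic]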

Please help choose better initial conditions for getting a jump around the $\theta_2$ axis. Thanks in advance.

Wing Nut Flips

Wiki Ref


fa.functional analysis – Trying to recover a proof of the spectral mapping theorem from old thesis/paper with continuous functional calculus

In my research group in functional analysis and operator theory (where we do physics and computer science as well), we saw in an old Russian combination paper/PhD thesis in our library a nice claim about a possible proof of the spectral mapping theorem. There are some nice results in this paper that I want to use and generalize for my own research, so let me try to accurately convey the context below.

They bring up the continuous functional calculus $\phi: C(\sigma(A)) \rightarrow L(H)$ for a bounded, self-adjoint operator $A$ on a Hilbert space $H$. This is an algebraic *-homomorphism from the continuous functions on the spectrum of $A$ to the bounded operators on $H$. The paper’s spectral mapping theorem basically says in this context that $$\sigma(\phi(f)) = f(\sigma(A)),$$ and the paper says something nice about this. It does not actually give a proof, but it says there is a nice way to prove it using both inclusions, with the inclusion $f(\sigma(A)) \subseteq \sigma(\phi(f))$ sketched in the following way: the author supposes $\lambda \in f(\sigma(A))$ and says “it is very obvious” that there exists a vector $h \in H$ with $\|h\|=1$ such that $\|(\phi(f)-\lambda)h\|$ is arbitrarily small, which shows $\lambda \in \sigma(\phi(f))$ and hence the desired inclusion.

The author says that it is “very obvious” to show this, but I am a bit stumped. The way I would construct the continuous functional calculus is to start with polynomials and then extend to $C(\sigma(A))$ using the Weierstrass approximation theorem on the compact set $\sigma(A) \subset \mathbb{R}$ and the BLT theorem. The inclusion $\sigma(\phi(f)) \subseteq f(\sigma(A))$ is, I think, quite obvious, but the other one in the above context has me stuck. Since I am already working on generalizing some results, I would really love to know how the author proves the inclusion by showing the mentioned vector exists. Maybe approximation is used in some way, but even though I suspect it is simple, I still do not see the author’s proposed proof. Can someone here please help me recover it? I thank all interested persons.
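
To state precisely what I am trying to recover: given $\lambda = f(\mu)$ with $\mu \in \sigma(A)$, the claim is that for every $\delta > 0$ there is a unit vector $h \in H$ with
$$\|(\phi(f) - \lambda)h\| < \delta,$$
in other words that $\lambda$ lies in the approximate point spectrum of $\phi(f)$, which in particular gives $\lambda \in \sigma(\phi(f))$.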

real analysis – Equations that may relate to the monotone convergence theorem?

I want to show the following equations, assuming that $X$ is a non-negative random variable.

$\lim_{n \to \infty} n\, E\left(\frac{1}{X} I_{\{X>n\}}\right) = 0$

$\lim_{n \to \infty} \frac{1}{n} E\left(\frac{1}{X} I_{\{X > \frac{1}{n}\}}\right) = 0$
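
(The only elementary observation I can make is the pointwise bound $n \cdot \frac{1}{X} I_{\{X>n\}} \leq I_{\{X>n\}}$, since $\frac{1}{X} < \frac{1}{n}$ on the event $\{X > n\}$, but I do not know whether this is the right direction.)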

I think they may be related to the monotone convergence theorem, but I have no clue how to approach them. Any hints?

dynamical systems – Applying the divergence theorem to find a trapping region

I want to check if my reasoning is correct. The problem is to show that the system

\begin{cases}
\dot x = x - y - x^3 \\
\dot y = x + y - y^3
\end{cases}

has a periodic solution.

In order to apply the Poincaré-Bendixson theorem, I need to find a trapping region.

I’ve already shown that the only fixed point is the origin. So, at this point my idea is to look for an annulus centered on the origin such that the flux of the vector field is positive through the smaller circle and negative through the larger one.

By integrating the divergence over a disk (say $C$) of radius $r$ I get, after a few computations,

\begin{gather*}
\iint_C (2-3(x^2+y^2))\,dx\,dy = 2\pi r^2\left(1-\frac{3}{4}r^2\right)
\end{gather*}

which is positive for $r<\frac{2}{\sqrt{3}}$ and negative for $r>\frac{2}{\sqrt{3}}$. Does this mean that I can take as a trapping region any annulus centered at the origin whose radii are respectively smaller and greater than $\frac{2}{\sqrt{3}}$? Is this kind of reasoning correct?
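
(For reference, by the divergence theorem the integral above equals the outward flux $\oint_{\partial C} (\dot x, \dot y) \cdot \mathbf{n}\, ds$ of the vector field through the circle of radius $r$, which is the sense in which I am using the word “flux” above.)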

On a variation of Hartogs' separate analyticity theorem

Let $f(z_1,z_2,\ldots,z_n)$ be a function on $\mathbf{C}^n$ such that for all $i$ (with the other variables held fixed), the map $z_i \mapsto f(z_1,z_2,\ldots,z_n)$ is a rational function. Then I would expect $(z_1,\ldots,z_n) \mapsto f(z_1,\ldots,z_n)$ to be rational as well. There should be an elementary proof of this fact somewhere in the literature...

Note that if we replace in the above statement the word "rational" by "holomorphic", then the result holds true (this is the well-known result due to Hartogs), and if we replace it by "meromorphic" it is again true (due to Sakai, 1957).

Are there elementary proofs of Hartogs' and Sakai's theorems that only use the basics covered in a first course in one complex variable?

complexity theory – On the usage of Arora and Barak’s main lemma in their proof of the PCP theorem

I am working toward understanding a proof of the PCP theorem using Arora and Barak’s textbook Computational Complexity. I believe I found a few (fixable) errors in Section 22.2, in the part titled “Proving Theorem 11.5 from Lemma 22.4”, but I am not sure I completely understand. As I stated two years ago, I still can’t find any errata list that is very comprehensive.

I will copy their proof here (page 462 in my book) and then post my questions afterwards. Things I add are in brackets.


Recall that for a $q_0$CSP-instance $\varphi$, we define $\operatorname{val}(\varphi)$ to be the maximum fraction of constraints of $\varphi$ that can be simultaneously satisfied.

Their proof:

Definition 22.3 Let $f$ be a function mapping CSP instances to CSP instances. We say that $f$ is a CL-reduction (short for complete linear-blowup reduction) if it is polynomial-time computable and, for every CSP instance $\varphi$, satisfies:

  • Completeness: If $\varphi$ is satisfiable then so is $f(\varphi)$
  • Linear blowup: If $m$ is the number of constraints in $\varphi$, then the new $q$CSP instance $f(\varphi)$ has at most $Cm$ constraints and alphabet $W$, where $C$ and $W$ can depend on the arity and the alphabet size of $\varphi$ (but not the number of constraints or variables).

Lemma 22.4 (PCP Main Lemma) There exist constants $q_0 \geq 3$, $\epsilon_0 > 0$, and a CL-reduction $f$ such that for every $q_0$CSP-instance $\varphi$ with binary alphabet, and every $\epsilon < \epsilon_0$, the instance $\psi = f(\varphi)$ is a $q_0$CSP (instance) (over (a) binary alphabet) satisfying
$$\operatorname{val}(\varphi) \leq 1 - \epsilon \implies \operatorname{val}(\psi) \leq 1 - 2\epsilon$$

Proving Theorem 11.5 from Lemma 22.4
Let $q_0 \geq 3$ (and $\epsilon_0 > 0$) be as stated in Lemma 22.4. As already observed, the decision problem $q_0$CSP is NP-hard. To prove the PCP Theorem we give a reduction from this problem to GAP $q_0$CSP. Let $\varphi$ be a $q_0$CSP instance. Let $m$ be the number of constraints in $\varphi$. If $\varphi$ is satisfiable, then $\operatorname{val}(\varphi) = 1$ and otherwise $\operatorname{val}(\varphi) \leq 1 - 1/m$. We use Lemma 22.4 to amplify this gap (assuming $1/m$ isn’t big enough). Specifically, apply the function $f$ obtained by Lemma 22.4 to $\varphi$ a total of $\log m$ times. We get an instance $\psi$ such that if $\varphi$ is satisfiable, then so is $\psi$, but if $\varphi$ is not satisfiable (and so $\operatorname{val}(\varphi) \leq 1 - 1/m$), then $\operatorname{val}(\psi) \leq 1 - \min\{2\epsilon_0, 1 - 2^{\log m}/m\} = 1 - 2\epsilon_0$. Note that the size of $\psi$ is at most $C^{\log m} m$, which is polynomial in $m$. Thus we have obtained a gap-preserving reduction from $L$ to the $(1-2\epsilon_0)$-GAP $q_0$CSP problem, and the PCP theorem is proved.


My questions:

First I will ask about what I think is an easy typo, and this question leads to my next question.

In the sentence beginning with “We get an instance $\psi\ldots$”, instead of
$$\operatorname{val}(\psi) \leq 1 - \min\{2\epsilon_0, 1 - 2^{\log m}/m\} = 1 - 2\epsilon_0$$
Don’t they instead mean
$$\operatorname{val}(\psi) \leq \min\{1 - 2\epsilon_0, 1 - 2^{\log m}/m\} = 1 - 2\epsilon_0 ?$$

I am assuming (and tried to confirm) that their logarithm is base 2.

Second, I don’t buy that $\operatorname{val}(\psi) \leq \min\{1 - 2\epsilon_0, 1 - 2^{\log m}/m\}$. In particular, they say “apply the function $f$ obtained by Lemma 22.4 to $\varphi$ a total of $\log m$ times”.

Shouldn’t they instead say, “apply the function $f$ obtained by Lemma 22.4 to $\varphi$ up to a total of $\log m$ times, until you get $\epsilon \geq \epsilon_0$”?

This is because applying Lemma 22.4 to $\varphi$ is only relevant if $\epsilon < \epsilon_0$.

Next, assuming the answer to my last question is “yes”, then what if after applying the function $f$ zero or more times, we get an $\epsilon$ with $\epsilon = 0.51\epsilon_0$? In that case, when we apply $f$ once more, we amplify the gap to $2\epsilon = 1.02\epsilon_0$. In this case, we’d have $\operatorname{val}(\psi) \leq 1 - 1.02\epsilon_0$, in which case the lemma is no longer relevant. So I ask the next question:

Doesn’t the previous paragraph suggest that we only get $\operatorname{val}(\psi) \leq 1 - \epsilon_0$?
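
(To spell out the arithmetic I have in mind: if the accumulated gap before the final application of $f$ is some $\epsilon$ with $\epsilon_0/2 \leq \epsilon < \epsilon_0$, then the lemma only guarantees $\operatorname{val}(\psi) \leq 1 - 2\epsilon$, and $2\epsilon$ can be as small as $\epsilon_0$, so $1 - \epsilon_0$ seems to be the best bound we can guarantee uniformly.)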

If this is the case, then I believe we can finish their proof by correcting their last sentence so that it says this: “Thus we have obtained a gap-preserving reduction from $L$ to the $(1-\epsilon_0)$-GAP $q_0$CSP problem, and the PCP theorem is proved.”

If my interpretation is correct, then although I think these are mistakes in their write-up, they don’t affect the validity of the proof; they were just definitely confusing.

first order logic – Does FOL extended with least-fixed points satisfy the Compactness Theorem?

I am aware that first-order logic (FOL) satisfies the compactness theorem. That is, if an FOL theory is unsatisfiable, then some finite subset of its axioms is unsatisfiable too.

Is it the case that FOL extended with least-fixed point (LFP) satisfies the compactness theorem too?
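
(For concreteness, by FOL extended with least fixed points I mean what is usually denoted FO(LFP), whose formulas include, for example, the transitive closure of an edge relation $E$:
$$[\mathrm{lfp}_{R,x,y}\, \big(E(x,y) \vee \exists z\, (E(x,z) \wedge R(z,y))\big)](u,v).)$$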