## algebraic geometry – Commutation of integral closure and group invariance

We work over $$\mathbb{C}$$. Let $$G$$ be a reductive group acting on a normal affine variety $$X$$: we have an induced $$G$$-action on the coordinate ring $$k[X]$$ given by
$$g\cdot p(x)=p(g^{-1}\cdot x).$$
Thanks to GIT, we know the good quotient $$Y$$ is affine and given by $$Y=\text{Spec}\, k[X]^G$$.

I would like to prove that $$Y$$ is still normal, that is, $$k[X]^G=k[Y]=\overline{k[Y]}=\overline{k[X]^G}$$ (where the closure is taken in the field $$k(X)$$); in other words, the operations of integral closure and taking $$G$$-invariants commute.

The inclusion $$k[X]^G\subset \overline{k[X]^G}$$ is obvious, so let us prove the reverse one. By normality, it suffices to show that

$$\overline{k[X]^G}\subset \overline{k[X]}^G.$$
I take $$f\in \overline{k[X]^G}$$, so that I can write

$$f^n(x)+a_{1}(x)f^{n-1}(x)+\ldots+a_{n-1}(x)f(x)+a_n(x)=0$$
with $$a_i\in k[X]^G=\overline{k[X]}^G$$. The $$G$$-invariance of the $$a_i$$ allows me to see that the above formula also holds in the form
$$f^n(g^{-1}\cdot x)+a_{1}(g^{-1}\cdot x)f^{n-1}(g^{-1}\cdot x)+\ldots+a_{n-1}(g^{-1}\cdot x)f(g^{-1}\cdot x)+a_n(g^{-1}\cdot x)=0$$
for any $$g\in G$$, but now I'm a bit stuck, as I don't know how to continue (I can subtract the two equations, but I don't see what this implies).
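One standard way to conclude (a sketch; note that it takes the closure inside the fraction field $$k(Y)=k(X)^G$$ rather than in all of $$k(X)$$): an element $$f\in k(X)^G$$ that is integral over $$k[X]^G$$ is in particular integral over $$k[X]$$, so the normality of $$X$$ gives

```latex
f \;\in\; \overline{k[X]} \cap k(X)^G \;=\; k[X] \cap k(X)^G \;=\; k[X]^G .
```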


## pr.probability – How to prove the coupling version of the Donsker’s Invariance Principle?

Donsker’s invariance principle:
Let $$X_1,X_2,\ldots$$ be i.i.d. real-valued random variables with mean 0 and variance 1. We define $$S_0=0$$ and $$S_n= X_1+ \ldots + X_n$$ for $$n \geq 1$$. To get a process in continuous time, we interpolate linearly and define for all $$t \geq 0$$
$$S_t = S_{\lfloor t\rfloor}+ (t-\lfloor t\rfloor)(S_{\lfloor t\rfloor+1}- S_{\lfloor t\rfloor}).$$
Then we define for all $$t \in [0,1]$$
$$S^*_n(t)= \frac{S_{nt}}{\sqrt{n}}.$$
Let $$C[0,1]$$ be the space of real-valued continuous functions defined on $$[0,1]$$, and endow this space with the supremum norm. Then $$(S^*_n(t))_{0 \leq t \leq 1}$$ can be seen as a random variable taking values in $$C[0,1]$$. Now let $$\mu_n$$ be its law on that space of continuous functions and let $$\mu$$ be the law of Brownian motion on $$C[0,1]$$. Then the following holds:

Theorem (Donsker): The probability measures $$\mu_n$$ converge weakly to $$\mu$$, i.e. for every $$F: C[0,1] \rightarrow \mathbb{R}$$ bounded and continuous,
$$\int F \, d\mu_n \rightarrow \int F \, d\mu$$
as $$n \rightarrow \infty$$.

But for the two-dimensional case, the 'coupling version' is as follows.

$$\textbf{'Coupling version'}$$: Fix a square $$S$$ of side length $$s$$, and fix $$x\in nS$$. Let $$X$$ be a random walk started from $$x$$ and run until it exits the square $$nS$$, and let $$B$$ be a Brownian motion run until it exits $$nS$$. For every $$\epsilon>0$$ there exists $$N>0$$ such that for all $$n\ge N$$, one can couple $$X$$ and $$B$$ so that
$$d(X,B)\le \epsilon n.$$

My questions:
(1) Can we extend the classical Donsker theorem to the two-dimensional case?

(2) Is there any reference for the proof of the ‘coupling version’?
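This is not an answer to (2), but the rescaling $$S^*_n(t)=S_{nt}/\sqrt{n}$$ in the classical statement is easy to set up numerically; here is a minimal one-dimensional sketch (all variable names are my own):

```python
import math
import random

random.seed(0)
n = 1000

# i.i.d. steps with mean 0 and variance 1 (a simple +/-1 walk)
steps = [random.choice([-1.0, 1.0]) for _ in range(n)]

# partial sums S_0, S_1, ..., S_n
S = [0.0]
for x in steps:
    S.append(S[-1] + x)

# rescaled process S*_n evaluated on the grid t = k/n, k = 0, ..., n
S_star = [s / math.sqrt(n) for s in S]
```

As $$n$$ grows, the sup-norm distance between the law of this path and that of a Brownian path shrinks, which is exactly what the coupling version quantifies.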


## ag.algebraic geometry – invariance by deformation of $H^1(d\log)$

Let $$X$$ be an algebraic variety. The canonical morphism of sheaves
$$d\log:{\cal O}_X^*\to \Omega_X$$ defines a map $$c:Pic(X)\to H^1(\Omega_X)$$.
Is this map invariant by deformation? (i.e. if $$({\cal L}_s)_{s\in S}$$ is a family of line bundles on $$X$$ parametrized by a smooth curve $$S$$, is $$c({\cal L}_s)$$ independent of $$s$$?)
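For reference, in Čech terms: if $${\cal L}$$ has transition functions $$g_{ij}\in {\cal O}_X^*(U_i\cap U_j)$$ on a cover $$(U_i)$$, the class in question is

```latex
c({\cal L}) \;=\; \Bigl[\, d\log g_{ij} \,\Bigr]
           \;=\; \Bigl[\, \tfrac{dg_{ij}}{g_{ij}} \,\Bigr]
           \;\in\; H^1(X,\Omega_X).
```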


## ag.algebraic geometry – An invariance property of rational singularities

Let $$X$$ be a normal variety over a field of characteristic zero with rational singularities.

If $$\pi:Y \to X$$ is a birational proper morphism with $$Y$$ also normal, then does $$Y$$ also have rational singularities?

It is easy to see that this is true if $$\dim(X) = 2$$, but the higher-dimensional case seems more difficult, and perhaps it is even false. If true, I would also be interested in analogous results in positive or mixed characteristic, e.g., for pseudo-rational singularities.


## linear algebra – rotational invariance estimator diagonalizable

An estimator $$\Xi(E)$$ of $$C$$ given $$E$$ is said to be rotationally invariant if and only if $$\Xi(OEO') = O\Xi(E)O'$$, where $$O$$ is any rotation matrix.

Question: why can $$\Xi(E)$$ be diagonalized in the same basis as $$E$$, up to a fixed rotation matrix $$O$$?
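A small numerical illustration (a sketch with a toy rotationally invariant estimator, linear shrinkage toward the identity; the function name `xi` and all parameter values are my own) of the consequence asked about: invariance forces $$\Xi(E)$$ to be diagonal in any eigenbasis of $$E$$.

```python
import numpy as np

def xi(E):
    # toy rotationally invariant estimator: linear shrinkage toward the identity
    return 0.5 * E + 0.5 * np.eye(E.shape[0])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
E = A @ A.T  # a symmetric "sample" matrix

# invariance check: xi(O E O') = O xi(E) O' for an orthogonal O
O, _ = np.linalg.qr(rng.standard_normal((4, 4)))
lhs = xi(O @ E @ O.T)
rhs = O @ xi(E) @ O.T

# consequence: xi(E) is diagonal in the eigenbasis V of E
w, V = np.linalg.eigh(E)
D = V.T @ xi(E) @ V
```

Here `np.allclose(lhs, rhs)` holds by construction, and `D` comes out diagonal because `xi` acts only on the eigenvalues of `E`.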


## Loop invariance insertion sort algorithm

I have the following pseudocode for an insertion sort algorithm

``````
INSERTION-SORT

1 for j = 2 to A.length
2    key = A[j]
3    // Insert A[j] into the sorted sequence A[1..j-1]
4    i = j - 1
5    while i > 0 and A[i] > key
6        A[i+1] = A[i]
7        i = i - 1
8    A[i+1] = key
``````

I am trying to convert it into executable code written in Python

``````
def main():
    A = [5, 2, 4, 6, 1, 3]
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
        print(A[0:j])  # LOOP INVARIANCE A[indexstart .. j - 1]
    return A

main()
``````

Is this a correct translation? I'm not sure if I messed something up with the indexes. Furthermore, I'm going to be testing correctness with initialization, maintenance and termination.

I added the line printing `A[0:j]` to show initialization and maintenance, but I'm not sure if it should be `A[0:j-1]`, because in my book it says `A[0:j-1]`.
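For comparison, here is a minimal self-contained Python 3 version of the same 0-indexed translation (the function name `insertion_sort` is my own), which can be checked directly on the example input:

```python
def insertion_sort(A):
    """Sort the list A in place and return it (0-indexed insertion sort)."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        # shift elements of the sorted prefix A[0:j] that exceed key to the right
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
        # loop invariant: A[0:j+1] is sorted at this point
    return A
```

On `[5, 2, 4, 6, 1, 3]` this returns `[1, 2, 3, 4, 5, 6]`, so the index handling (`range(1, len(A))` and `while i >= 0`) matches the 1-indexed pseudocode.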


## Rotational Invariance of Divergence of any vector field

I am trying to find out whether the divergence of a vector field is invariant under rotational transformations, but I cannot move forward from here.
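For reference, one way to carry out the computation: under a rotation $$x'_i = R_{ij}x_j$$ with $$R^T R = I$$, the components of a vector field transform as $$v'_i = R_{ik}v_k$$ and the derivatives as $$\partial/\partial x'_i = R_{ij}\,\partial/\partial x_j$$, so

```latex
\frac{\partial v'_i}{\partial x'_i}
  = R_{ij}\,\frac{\partial}{\partial x_j}\bigl(R_{ik} v_k\bigr)
  = R_{ij}R_{ik}\,\frac{\partial v_k}{\partial x_j}
  = \delta_{jk}\,\frac{\partial v_k}{\partial x_j}
  = \frac{\partial v_j}{\partial x_j},
```

i.e. $$\nabla'\cdot v' = \nabla\cdot v$$, using $$R_{ij}R_{ik} = (R^T R)_{jk} = \delta_{jk}$$.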


## Invariance of generalized eigenspace – Mathematics Stack Exchange

I have a lemma saying that each of the generalized eigenspaces of a linear operator $$T$$ is invariant under $$T$$. This means that if $$E_j$$ is a generalized eigenspace, then $$T:E_j \rightarrow E_j$$.

The proof of this goes like this.

Take a $$v\in E_j$$, so that $$(T-\lambda_j I)^{n_{j}}v=0$$.

Now we are going to show that the same holds for $$T(v)$$, i.e. that $$T(v)\in E_j$$.

$$(T-\lambda_j I)^{n_{j}}T(v)=(T-\lambda_j I)^{n_{j}-1}\,T(T-\lambda_j I)(v)=\ldots=T(T-\lambda_j I)^{n_{j}}(v)= T(0)=0$$.
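The only fact being used in the middle equalities is that $$T$$ commutes with $$(T-\lambda_j I)$$, and hence with all of its powers:

```latex
T(T-\lambda_j I) = T^2 - \lambda_j T = (T-\lambda_j I)T,
\qquad\text{so by induction}\qquad
(T-\lambda_j I)^{m}\,T = T\,(T-\lambda_j I)^{m}\quad\text{for all } m\ge 1.
```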

Can someone help me with the proof?


## Invariance of Chow groups of projective bundles under automorphisms of bundles

Suppose $$X$$ is a smooth scheme, $$E=O_X^{\oplus n}$$ and $$\varphi\in SL_n(E)$$, i.e. $$\varphi$$ has trivial determinant and is an isomorphism. Is the morphism
$$\mathbb{P}(\varphi)^*:CH^{\bullet}(\mathbb{P}(E))\longrightarrow CH^{\bullet}(\mathbb{P}(E))$$
equal to the identity map?
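Possibly relevant background: by the projective bundle formula, with $$\xi = c_1(O_{\mathbb{P}(E)}(1))$$,

```latex
CH^{\bullet}(\mathbb{P}(E)) \;=\; \bigoplus_{i=0}^{n-1} CH^{\bullet - i}(X)\cdot \xi^{\,i},
```

so $$\mathbb{P}(\varphi)^*$$ is the identity as soon as it fixes $$\xi$$ and acts trivially on classes pulled back from $$X$$.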


## ds.dynamical systems – Proving positive invariance

I need to prove that the set $$D$$ (a trapezium, shown in the picture) given by
$$D=\{(x,y):0\leq x\leq L_0,~0\leq y\leq X_0,~0\leq x+y \leq R_0\}\subseteq \mathbb{R}_+^2$$
is positively invariant for the system
$$\dot{x}=k_1(R_0-x-y)(L_0-x)-k_{-1}x,\qquad \dot{y}=k_2(R_0-x-y)(X_0-y)-k_{-2}y.$$
I thought of checking the direction of the vector field on the boundary of $$D$$, and I showed that the vector field points inwards everywhere except at the corner (denoted $$C$$), where $$\dot{x}=0,~\dot{y}<0$$ for $$x=0,~y=R_0$$. Here the vector field points downwards, tangential to $$D$$. To determine what happens at this point, I thought of checking the sign of the dot product between the vector field and an inward normal. I chose the normal so that it points inside $$D$$, taking $$\vec{n}_C=(1,-1)$$, so that the dot product is
$$(1)\cdot\dot{x}+(-1)\cdot\dot{y}.$$
By substituting $$x=0,~y=R_0$$, I got $$(1)\cdot\dot{x}+(-1)\cdot\dot{y}=k_{-2}R_0>0$$, which is positive. The positive value means that the vector field and the normal point in the same direction.

Is my approach fine? If it is correct, I would like to back my reasoning up with some literature or a theorem. Can someone tell me which theorem I can use to back up my reasoning? Is there another way to show that the vector field points inwards? I appreciate any help in advance.
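Not a substitute for a theorem, but the sign computation at the corner $$C=(0,R_0)$$ is easy to sanity-check numerically; here is a sketch with illustrative (hypothetical) positive parameter values, not taken from the original problem:

```python
# illustrative positive parameters (hypothetical values, for the sign check only)
k1, km1, k2, km2 = 1.0, 1.0, 1.0, 1.0
R0, L0, X0 = 2.0, 1.0, 1.0

def field(x, y):
    """Right-hand side (xdot, ydot) of the system."""
    xdot = k1 * (R0 - x - y) * (L0 - x) - km1 * x
    ydot = k2 * (R0 - x - y) * (X0 - y) - km2 * y
    return xdot, ydot

xdot, ydot = field(0.0, R0)        # evaluate at the corner C = (0, R0)
n_C = (1.0, -1.0)                  # the chosen inward-pointing normal at C
dot = n_C[0] * xdot + n_C[1] * ydot
```

With any positive parameters this gives `xdot == 0`, `ydot < 0` and a positive dot product with the inward normal, matching the computation above.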
