linear algebra – What are the solutions $X$ of $X^T A X = A$? Given such an $X$, what does this imply for $Y = (I - X)(I + X)^{-1}$ (with $\det(I + X) \neq 0$)?

All matrices are square matrices with real entries. The goal is to use these properties to show that $$AY + Y^T A = 0.$$

What do the equations in the title reveal about the properties of $A$, $X$, and $Y$? Can $X$ simply be the identity matrix?
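A concrete instance may help fix ideas (this is my own example, not part of the original question): take $A = I$ and $X = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, so that $X^T A X = X^T X = I = A$. Then
$$Y = (I - X)(I + X)^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \cdot \frac{1}{2}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$
and indeed $AY + Y^T A = Y + Y^T = 0$. Note also that $X = I$ is allowed: it satisfies $X^T A X = A$ and $\det(I + X) = 2^n \neq 0$, and it gives $Y = 0$, for which the identity holds trivially.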

Thanks a lot!

Complexity Theory – Would $\Sigma_i^P \neq \Pi_i^P$ mean that the polynomial hierarchy cannot collapse to the $i$-th level?

If $\Sigma_i^P = \Pi_i^P$, then it follows that the polynomial hierarchy collapses to the $i$-th level.

What about the case $\Sigma_i^P \neq \Pi_i^P$?
For example, consider the case $NP \neq coNP$. As far as I know, this would mean that the polynomial hierarchy cannot collapse to the first level, because if $PH = NP$, then in particular $coNP \subseteq NP$, which does indeed mean $NP = coNP$. Can we extend this idea to prove the general case:
$\Sigma_i^P \neq \Pi_i^P$ implies that $PH$ cannot collapse to the $i$-th level?
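(Spelling out the implication used above, which is the standard symmetry argument: if $coNP \subseteq NP$, then for every $L \in NP$ the complement $\overline{L}$ lies in $coNP \subseteq NP$, hence $L \in coNP$; this gives $NP \subseteq coNP$ and therefore $NP = coNP$.)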

cv.complex variables – Why is $(1 - |Cu - D|^{-1})\operatorname{Im}(C(Cu - D))^2 = \operatorname{Re}(C(Cu - D))^2$ impossible for $C \neq 0$?

From the system of differential equations
$$
\begin{pmatrix} g & f \\ -fg & 1 + f^2 \\ -f & g \\ 1 + g^2 & -fg \end{pmatrix}
\begin{pmatrix} f'' \\ g'' \end{pmatrix}
=
\begin{pmatrix} 6f'g' \\ -3gf'^2 + 3ff'g' \\ -3f'^2 + 3g'^2 \\ 3gf'g' - 3fg'^2 \end{pmatrix}
$$

The first and third equations can be combined into
$$
\begin{pmatrix} f & -g \\ g & f \end{pmatrix}
\begin{pmatrix} f'' \\ g'' \end{pmatrix}
= 3 \begin{pmatrix} f'^2 - g'^2 \\ 2f'g' \end{pmatrix}
\\ \text{or} \\
(f + ig)(f'' + ig'') = 3(f' + ig')^2
$$

Integrating this once gives $$f' + ig' = C(f + ig)^3 \tag{*},$$ and integrating again gives
$$
(f + ig)^{-2} = D - Cu \implies f^2 + g^2 = |D - Cu|^{-1}.
$$
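(Writing $w = f + ig$ as shorthand, the two integrations can be spelled out as follows: $ww'' = 3(w')^2$ gives $\left(\frac{w'}{w^3}\right)' = \frac{w''w - 3(w')^2}{w^4} = 0$, hence $w' = Cw^3$, which is $(*)$; then $\left(w^{-2}\right)' = -2w^{-3}w' = -2C$, hence $w^{-2} = D - 2Cu$, where the factor $2$ has to be absorbed into the constant to match the $D - Cu$ written above.)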

Multiplying the 4th and 2nd equations by $f$ and $g$ respectively and adding them gives
$$
ff'' + gg'' = -3(f^2g'^2 - 2ff'gg' + f'^2g^2) = -3(fg' - f'g)^2 = -3\operatorname{Im}\bigl((f - ig)(f' + ig')\bigr)^2
$$

But also
$$
ff'' + gg'' = \operatorname{Re}\bigl((f - ig)(f'' + ig'')\bigr) = 3\,\frac{\operatorname{Re}\Bigl(\bigl((f - ig)(f' + ig')\bigr)^2\Bigr)}{f^2 + g^2}
$$

If we insert the first-order differential equation $(*)$ from above, we get the identity
$$
-\operatorname{Im}\bigl(C(f^2 + g^2)(f + ig)^2\bigr)^2 = \frac{\operatorname{Re}\Bigl(\bigl(C(f^2 + g^2)(f + ig)^2\bigr)^2\Bigr)}{f^2 + g^2}
\\ \iff \\
-(f^2 + g^2)\operatorname{Im}\bigl(C(f + ig)^2\bigr)^2 = \operatorname{Re}\bigl(C^2(f + ig)^4\bigr) = \operatorname{Re}\bigl(C(f + ig)^2\bigr)^2 - \operatorname{Im}\bigl(C(f + ig)^2\bigr)^2
\\ \iff \\
(1 - |Cu - D|^{-1})\operatorname{Im}\bigl(C(Cu - D)\bigr)^2 = \operatorname{Re}\bigl(C(Cu - D)\bigr)^2
$$

The solution says that the last identity should be impossible for $C \neq 0$. Can someone explain why this is impossible? It is clear if $C$ is real, but I think it can be complex. The functions $f$ and $g$ are real functions of the same real parameter.

turing machines – A condition on $\emptyset \neq S \subset RE$ under which $L_S \notin RE$

I am reading some lecture notes on computability theory. After stating and proving the theorem $\emptyset \in S \Rightarrow L_S = \{\langle M \rangle : L(M) \in S\} \notin RE$, the notes remark that $\emptyset \in S$ is not a necessary condition, i.e. $L_S \notin RE$ does not imply $\emptyset \in S$, giving the counterexample $L_{\{\Sigma^*\}} \notin RE$. However, they mention that there is a necessary and sufficient condition under which $L_S \notin RE$. I searched for this condition in Sipser's book but did not find it. I would be very happy to receive a reference for it.


Edit: Given the answer from @dkaeae, I would like to know what the stronger property is that can be derived from a non-trivial $S \subset RE$ in the case $L_S \notin RE$.

Field theory – Show that $\mathbb{Q}(\sqrt[n]{2}) \neq \mathbb{Q}(\sqrt[n]{3})$

I want to show that $\mathbb{Q}(\sqrt[n]{2}) \neq \mathbb{Q}(\sqrt[n]{3})$ for even $n$.

I was advised to use the following fact (which I have already proved):

If $L/\mathbb{Q}$ is a finite field extension, $A = L \cap \bar{\mathbb{Z}}$ (where $\bar{\mathbb{Z}}$ is the ring of algebraic integers, i.e. $A = O_L$), and $B \subseteq A$ is a subring with $\operatorname{Frac}(B) = L$ (where $\operatorname{Frac}(B)$ denotes the fraction field), then $n^2 \cdot \Delta_{A/\mathbb{Q}} = \Delta_{B/\mathbb{Q}}$ for some $n \in \mathbb{Z}_{>0}$, where $\Delta_{A/\mathbb{Q}}$ is the discriminant of $A/\mathbb{Q}$.
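To illustrate how I expect to use this fact, here is the case $n = 2$ (with the standard discriminant values taken for granted): if $\mathbb{Q}(\sqrt{2}) = \mathbb{Q}(\sqrt{3}) = L$, then $\mathbb{Z}[\sqrt{2}]$ and $\mathbb{Z}[\sqrt{3}]$ are both subrings of $A = O_L$ with fraction field $L$, so
$$8 = \Delta_{\mathbb{Z}[\sqrt{2}]/\mathbb{Q}} = m^2 \cdot \Delta_{A/\mathbb{Q}}, \qquad 12 = \Delta_{\mathbb{Z}[\sqrt{3}]/\mathbb{Q}} = k^2 \cdot \Delta_{A/\mathbb{Q}}$$
for some $m, k \in \mathbb{Z}_{>0}$, which would force $8/12$ to be the square of a rational number, a contradiction.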

thank you in advance

Homology Cohomology – When is $\operatorname{Tor}_n^A(-, A/\operatorname{rad} A) \neq 0$? ($A$ a finite-dimensional $K$-algebra)

This question arises from a proof in the paper "Unbounded derived categories and the finitistic dimension conjecture" by Jeremy Rickard, more precisely Theorem 4.3.
The question is:

Let $A$ be a finite-dimensional algebra over a field $K$, let $M$ be a right $A$-module with projective dimension $d$, and let $P^{\bullet}$ be the minimal projective resolution of $M$, considered as a complex. Then
$$\operatorname{Tor}_d^A(M, A/\operatorname{rad} A) \neq 0.$$
That is, $P^{\bullet}[-d] \otimes_A (A/\operatorname{rad} A)$ has non-zero cohomology in degree zero. ($P^{\bullet}[-d]$ is the complex shifted $d$ places to the right.)
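(For orientation, and only as my own guess at the intended argument: minimality should mean that every differential of $P^{\bullet}$ has image inside $(\operatorname{rad} A)P$, so all differentials of $P^{\bullet} \otimes_A A/\operatorname{rad} A$ vanish, and then
$$\operatorname{Tor}_d^A(M, A/\operatorname{rad} A) \cong P_d \otimes_A A/\operatorname{rad} A = P_d/(\operatorname{rad} A)P_d \neq 0$$
by Nakayama, since $P_d \neq 0$ when the projective dimension is exactly $d$. Whether this is how the paper argues, I am not sure.)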

I am thankful for any help.

Linear Algebra – Exercise with $GL(V)$, showing that the group is not commutative for $\dim V \geq 2$ and $F \neq \{0, 1\}$

In my lecture notes we introduced the general linear group. We already know that $L(V, V) = \{f : V \rightarrow V \mid f \text{ is linear}\}$ is a ring with unity and therefore $GL(V) = \{x \in L(V, V) \mid x \text{ is invertible}\}$ is a group. I do not understand the proof of the above statement in the notes:

(image: the proof from the lecture notes)

I do not understand why $A_2^{-1} = A_2$, since $A_2(A_2(v_1)) = A_2(-v_1)$.

What I do not understand here is the definition of $A_2$: we have $v_1 \neq -v_1$ and therefore $A_2(-v_1) = -v_1$. On the other hand, a linear map is uniquely determined by its values on the basis vectors, and since $A_2$ has to be linear, this means $A_2(-v_1) = -(A_2(v_1)) = v_1$.

But then we would have $v_1 = -v_1$, which is not the case, for example for $V = \mathbb{R}^2$.

Can someone tell me where I am wrong?
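(Independently of the script's construction, a concrete non-commuting pair in dimension $2$ is
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
with both factors invertible since their determinant is $1$; this is just a sanity check of the statement being proved, not the argument from the script.)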

linear algebra – Show $\|A\|_2 = \sup_{x \neq 0} \frac{x^T A x}{x^T x}$ where $A$ is symmetric and positive definite

Problem

Show:
$$\|A\|_2 = \sup_{0 \neq x \in \mathbb{R}^n} \frac{x^T A x}{x^T x}$$
where $A$ is symmetric and positive definite.


Attempt

Since

$$
\begin{align}
\|A\|_2 &= \sup_{0 \neq x \in \mathbb{R}^n} \frac{\|Ax\|_2}{\|x\|_2} \\
&= \sup_{0 \neq x \in \mathbb{R}^n} \frac{x^T A^T A x}{x^T x}
\end{align}
$$

I think the problem boils down to showing

$$
\sup_{x \neq 0} x^T A x = \sup_{x \neq 0} x^T A^T A x
$$

which is where I am stuck.
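(A small concrete case, added just to locate where my attempt might go wrong: for $A = \operatorname{diag}(3, 1)$ we have $\|A\|_2 = 3$ and $\sup_{x \neq 0} \frac{x^T A x}{x^T x} = 3$, attained at $x = e_1$, whereas $\sup_{x \neq 0} \frac{x^T A^T A x}{x^T x} = 9$. So the two suprema above are not equal in general, and it looks like a square root of $\frac{x^T A^T A x}{x^T x} = \frac{\|Ax\|_2^2}{\|x\|_2^2}$ has been dropped.)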

Any help is appreciated.

formal languages – Construct a decidable set $B$ such that $B \neq A_w$ for every $w \in \Sigma^\star$

I have been stuck on this problem for some time. Any reference would be appreciated!

Let $A \subseteq \Sigma^\star$ be decidable. Given $w \in \Sigma^\star$, define $$A_w = \{x \in \Sigma^\star : \langle x, w \rangle \in A\}.$$
Construct a decidable set $B$ so that $B \neq A_w$ for each $w \in \Sigma^\star$.
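One candidate that seems natural to me (I have not verified that it is the intended construction) is the diagonal set
$$B = \{\, w \in \Sigma^\star : \langle w, w \rangle \notin A \,\},$$
which is decidable because $A$ is, and which differs from each $A_w$ at the string $w$ itself, since $w \in B \iff \langle w, w \rangle \notin A \iff w \notin A_w$.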