Commutative algebra – dimension of a finitely generated quotient module over a local ring

I have been working on the following question from dimension theory in commutative algebra.

Let $(A, \mathfrak{m})$ be a local ring and $M$ a finitely generated $A$-module.

Given $x_1, \dots, x_r \in \mathfrak{m}$, prove that $\dim\left(\frac{M}{(x_1, \dots, x_r)M}\right) \geq \dim(M) - r$, with equality if and only if $\{x_1, \dots, x_r\}$ is part of a system of parameters for $M$.

Now I can show that if $A$ is a $\mathbf{regular}$ local ring, then $\frac{A}{(x_1, \dots, x_r)A}$ is a regular local ring of dimension $\dim(A) - r$ if and only if $\{x_1, \dots, x_r\}$ is part of a system of parameters for $A$. But I do not know how to show this in the given generality, and I also cannot show the inequality. I could not find a proof, so I would be grateful for any help!
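For the inequality itself, here is a standard sketch (my reconstruction, not part of the original question), using $\dim M = \dim A/\operatorname{Ann}(M)$ and Krull's principal ideal theorem; the case $r = 1$ suffices by induction:

```latex
\operatorname{Supp}(M/xM) = \operatorname{Supp}(M) \cap V(x), \qquad \text{so} \qquad
\dim(M/xM) = \dim\bigl(A/(\operatorname{Ann}(M) + (x))\bigr)
           \geq \dim\bigl(A/\operatorname{Ann}(M)\bigr) - 1 = \dim(M) - 1
```

since, by Krull's theorem applied in $A/\operatorname{Ann}(M)$, cutting by one element drops the dimension by at most one; iterating gives $\dim\bigl(M/(x_1, \dots, x_r)M\bigr) \geq \dim(M) - r$.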

Homological Algebra – Bimodule resolutions

I asked this question on Mathematics Stack Exchange, but I have not received an answer yet, so I am asking it here.

Let $A$ be a finite-dimensional algebra, let $M$ be a left $A$-module, and let $N$ be a right $A$-module. Choose an injective resolution $E_1^*$ of $M$ in $A$-mod and an injective resolution $E_2^*$ of $N$ in mod-$A$. Is $E_1^* \otimes E_2^*$ then an injective resolution of $M \otimes N$ in the category of $A$-$A$-bimodules? I could show that the terms of $E_1^* \otimes E_2^*$ are injective objects in the category of $A$-$A$-bimodules, but how can one show that it is a resolution?

Does the same also hold for projective resolutions?

Thank you in advance!

Linear Algebra – Blockwise inversion of a matrix with rectangular blocks

Given a matrix in block form:

$$ W = \left( \begin{array}{cc}
A & B \\
C & D
\end{array} \right) $$

where $A, D$ are square matrices, we can write the inverse in the same block form:

$$ W^{-1} = \left( \begin{array}{cc}
A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\
-(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1}
\end{array} \right) $$

provided that $A$ and $D - CA^{-1}B$ are invertible.
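The formula above can be checked numerically. The following sketch verifies it with exact rational arithmetic on a small made-up example (the matrices are mine, not from the original post):

```python
# Sanity check of the 2x2 block-inversion (Schur complement) formula,
# using exact rational arithmetic so there is no floating-point error.
from fractions import Fraction as F

def mat(rows):
    return [[F(x) for x in row] for row in rows]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def neg(X):
    return [[-x for x in row] for row in X]

def inv2(X):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = mat([[2, 1], [1, 3]]);  B = mat([[1, 0], [0, 1]])
C = mat([[0, 1], [1, 0]]);  D = mat([[4, 1], [2, 5]])

Ai = inv2(A)
S  = add(D, neg(mul(C, mul(Ai, B))))   # Schur complement D - C A^{-1} B
Si = inv2(S)

top_left     = add(Ai, mul(Ai, mul(B, mul(Si, mul(C, Ai)))))
top_right    = neg(mul(Ai, mul(B, Si)))
bottom_left  = neg(mul(Si, mul(C, Ai)))
bottom_right = Si

# Assemble W and the claimed inverse as 4x4 matrices, then check W * Winv = I.
W    = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
Winv = [top_left[0] + top_right[0], top_left[1] + top_right[1],
        bottom_left[0] + bottom_right[0], bottom_left[1] + bottom_right[1]]

I4 = mul(W, Winv)
print(all(I4[i][j] == (1 if i == j else 0) for i in range(4) for j in range(4)))
```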

Is there a more general version of blockwise inversion that allows $A$ or $D$ to be rectangular?

linear algebra – Prove that $(\text{null}~T^*)^\perp \subseteq \text{range}~T$

Let $T \in L(V, W)$, where $L(V, W)$ denotes the set of linear maps from $V$ to $W$. Prove that $(\text{null}~T^*)^\perp \subseteq \text{range}~T$, where $T^*$ is the adjoint operator (not to be confused with the adjugate matrix) and $A^\perp$ denotes the orthogonal complement of $A$.

Attempt: Let $w_2 \in (\text{null}~T^*)^\perp$.

Our goal is to show that there exists $v \in V$ such that $Tv = w_2$.

We can write $W = \text{null}~T^* \oplus (\text{null}~T^*)^\perp$. So let $w = w_1 + w_2$, where $w_1 \in \text{null}~T^*$.

Then $T^*(w) = T^*(w_2) = v$ for some $v \in V$.

If we could somehow prove that $\langle Tv - w_2,\; Tv - w_2 \rangle = 0$, we would be done.

But expanding this inner product only seems to make things more complicated.

Any ideas on how to proceed?

Many thanks
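To build intuition, here is a concrete finite-dimensional illustration of the statement, with $V = \mathbb{R}^2$, $W = \mathbb{R}^3$, and $T^*$ realized as the transpose (the example map is mine, not from the original post):

```python
# T sends (v1, v2) to (v1, v2, 0); its adjoint (transpose) drops the last
# coordinate.  null(T*) is spanned by (0, 0, 1), so (null T*)^perp is the
# set of vectors with last coordinate 0 -- exactly the range of T.

def T(v):
    return (v[0], v[1], 0)

def T_star(w):          # transpose of T
    return (w[0], w[1])

# (0, 0, 1) spans null(T*): check that T* kills it
assert T_star((0, 0, 1)) == (0, 0)

# every w orthogonal to (0, 0, 1), i.e. with last coordinate 0, is hit by T
for w in [(1, 2, 0), (-3, 0, 0), (0, 5, 0)]:
    v = (w[0], w[1])
    assert T(v) == w    # w is in range(T)
print("ok")
```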

linear algebra – bounding the spectral gap of a simple symmetric matrix

I have a seemingly innocent linear algebra problem that I cannot solve, and I hope you can kindly give me some insight. Here is the description: let $\mathbf{a} = (a_1, a_2, \dots, a_d)^{T}$ be a positive probability vector, i.e. $\Vert \mathbf{a} \Vert_1 = 1$ and $a_i > 0$ for all $i$. Let the matrix $A$ be defined as follows: $$A = \textrm{diag}(\mathbf{a}) - \mathbf{a}\mathbf{a}^{T}$$ where $\textrm{diag}(\mathbf{a})$ denotes the diagonal matrix whose $i$th diagonal entry is $a_i$. It is easy to show that $\mathbf{1}_d$, the all-ones vector of dimension $d$, is an eigenvector of $A$ with eigenvalue $0$, and the Gershgorin circle theorem shows that the eigenvalues of $A$ are all greater than or equal to $0$. My question is:

What is the smallest nonzero eigenvalue of $A$?

I did the calculation for $d = 3$ and realized that there may not be a simple analytical formula for it, so a nice lower bound would be much appreciated.

Thank you very much!
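For $d = 2$ the answer is explicit: the nonzero eigenvalue is $2a_1a_2$ with eigenvector $(1, -1)$. The following sketch verifies this exactly for one made-up choice of $\mathbf{a}$ (the observation and the example are mine, not from the original post):

```python
# For d = 2, A = diag(a) - a a^T has eigenvalues 0 (eigenvector 1_2)
# and 2*a1*a2 (eigenvector (1, -1)).  Exact check with rationals.
from fractions import Fraction as F

a1, a2 = F(1, 4), F(3, 4)            # a positive probability vector
A = [[a1 - a1 * a1, -a1 * a2],
     [-a2 * a1,      a2 - a2 * a2]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

lam = 2 * a1 * a2
assert apply(A, [1, 1])  == [0, 0]          # eigenvalue 0
assert apply(A, [1, -1]) == [lam, -lam]     # eigenvalue 2*a1*a2
print(lam)    # 3/8
```

Note that $2a_1a_2 \to 0$ as $a_1 \to 0$, so any general lower bound will presumably have to depend on $\min_i a_i$.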

Modules – a DG algebra and its zeroth cohomology being derived equivalent

This is somewhat related to this question: Can an algebra be equivalent to its dg extension?

Suppose we have a DG algebra $A$ such that $H^0(A)$ is Noetherian (on the left and on the right) and $H^\bullet(A)$ is bounded. Suppose further that the bounded derived category of finitely generated $H^0(A)$-modules is equivalent to the category of bounded DG $A$-modules (with finitely generated cohomology). Can we conclude anything about the $H^i(A)$ for $i \neq 0$ (for example, that they must be zero)?

Abstract Algebra – Equivalent definitions of an order

Let $K$ be an algebraic number field and $L$ the ring of algebraic integers in $K$. Suppose an order $O$ is defined as a subring of $L$ whose field of fractions is $K$. I want to prove that $O$ is then a finite-index subgroup of the additive group of $L$.

The converse is clear, but I am having trouble proving that this definition is equivalent. Thanks a lot!
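For the direction in question, here is a sketch (my reconstruction, not part of the original post):

```latex
Let $n = [K : \mathbb{Q}]$, so the additive group of $L$ is free of rank $n$,
and the subgroup $O \le L$ is free of some rank $m \le n$; the index $[L : O]$
is finite if and only if $m = n$.  Now $O \otimes_{\mathbb{Z}} \mathbb{Q}$ is a
finite-dimensional $\mathbb{Q}$-subalgebra of $K$; being a domain of finite
dimension over a field, it is itself a field.  It contains $O$, hence contains
$\operatorname{Frac}(O) = K$, so $O \otimes_{\mathbb{Z}} \mathbb{Q} = K$ and
$m = \dim_{\mathbb{Q}} K = n$.
```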

linear algebra – (It is very important) a linear transformation with $T^2 = I$

Let $T \in L(V)$. The composition of $T$ with itself is denoted $T^2$, i.e. $T^2 = T \circ T$, and $I$ denotes the identity operator, $I(v) = v$ for all $v \in V$.

Suppose that $T^2 = I$ and $T \neq \pm I$.

Prove that there exists $v \in V \setminus \{0\}$ such that $T(v) = v$.

Here is what I did:

Since $T^2(v) = v$, we see that $(T - I)(T + I)(v) = 0$ for every $v \in V$.

Suppose that $T(v) \neq v$ for every $v \in V \setminus \{0\}$. We want to show that this is impossible.

Note that $(T - I)(T + I)(v) = 0$ for all $v \in V$, so $T + I$ sends $V$ into the subspace of $V$ consisting of the elements $x$ of $V$ such that $(T - I)(x) = 0$, i.e.
$T + I : V \to \ker(T - I) := \{x \in V \mid (T - I)x = 0\}$.
Under the assumption, the subspace on the right-hand side is $\{x \in V \mid (T - I)x = 0\} = \{0\}$.

Consequently, $(T + I)(v) = 0$ for all $v \in V$. This means that $T = -I$, which is a contradiction. Therefore $T(v) = v$ for some $v \in V \setminus \{0\}$.

But I am unsure about the last step. Why can I assert the statement

Under the assumption, the subspace $\{x \in V \mid (T - I)x = 0\} = \{0\}$.

Why is $\{0\}$ the only such $x$?
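A small concrete instance of the statement may help: the coordinate swap on $\mathbb{R}^2$ satisfies $T^2 = I$ and $T \neq \pm I$, and it fixes the nonzero vector $(1, 1)$ (the example matrix is mine, not from the original post):

```python
# T swaps the two coordinates of R^2; it is an involution different from
# +/- I, and (1, 1) is a nonzero fixed vector.
T = [[0, 1],
     [1, 0]]
I = [[1, 0],
     [0, 1]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def compose(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert compose(T, T) == I                     # T^2 = I
assert T != I and T != [[-1, 0], [0, -1]]     # T != +/- I
assert apply(T, [1, 1]) == [1, 1]             # T fixes (1, 1) != 0
print("ok")
```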

Abstract Algebra – If $X$ is a non-empty subset of $R$, show that $C(X) = \{c \in R \mid cx = xc \text{ for all } x \in X\}$ is a subring of $R$

We have to check four conditions for this to be a subring.

$0 \in C(X)$ since $x \cdot 0 = 0 \cdot x$.

$1 \in C(X)$ since $x \cdot 1 = 1 \cdot x$.

Now we have to prove closure under the two operations. Let $c_1 \in C(X)$ and $c_2 \in C(X)$. Now we know that

\begin{align*}
c_1 &= x c_1 x^{-1} \\
c_2 &= x c_2 x^{-1}
\end{align*}

So we have to show that $c_1 + c_2 \in C(X)$. Now,

\begin{align*}
c_1 + c_2 &= x c_1 x^{-1} + x c_2 x^{-1} \\
&= x (c_1 x^{-1} + c_2 x^{-1}) \\
&= x (c_1 + c_2) x^{-1}
\end{align*}

Therefore, $c_1 + c_2$ is in $C(X)$. We also have to show that $c_1 c_2$ is in $C(X)$. Now,
\begin{align*}
c_1 c_2 &= x c_1 x^{-1} x c_2 x^{-1} \\
&= x c_1 c_2 x^{-1}
\end{align*}

Therefore, $c_1 c_2$ is in $C(X)$.
Since all four conditions for a subring are met, $C(X)$ is a subring of $R$.

This was my solution, but apparently it is wrong. How do we solve this problem?
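One likely gap in the argument: the identities $c = x c x^{-1}$ require $x$ to be invertible, which need not hold. Closure can instead be checked directly from $cx = xc$: $(c_1 + c_2)x = c_1x + c_2x = xc_1 + xc_2 = x(c_1 + c_2)$ and $(c_1c_2)x = c_1(c_2x) = c_1(xc_2) = (c_1x)c_2 = x(c_1c_2)$. Below is a sanity check of this inverse-free computation in $R = $ 2x2 integer matrices with a non-invertible $x$ (the example matrices are mine):

```python
# Centralizer closure checked without inverses: x is nilpotent, hence
# not invertible, yet c1 + c2 and c1 * c2 still commute with x.

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

x  = [[0, 1], [0, 0]]          # nilpotent, hence not invertible
c1 = [[2, 5], [0, 2]]          # both c1 and c2 commute with x
c2 = [[3, -1], [0, 3]]

assert mul(c1, x) == mul(x, c1)
assert mul(c2, x) == mul(x, c2)

s = add(c1, c2)
p = mul(c1, c2)
assert mul(s, x) == mul(x, s)  # C(X) closed under addition
assert mul(p, x) == mul(x, p)  # C(X) closed under multiplication
print("ok")
```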