abstract algebra – $g^l \in \langle f_1, \dots, f_k \rangle$, if the ideal generated by $f_1, \dots, f_k, gy - 1$ in $\Bbb C[x_1, \dots, x_n, y]$ contains $1$

Let $f_1, \dots, f_k$ be polynomials in $\Bbb C[x_1, \dots, x_n]$. I want to show that for a polynomial $g \in \Bbb C[x_1, \dots, x_n]$, $g^l$ is contained in the ideal generated by $f_1, \dots, f_k$ for some $l$ if the ideal generated by $f_1, \dots, f_k, gy - 1$ in $\Bbb C[x_1, \dots, x_n, y]$ contains $1$.

Because of the term $gy - 1$, I thought this might be similar to the proof of Hilbert's Nullstellensatz, so I tried to imitate that proof, but I can't see how to proceed. Any hints? Thank you in advance.
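In case it helps, here is a sketch of the substitution step of the Rabinowitsch trick as I understand it (a hint, not a complete argument): if $$1 = \sum_{i=1}^{k} a_i(x, y)\, f_i + b(x, y)\,(gy - 1)$$ in $\Bbb C[x_1, \dots, x_n, y]$, substitute $y = 1/g$, working in $\Bbb C[x_1, \dots, x_n, 1/g]$, so that the last term vanishes: $$1 = \sum_{i=1}^{k} a_i\left(x, \tfrac{1}{g}\right) f_i.$$ Multiplying through by $g^l$, where $l$ is the largest power of $y$ appearing in the $a_i$, clears all denominators and gives $g^l \in \langle f_1, \dots, f_k \rangle$.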

linear algebra – matrix PSD inequality under addition

Given four matrices $A, \widetilde{A}, B, \widetilde{B} \in \mathbb{R}^{n \times d}$, if
$A^{\top} A \approx_{\epsilon} \widetilde{A}^{\top} \widetilde{A}$ and $B^{\top} B \approx_{\epsilon} \widetilde{B}^{\top} \widetilde{B}$, do we have
\begin{align*}
(A + B)^{\top} \cdot (A + B) \approx_{10\epsilon} (\widetilde{A} + \widetilde{B})^{\top} \cdot (\widetilde{A} + \widetilde{B})?
\end{align*}

For square matrices $C, \widetilde{C}$, we say $C \approx_{\epsilon} \widetilde{C}$ if
\begin{align*}
(1 - \epsilon)\, \widetilde{C} \preceq C \preceq (1 + \epsilon)\, \widetilde{C}.
\end{align*}
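Not a proof either way, but here is a quick numeric probe one could run (the perturbation scheme below is just one arbitrary way I chose to manufacture the hypotheses; it does not cover the general case):

    import numpy as np

    def approx(C, Ct, eps, tol=1e-9):
        # C ~_eps Ct: (1-eps) Ct <= C <= (1+eps) Ct in the Loewner (PSD) order,
        # tested via the smallest eigenvalues of the two symmetric differences.
        return (np.linalg.eigvalsh(C - (1 - eps) * Ct).min() >= -tol
                and np.linalg.eigvalsh((1 + eps) * Ct - C).min() >= -tol)

    rng = np.random.default_rng(0)
    n, d, eps, fails = 30, 4, 0.05, 0
    for _ in range(200):
        A, B = rng.standard_normal((n, d)), rng.standard_normal((n, d))
        At = A @ (np.eye(d) + 0.01 * rng.standard_normal((d, d)))
        Bt = B @ (np.eye(d) + 0.01 * rng.standard_normal((d, d)))
        if approx(A.T @ A, At.T @ At, eps) and approx(B.T @ B, Bt.T @ Bt, eps):
            if not approx((A + B).T @ (A + B), (At + Bt).T @ (At + Bt), 10 * eps):
                fails += 1
    print(fails)  # number of sampled instances violating the conjectured bound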

linear algebra – upper bound on the condition number of the product of a random sparse matrix and a semi-orthogonal matrix

Let $G \in \mathbb{R}^{n \times m}$ ($m > n$, $m = O(n)$) be a matrix whose entries are i.i.d., distributed as $\mathcal{N}(0, 1) \cdot \text{Ber}(p)$. Let $V \in \mathbb{R}^{m \times n}$ be a fixed semi-orthogonal matrix, i.e. the columns of $V$ are orthonormal vectors. Define $A = GV$. For what $p$ can we give a polynomial bound on the condition number of $A$, i.e. $\kappa(A) \leq \text{poly}(n)$?

Interesting cases / related problems:

  1. Let $V$ be defined by $V_{i,j} = 1$ if $i = j$ and $V_{i,j} = 0$ otherwise. Write $G = (g_1, g_2, \ldots, g_m)$; in this case $A = GV = (g_1, g_2, \ldots, g_n)$, so $A$ has the same distribution as $G$ except that $m = n$. This has been investigated by Basak and Rudelson, who proved that $\kappa(A) \leq \text{poly}(n)$ for $p = \Omega(\log n)/n$.

  2. For $p = 1$, $G$ is just a random Gaussian matrix, and $A = GV$ can itself be regarded as a random Gaussian matrix, since the Gaussian distribution is isotropic. This is only a sub-case of 1.
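As a crude empirical probe of the question, one might run something like the following (the sizes and the value of $p$ are arbitrary choices of mine, not part of the problem):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, p = 100, 150, 0.2          # example sizes with m = O(n)

    # G: entries i.i.d. N(0,1) * Ber(p), i.e. a Bernoulli-masked Gaussian
    G = rng.standard_normal((n, m)) * (rng.random((n, m)) < p)

    # V: a fixed semi-orthogonal m x n matrix (orthonormal columns) from a QR
    V, _ = np.linalg.qr(rng.standard_normal((m, n)))

    A = G @ V
    print(np.linalg.cond(A))         # kappa(A); vary p to see the trend

Plotting $\kappa(A)$ over a range of $p$ for several $n$ at least suggests where a polynomial bound might start to fail.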

linear algebra – left/right inverse matrix question

For what values of $a, b, c$ does a left and/or right inverse of $A = \begin{bmatrix}
1 & a \\
2 & b \\
3 & c
\end{bmatrix}$
exist?

We know that a left inverse is a matrix $X$ such that $XA = I_2$, where $I_2$ is the $2 \times 2$ identity matrix, so $X$ is a $2 \times 3$ matrix. What do we do next? Thanks a lot.
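A possible hint, phrased via rank (my suggestion, not necessarily the intended route): since $A$ is $3 \times 2$, we have $\operatorname{rank} A \leq 2 < 3$, so $AY = I_3$ is impossible and a right inverse never exists. A left inverse exists exactly when $\operatorname{rank} A = 2$, i.e. when the two columns of $A$ are linearly independent:
$$ \exists X \text{ with } XA = I_2 \iff (a, b, c) \neq t\,(1, 2, 3) \text{ for every } t \in \mathbb{R}. $$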

ac.commutative algebra – special cases of the embedding problem

Embedding problem. Let $I$ be an ideal of the polynomial algebra $A = K^{[n]}$ such that $A/I$ is also a polynomial algebra, with a smaller number $k$ of variables. Is it true that $I$ is generated by $n - k$ variables of $A$?

Definition. Call an ideal of a polynomial algebra coordinate-like if it is generated by some coordinates of the polynomial algebra.

I am interested in the following special cases of the embedding problem.

Problem 1. Consider a polynomial algebra $A = K^{[n]}$ and $k$ polynomials $f_1, \dots, f_k$. For a polynomial algebra $B = K[y_1, \dots, y_{n+k}]$, consider the morphism that sends $y_i$ to the $n$ coordinates of $A$ for $i \leq n$ and $y_j$ to $f_{j-n}$ for $j > n$. Obviously the kernel of this morphism is an ideal $I$ with $B/I \cong A$. Is $I$ coordinate-like?
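For instance, in the smallest case $n = k = 1$, the answer to Problem 1 is yes (a sanity check only, not evidence for the general case): the morphism $K[y_1, y_2] \to K[x]$ with $y_1 \mapsto x$, $y_2 \mapsto f(x)$ has kernel $$ I = (y_2 - f(y_1)), $$ and $(y_1, \, y_2 - f(y_1))$ is a coordinate system of $K[y_1, y_2]$ (the image of the standard coordinates under a triangular automorphism), so here $I$ is coordinate-like.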

Problem 2. If $I \subset J$ are two coordinate-like ideals of a polynomial algebra $A$, is it true that $J/I$ is a coordinate-like ideal of $A/I$?

It is clear that affirmative answers to these problems would follow from the embedding problem, but are they true?

linear algebra – the weight of the conjugate partition is greater than the weight of the nullity partition

This question comes from Exercise 4.1 of Kamnitzer's lectures on geometric constructions (arXiv link), and follows up on a question I asked here that deals with part 1 of the exercise. This question is about part 2.

In this exercise we are given that $X: \mathbb{C}^n \to \mathbb{C}^n$ is a nilpotent matrix with $X^n = 0$. To it we attach the partition $\mu = (\mu_1, \dots, \mu_n)$ with $$\mu_i = \dim \ker(X^i) - \dim \ker(X^{i-1}).$$

To $X$ we can also attach the partition $\nu = (\nu_1, \dots, \nu_m)$, where each $\nu_i$ is the size of the $i$-th Jordan block of $X$, ordered so that the 1st Jordan block has the largest size, and so on. The Young diagram of $\nu$ has a conjugate partition $\lambda$, where each $\lambda_i$ is the number of $j$ such that $\nu_j \geq i$ (i.e. it is the number of Jordan blocks of size at least $i$).

The first part shows that for every $k$, $$\mu_1 + \dots + \mu_k \leq \lambda_1 + \dots + \lambda_k.$$ Now I have to show that, as $GL_n$ weights, $\lambda \geq \mu$.

We have $$\lambda - \mu = (\lambda_1 - \mu_1, \dots, \lambda_n - \mu_n),$$ which I want to express as a sum $$k_1 a_1 + \dots + k_{n-1} a_{n-1},$$ where the $k_i$ are non-negative integers and $a_1 = (1, -1, 0, \dots, 0), \dots, a_{n-1} = (0, \dots, 0, 1, -1)$. In other words, this sets up $$(\lambda_1 - \mu_1, \dots, \lambda_n - \mu_n) = (k_1, k_2 - k_1, \dots, k_{n-1} - k_{n-2}, -k_{n-1}).$$

Starting from the left, we get inductively $$k_i = (\lambda_1 + \dots + \lambda_i) - (\mu_1 + \dots + \mu_i),$$ but I can't show that $$-k_{n-1} = \lambda_n - \mu_n. \tag{*}$$ I'm pretty sure my calculations are correct, and the only place we use the inequality from part 1 is to show that the constants $k_i$ are non-negative, which works out fine; the only part I'm stuck on is showing (*).
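If I am reading the definitions above correctly, the missing step is that both partitions have total size $n$: telescoping gives $$\mu_1 + \dots + \mu_n = \dim \ker(X^n) = n,$$ while $\lambda_1 + \dots + \lambda_n$ counts all boxes of the Young diagram of $\nu$, i.e. $\nu_1 + \dots + \nu_m = n$, since the Jordan block sizes sum to $n$. Hence $$k_n := (\lambda_1 + \dots + \lambda_n) - (\mu_1 + \dots + \mu_n) = 0,$$ and therefore $\lambda_n - \mu_n = k_n - k_{n-1} = -k_{n-1}$, which is exactly (*).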

linear algebra – canonical form and basis of an orthogonal operator

Find the canonical form and a canonical basis of the orthogonal operator $f$ which has the following matrix in an orthonormal basis: $$A_f = \frac{1}{3} \begin{bmatrix}
2 & -1 & 2 \\
2 & 2 & -1 \\
-1 & 2 & 2
\end{bmatrix}.$$

I will show my approach; could you please help me continue from there?

Approach: We know that every orthogonal operator has a canonical basis, in which the matrix of the operator $f$ is $$\begin{bmatrix}
\pm 1 & 0 & 0 \\
0 & \cos\varphi & -\sin\varphi \\
0 & \sin\varphi & \cos\varphi
\end{bmatrix}.$$
Since the determinant and the trace of the matrix of a linear operator are the same in every basis, we note the following: since $\det A_f = 1$, the first entry of the first row in the canonical form must be $1$. Since $\text{tr}\, A_f = 2$, we get $2\cos\varphi + 1 = 2 \Leftrightarrow \cos\varphi = \frac{1}{2}$. Hence $\sin\varphi = \pm\dfrac{\sqrt{3}}{2}$.

It also follows that $1$ is an eigenvalue of the operator $f$, with corresponding eigenvector $e_1 = \frac{1}{\sqrt{3}}(1,1,1)$. So $e_1$ can be taken as the first vector of the canonical basis, and we know that the canonical form is $$\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos\varphi & -\sin\varphi \\
0 & \sin\varphi & \cos\varphi
\end{bmatrix}.$$

I cannot resolve the following questions myself:

1) How do you find the remaining two canonical basis vectors?

2) And which value of $\sin\varphi = \pm\dfrac{\sqrt{3}}{2}$ do I have to take?

I would be very grateful for a detailed answer! I've been thinking about this question for the past 2 days, but haven't been able to resolve it.
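For what it's worth, here is a small numeric sketch of one possible route (the particular choice of $e_2$ is mine; any unit vector orthogonal to $e_1$ works):

    import numpy as np

    A = np.array([[2, -1, 2],
                  [2,  2, -1],
                  [-1, 2,  2]]) / 3.0

    # e1: unit eigenvector for the eigenvalue 1 (the rotation axis)
    w, V = np.linalg.eig(A)
    e1 = np.real(V[:, np.argmin(np.abs(w - 1))])
    e1 /= np.linalg.norm(e1)

    # Complete e1 to an orthonormal basis; e3 = e1 x e2 fixes the orientation
    e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # a unit vector orthogonal to e1
    e3 = np.cross(e1, e2)
    Q = np.column_stack([e1, e2, e3])

    # In this basis, A takes the canonical block form with cos(phi) = 1/2; the
    # sign of sin(phi) flips if e2 and e3 are swapped, so it only reflects the
    # orientation of the chosen basis.
    print(np.round(Q.T @ A @ Q, 6))

This also probes question 2 numerically: swapping $e_2$ and $e_3$ changes the sign of $\sin\varphi$, so both signs occur, depending on the orientation of the chosen basis.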

representation theory – algebras built from a Frobenius algebra

Let $A$ be a commutative Frobenius algebra over a field $K$ (we may assume that $A$ is local).

Let $B = \{v_i\}$ be a vector space basis of $A$ containing the unit of $A$.
Let $M_i := v_i A$, let $M := \bigoplus_i M_i$, and let $C := \underline{\operatorname{End}_A}(M)$ be the stable endomorphism ring of $M$.

The question: is $C$ independent of the chosen basis $B$, up to isomorphism?
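A minimal test case one might try first (assuming the stable endomorphism ring means $\operatorname{End}_A(M)$ modulo maps factoring through projectives; this example is my suggestion, not part of the question): take $A = K[x]/(x^2)$, a local commutative Frobenius algebra. For the basis $B = \{1, x\}$ we get $$M = 1 \cdot A \oplus xA \cong A \oplus K,$$ while for the basis $B' = \{1, 1 + x\}$ the element $1 + x$ is a unit, so $$M' = A \oplus (1 + x)A \cong A \oplus A$$ is projective. Comparing $\underline{\operatorname{End}_A}(M)$ with $\underline{\operatorname{End}_A}(M')$ for this pair of bases already tests basis-independence.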

linear algebra – multivariate Gaussian

I have a few questions about the formulation of the multivariate Gaussian. I've watched a lot of videos and read Wikipedia, but I don't quite understand why things are the way they are.

The general form of the multivariate Gaussian density in $n$ dimensions is $$f(x) = \frac{1}{\sqrt{(2\pi)^n \, |\Sigma|}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).$$

My first question is about $\Sigma$: is it the correlation matrix or the covariance matrix? I've seen different sources use both terms, and I don't understand why, because they are different things.

My second question concerns the form $(x - \mu)^T \Sigma^{-1} (x - \mu)$: I don't understand this pattern of wrapping a matrix between two copies of $x$. Is it because we want something like $x^2$? And if so, why can't we write $\Sigma^{-1} x^T x$ instead? Finally, $|\Sigma|$ is the determinant of $\Sigma$, but in the univariate case that slot is taken by the standard deviation; is the determinant of $\Sigma$ the analogue of the standard deviation?
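As a sanity check on these points, here is a small numeric sketch ($\mu$, $\Sigma$, and $x$ below are arbitrary values I made up) comparing the formula against a library implementation:

    import numpy as np
    from scipy.stats import multivariate_normal

    mu = np.zeros(3)
    Sigma = np.array([[2.0, 0.5, 0.0],       # a covariance (not correlation)
                      [0.5, 1.0, 0.3],       # matrix: symmetric positive
                      [0.0, 0.3, 1.5]])      # definite, variances on the diagonal
    x = np.array([0.3, -1.2, 0.7])

    # (x-mu)^T Sigma^{-1} (x-mu) is a scalar: the matrix is "wrapped" between
    # two vectors exactly so that the exponent is a single number (a quadratic
    # form). Sigma^{-1} * (x^T x) would be a matrix times a scalar, i.e. a
    # matrix, which could not sit inside exp().
    d = x - mu
    quad = d @ np.linalg.inv(Sigma) @ d
    pdf = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(Sigma))

    print(np.isclose(pdf, multivariate_normal(mu, Sigma).pdf(x)))   # True
    # In 1D, |Sigma| = sigma^2, so sqrt(2*pi*|Sigma|) = sqrt(2*pi) * sigma: the
    # determinant generalizes the variance, not the standard deviation itself.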

Any help with this intuition would be much appreciated, thanks.