linear algebra – Are there any results in generalizing matrix theory to multidimensional arrays?

In matrix theory (2-dimensional arrays), we can define addition, multiplication, rank, determinant, etc. I'm working on generalizing as many of these notions as possible to multidimensional arrays. Are there any results in this direction? I'd really appreciate it if you could provide some references.
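As a small illustration of which operations carry over directly (a sketch using NumPy; the arrays below are made up), entrywise addition generalizes verbatim, and one standard generalization of matrix multiplication is index contraction:

```python
import numpy as np

# Entrywise addition extends verbatim from matrices to arrays of any order.
T = np.arange(8.0).reshape(2, 2, 2)
S = np.ones((2, 2, 2))

# One standard generalization of matrix multiplication is tensor
# contraction: sum over one shared index.  Here the last axis of T is
# contracted against the first axis of the matrix M.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.tensordot(T, M, axes=([2], [0]))
print(C.shape)  # (2, 2, 2): each fiber T[i, j, :] is multiplied by M
```

Rank and determinant are precisely the notions that do *not* generalize uniquely; several inequivalent tensor ranks exist, which is part of what the question is asking about.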

linear algebra – Diagonalization of $n\times n$ matrices

What conditions must an $n\times n$ matrix satisfy to be guaranteed diagonalizable?
I do know that having $n$ distinct eigenvalues is sufficient, but say the only information we have is that $0$ is one of its eigenvalues — can it still be diagonalized? I personally don't think so, but I would just like to be sure. One of the other important conditions is that, say for a matrix $B$,

$B = \lambda I$ where $\lambda$ is not $0$, right?
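As a quick sanity check (a sketch using Python's SymPy), an eigenvalue of $0$ does not by itself decide diagonalizability — it depends on whether the geometric multiplicity matches the algebraic one:

```python
import sympy as sp

# Eigenvalue 0, yet diagonalizable: the matrix is already diagonal.
M = sp.Matrix([[0, 0],
               [0, 1]])
print(M.is_diagonalizable())  # True

# Eigenvalue 0 and NOT diagonalizable: 0 has algebraic multiplicity 2
# but geometric multiplicity 1 (a nontrivial Jordan block).
N = sp.Matrix([[0, 1],
               [0, 0]])
print(N.is_diagonalizable())  # False
```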

commutative algebra – The contraction of $q\cdot B_p$ is $p\cdot A_p$

I am trying to prove the following:

Let $A\subset B$ with $B$ integral over $A$. Let $q\subset q'\subset B$ be prime ideals. If $q^c=q'^c\subset A$ then $q=q'$.

Here is my attempt with the points I am confused about inside brackets:
Let $p=q^c$ and take the localisation at $p$. Then $p\cdot A_p$ is a maximal ideal in $A_p$.
Then it should be that $p\cdot A_p$ is the contraction of $q\cdot B_p$ (I'm not sure how to see this).
Since $p\cdot A_p$ is maximal, it follows that $q\cdot B_p$ is maximal.
This gives that $q'\cdot B_p$ is either $q\cdot B_p$ or all of $B_p$. (I can't see why it cannot be all of $B_p$.)
Now if we know that $q\cdot B_p=q'\cdot B_p$, how can we lift this to get that $q=q'$?

numerical linear algebra – Numerically solving the optimization problem $\min \| x \|_{\ell^1}$ s.t. $\| Ax-b \|_{\ell^2} \leq \delta$

Consider a linear system $Ax=b$ with matrix $A$ and right-hand side $b$, and suppose one is interested in a sparse solution of this system. In the situation where the right-hand side is corrupted by noise, one can solve the minimization problem
$$
\min_x \| Ax-b \|_{\ell^2} \quad \text{s.t.} \quad \| x \|_{\ell^1} \leq \delta.
$$

This corresponds to the LASSO algorithm with regularization parameter $\delta$. On the other hand, one can try to solve the optimization problem
$$
\min_x \| x \|_{\ell^1} \quad \text{s.t.} \quad \| Ax-b \|_{\ell^2} \leq \delta. \tag{1}
$$

This problem was, for instance, considered in Candès' famous paper "Towards a mathematical theory of super-resolution". I'm interested in solving problem (1) numerically with Python, but I have limited Python skills. I was wondering if there is any implementation which solves problem (1). For the LASSO there are many packages, but I couldn't find one for problem (1) so far.

Thanks a lot for your help!

Online Mathematica, pros and cons, linear algebra problem

I apologize in advance if this question is irrelevant to this website.

I would like to use Mathematica to solve a system of linear equations with many unknowns (729 of them); the unknowns are tensor components of curvature tensors arising from a differential geometry problem.

I would like to buy Mathematica for this purpose, and I have to decide between buying the online version or installing the desktop version on a PC. I'm thinking of buying the online version. I have the following questions:

  1. What are the advantages and disadvantages of the desktop version over the online version? For example, are there mathematical or programming functionalities which are available only in the desktop version and not in the online version?

  2. I assume that if I buy the online version, I will get a username and a password to access an online version of Mathematica from any computer (just like how one can type LaTeX on overleaf.com from an online account using any PC). Is my assumption correct?

  3. Does Mathematica provide a user-friendly way of solving simultaneous linear equations with many unknowns? Let me elaborate with an example: say I want to solve the simultaneous equations $x=2y+a,\ y-3x=7x+2$ for $x,y$. I would like software where I can just type $x=2y+a,\ y-3x=7x+2$, ask it to solve for $x,y$, and get the solution symbolically in terms of the parameter $a$, instead of having to rearrange terms so that the equations become $x-2y=a,\ y-10x=2$, write them in matrix form, and then ask for a matrix inversion. The difference I am talking about might seem silly in this example, but it will not be silly in my original problem, where I have 700 unknowns. If this feature exists in Mathematica, it will save me a lot of time.
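For what it's worth, Mathematica's `Solve` accepts equations in exactly this unrearranged form, e.g. `Solve[{x == 2 y + a, y - 3 x == 7 x + 2}, {x, y}]`. As a point of comparison, the same workflow in Python's free SymPy library (a sketch, not a recommendation over Mathematica) looks like:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

# Type the equations exactly as written; no manual rearranging
# into matrix form is needed.
equations = [sp.Eq(x, 2*y + a),
             sp.Eq(y - 3*x, 7*x + 2)]
sol = sp.solve(equations, [x, y])
print(sol)  # symbolic solution in terms of the parameter a
```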

Thank you,

banach spaces – Is it possible to turn the Dirichlet ring into a Banach algebra?

The set of all arithmetic functions $f:\mathbb{Z}^{+}\to\mathbb{C}$, under pointwise addition and Dirichlet convolution, is a commutative ring with identity, but not every element is invertible: a function $f$ is invertible under convolution precisely when $f(1)\neq 0$, so the functions with $f(1)\neq 0$ are exactly the units of the ring.
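The ring structure just described can be checked numerically on truncations; here is a sketch (the helper names are my own) of Dirichlet convolution and of the recurrence for the inverse, which exists exactly when $f(1)\neq 0$:

```python
import numpy as np

def dirichlet_conv(f, g):
    """Dirichlet convolution (f*g)(n) = sum over d|n of f(d) g(n/d),
    on arrays indexed 1..N (index 0 is unused)."""
    N = len(f) - 1
    h = np.zeros(N + 1)
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            h[d * m] += f[d] * g[m]
    return h

def dirichlet_inverse(f):
    """Convolution inverse of f; requires f[1] != 0.
    Uses the recurrence f(1) g(n) = -sum over d|n, d<n of f(n/d) g(d)."""
    N = len(f) - 1
    g = np.zeros(N + 1)
    g[1] = 1.0 / f[1]
    for n in range(2, N + 1):
        s = sum(f[n // d] * g[d] for d in range(1, n) if n % d == 0)
        g[n] = -s / f[1]
    return g

# The inverse of the constant function 1 is the Moebius function.
f = np.ones(13)              # f(n) = 1 for n = 1..12
mu = dirichlet_inverse(f)
print(mu[1:7])               # Moebius values mu(1)..mu(6)
```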

So my question is:
To what subset of the arithmetic functions and under what norms can one attach a Banach algebra structure to the Dirichlet ring? Is it even possible?

linear algebra – Using Steinitz Exchange Lemma to prove that a set of four vectors in $\mathbb{R}^2$ is linearly dependent

Cheers, so I am asked to prove that in the vector space $V = \mathbb{R}^2$, every set of 4 vectors is linearly dependent. I tried solving it using the Steinitz Exchange Lemma, so:

Let $V = \mathbb{R}^2$ be the vector space of interest, and let $\{ v_1, v_2 \}$ be a basis for our vector space (e.g. $\{ (1,0), (0,1) \}$). Now let $A \subset V$, $A = \{ a_1, a_2, a_3, a_4 \}$. As $A \subset V$ we can say that: $$
a_i=\sum_{j=1}^2 \gamma_{ij}v_j \qquad(i=1,\dots,4)
$$
I then got a bit stuck, and saw a solution that proceeded by saying that:

If $\alpha_1a_1+\dots+\alpha_4a_4=0$, then
$$
0=\sum_{i=1}^4\alpha_ia_i=
\sum_{i=1}^4\alpha_i\biggl(\,\sum_{j=1}^2\gamma_{ij}v_j\biggr)=
\sum_{j=1}^2\biggl(\,\sum_{i=1}^4\alpha_i\gamma_{ij}\biggr)v_j
$$

so
$$
\sum_{i=1}^4\alpha_i\gamma_{ij}=0 \qquad(j=1,2)
$$
and since $4 > 2$, this homogeneous system has infinitely many solutions, so $A$ is not linearly independent.

Although I understand the whole logic here, and why the result solves our question, why did we suppose that $\alpha_1a_1+\dots+\alpha_4a_4=0$, and especially why did we express the elements of $A$ in coordinate tuples? Could it be done any other way? Thanks for the help!
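As a numerical companion to the argument above (a sketch with made-up coordinates $\gamma_{ij}$), the condition $\sum_{i}\alpha_i\gamma_{ij}=0$ is a homogeneous system of 2 equations in 4 unknowns, so a nontrivial solution always exists and can be extracted from the null space:

```python
import numpy as np

# Hypothetical coordinates gamma_{ij} of four vectors a_i in the basis
# {v_1, v_2}; row i holds (gamma_{i1}, gamma_{i2}).
G = np.array([[1.0,  2.0],
              [3.0, -1.0],
              [0.5,  0.5],
              [2.0,  2.0]])

# The dependence condition sum_i alpha_i gamma_{ij} = 0 (j = 1, 2) is the
# homogeneous system G^T alpha = 0: 2 equations in 4 unknowns, so the
# null space has dimension at least 2.  A nontrivial solution is the
# last right-singular vector of the 2x4 matrix G^T.
_, _, Vt = np.linalg.svd(G.T)
alpha = Vt[-1]  # unit vector, hence nonzero
print(np.allclose(G.T @ alpha, 0))  # True
```

This is exactly why the proof assumes $\alpha_1a_1+\dots+\alpha_4a_4=0$: passing to coordinates turns that single vector equation into a system with more unknowns than equations, which forces a nonzero $\alpha$.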