linear transformations – Range of a function defined by a matrix

I am studying vector calculus, and I encountered a question whose solution seems to collide with what I know from linear algebra.
The question is this.

Consider the function $f:\mathbb{R}^2 \to \mathbb{R}^3$ given by $f(x)=Ax$, where
$$A=\begin{bmatrix} 2 & -1 \\ 5 & 0 \\ -6 & 3 \end{bmatrix}$$
and the vector $x$ in $\mathbb{R}^2$ is written as the $2\times 1$ column matrix
$$x=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$

Describe the range of $f$.

From what I learned in my linear algebra course, $f$ is a linear transformation with standard matrix $A$, and therefore the range of $f$ is the span of the column vectors of $A$. So I thought that, since the column vectors are linearly independent, the range is a plane in $\mathbb{R}^3$ spanned by the two vectors $(2, 5, -6)$ and $(-1, 0, 3)$.
However, the solution manual had a different answer. It said that $$f(x)=(2x_1-x_2,\; 5x_1,\; -6x_1+3x_2)$$
and, since $$f_3(x_1, x_2)=-6x_1+3x_2=-3(2x_1-x_2)=-3f_1(x_1, x_2),$$
the range of $f$ is all $y=(y_1, y_2, y_3)$ in $\mathbb{R}^3$ such that $y_3=-3y_1$.

I am pretty confused right now. Can anybody explain this result to me?
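To make sure I wasn't miscomputing, I checked numerically (plain Python, just as a sanity check) that every vector in the span of the two columns satisfies $y_3=-3y_1$:

```python
# Columns of A from the question.
c1 = (2, 5, -6)
c2 = (-1, 0, 3)

def f(x1, x2):
    """f(x) = A x, written as x1 * c1 + x2 * c2."""
    return tuple(x1 * a + x2 * b for a, b in zip(c1, c2))

# Every point in the range satisfies y3 == -3 * y1,
# consistent with the solution manual's description.
for x1, x2 in [(1, 0), (0, 1), (2, -3), (0.5, 7)]:
    y = f(x1, x2)
    assert y[2] == -3 * y[0]
```

So both descriptions appear to pick out the same plane, if I am reading things correctly.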

How to form a diagonal matrix from sub-matrices?

I have a 3×3 matrix (let’s say G) and a 3×3 matrix of zeros (let’s say zero). I want a block-diagonal matrix of the form Diag(G, G, G, zero), so that the resulting matrix is 12×12. I then have to add this diagonal matrix to another 12×12 matrix.

I have already tried the solutions of How to form a block-diagonal matrix from a list of matrices? and Create a matrix of matrices using Band and ArrayFlatten but they don’t give me a correct result when I check the dimensions of the diagonal matrix. Is there a method that I can use to achieve this result?
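To show the layout I'm after, here is the same Diag(G, G, G, zero) construction sketched in plain Python (illustrative only, with a stand-in G), which does produce the 12×12 shape I expect:

```python
def block_diag(blocks):
    """Place square blocks (lists of lists) along the diagonal, zeros elsewhere."""
    n = sum(len(b) for b in blocks)
    out = [[0] * n for _ in range(n)]
    r = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                out[r + i][r + j] = v
        r += len(b)
    return out

G = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # stand-in for my 3x3 matrix G
zero = [[0] * 3 for _ in range(3)]       # 3x3 zero block

D = block_diag([G, G, G, zero])
assert len(D) == 12 and all(len(row) == 12 for row in D)  # 12x12, as required
```

In Mathematica itself I would expect something like `ArrayFlatten[{{G, 0, 0, 0}, {0, G, 0, 0}, {0, 0, G, 0}, {0, 0, 0, 0}}]` (with 0 standing for zero blocks) to give the same 12×12 result, though that is exactly where my dimensions come out wrong.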

Is this benchmark sufficient to consider my algorithm as an efficient matrix multiplication algorithm?

I built a matrix multiplication algorithm and would now like some thoughts on the following benchmark.

C++ std::chrono::high_resolution_clock, time in microseconds:

  1. Dim 256 -> naive algo: 296807, my algo: 187479
  2. Dim 512 -> naive algo: 2249495, my algo: 1359046
  3. Dim 1024 -> naive algo: 27930970, my algo: 12309645

FYI, I have no knowledge of the Strassen matrix multiplication algorithm or of how to use a GitHub project. Therefore, I borrowed some benchmark numbers from a GitHub account for the sake of comparison with the Strassen matrix multiplication algorithm. Those are below.

C++ std::chrono::high_resolution_clock, time in microseconds:

  1. Dim 256 -> naive algo: 260281, Strassen algo: 216970
  2. Dim 512 -> naive algo: 2122299, Strassen algo: 1580466
  3. Dim 1024 -> naive algo: 2… (illegible), Strassen algo: 14696774

I don’t know the specs of the GitHub user’s computer. I assume it’s better than mine, judging from the two naive-algorithm execution times.

Specs of my computer

Processor: Core i7, 6th gen (2.60 GHz)

RAM: 8 GB

What do you think about my algorithm after looking at the above benchmark, and what can I expect when comparing my algorithm with the Strassen matrix multiplication algorithm?
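To make the numbers easier to compare across machines, here are the speedups (naive time divided by the other algorithm's time) computed from the figures above, sketched in Python:

```python
# (dim, naive_us, mine_us) from my benchmark above
mine = [(256, 296807, 187479), (512, 2249495, 1359046), (1024, 27930970, 12309645)]
# (dim, naive_us, strassen_us) from the borrowed benchmark;
# the 1024 naive figure is garbled in my notes, so it is omitted here
strassen = [(256, 260281, 216970), (512, 2122299, 1580466)]

def speedups(rows):
    """Map dimension -> (naive time / other algorithm's time)."""
    return {dim: naive / other for dim, naive, other in rows}

my_speedup = speedups(mine)            # roughly 1.58x, 1.66x, 2.27x
strassen_speedup = speedups(strassen)  # roughly 1.20x, 1.34x
```

On these numbers my algorithm's relative speedup over naive grows with the dimension, but since the two benchmarks ran on different machines, the comparison with the quoted Strassen implementation is only rough.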

mp.mathematical physics – Diagonalization of the generalized 1-particle density matrix

Let $\mathscr{H}$ be a complex separable Hilbert space and $\mathscr{F}$ be the corresponding fermionic Fock space generated by $\mathscr{H}$. Let $\rho: \mathscr{L}(\mathscr{F}) \to \mathbb{C}$ be a bounded linear functional on all bounded operators of $\mathscr{F}$ with $\rho(I)=1$ and $\rho(A^*)=\rho(A)^*$, and define the 1-particle density matrix (1-pdm) as the unique bounded self-adjoint $\Gamma: \mathscr{H}\oplus \mathscr{H}^* \to \mathscr{H}\oplus \mathscr{H}^*$ such that

$$\langle x|\Gamma y\rangle = \rho\bigl((c^*(y_1)+c(y_2))(c(x_1)+c^*(x_2))\bigr)$$

where $x=x_1 \oplus \bar{x}_2$ and $y=y_1 \oplus \bar{y}_2$ (I use the notation $\bar{x}(\cdot) = \langle x|\cdot\rangle$) and $c, c^*$ are the annihilation/creation operators.

In the references V. Bach (Generalized Hartree-Fock theory and the Hubbard model, Theorem 2.3) and J.P. Solovej (Many Body Quantum Mechanics, 9.6 Lemma and 9.9 Theorem), the authors claim that (under suitable conditions) $\Gamma$ is diagonalizable by a Bogoliubov transform $W:\mathscr{H}\oplus \mathscr{H}^* \to \mathscr{H}\oplus \mathscr{H}^*$, so that $W^* \Gamma W = \operatorname{diag}{(\lambda_1,\dots,1-\lambda_1,\dots)}$. The main idea of the proof is that $\Gamma$ is diagonalizable by an orthonormal basis, and that if $x\oplus \bar{y}$ is an eigenvector with eigenvalue $\lambda$, then $y\oplus \bar{x}$ is an eigenvector with eigenvalue $1-\lambda$. The proof is fine when $\lambda\neq 1/2$, since the two eigenvectors are then orthogonal to each other. However, if $\lambda=1/2$, things become a little more difficult. J.P. Solovej solves this in the case where the eigenspace of $\lambda =1/2$ is even-dimensional, but as far as I can tell, I don't understand why it would be.

Question. Is there something I’m forgetting? If not, is there a way or are there references that complete the proof?

algorithm – fast method for calculating the signature of a matrix

For calculating the topological properties of a Hamiltonian, we sometimes need the signature of that matrix, i.e. the difference between the numbers of positive and negative eigenvalues. One simple way is to calculate the eigenvalues first and then take that difference. For example,

H = RandomReal[{-1, 1}, {100, 100}] // (# + ConjugateTranspose[#]) &;
Total[If[# > 0, 1, -1] & /@ Eigenvalues[H]]

I am interested to know: (i) is there any built-in Mathematica function that calculates the signature, and (ii) is there any fast algorithm within Mathematica for doing this quickly?
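One direction I considered: by Sylvester's law of inertia, the signs of the pivots of a symmetric (LDLᵀ-style) elimination match the signs of the eigenvalues, so the signature can be read off without a full eigendecomposition. A pure-Python sketch of the idea (not Mathematica; it assumes no pivot vanishes, since no pivoting is done):

```python
def signature(A):
    """Signature of a symmetric matrix via symmetric Gaussian elimination.

    By Sylvester's law of inertia, the signs of the pivots equal the signs
    of the eigenvalues. Assumes the matrix is nonsingular and that no
    pivot happens to be zero along the way.
    """
    n = len(A)
    M = [list(map(float, row)) for row in A]
    sig = 0
    for k in range(n):
        p = M[k][k]
        sig += 1 if p > 0 else -1
        for i in range(k + 1, n):
            f = M[i][k] / p
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return sig

assert signature([[2, 1], [1, 3]]) == 2   # positive definite: signature 2
assert signature([[1, 2], [2, 1]]) == 0   # eigenvalues 3 and -1: signature 0
```

Whether something like this (e.g. via LUDecomposition, or counting signs of leading principal minors) would actually beat Eigenvalues inside Mathematica is exactly what I'd like to know.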

table – Building an array of T NxM matrices where one item in each matrix changes

As I do not use Mathematica often enough, I am a bit rusty. I need to work with an array of T matrices, and each of those N×M matrices has the following pattern:

\begin{equation}
\begin{pmatrix}
\alpha_1(t)+\beta(t)+\bar{\beta}(t) & \beta(t)+\bar{\beta}(t) & \dots & \beta(t)+\bar{\beta}(t) \\
\beta(t)+\bar{\beta}(t) & \alpha_2(t)+\beta(t)+\bar{\beta}(t) & \dots & \beta(t)+\bar{\beta}(t) \\
\vdots & \vdots & \ddots & \vdots \\
\beta(t)+\bar{\beta}(t) & \beta(t)+\bar{\beta}(t) & \dots & \alpha_N(t)+\beta(t)+\bar{\beta}(t)
\end{pmatrix}
\end{equation}

where $\alpha_i(t)$, $\beta(t)$ and $\bar{\beta}(t)$ are vectors of size $T$. As can be seen, in each matrix only the diagonal changes as the $\alpha_i(t)$ are added, with $i \in \{1,\dots,N\}$ and $t \in \{1,\dots,T\}$.
I am sure that the solution must be reasonably easy, but not for me.
Thank you.
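The structure itself is simple: every entry is $\beta(t)+\bar{\beta}(t)$, and the diagonal additionally gets $\alpha_i(t)$. Here is a plain-Python sketch of exactly what I mean (a Table of ConstantArray plus DiagonalMatrix would presumably be the Mathematica analogue):

```python
def build_matrices(alpha, beta, beta_bar):
    """Return T matrices of size N x N following the pattern above.

    alpha is a T x N table (alpha[t][i] = alpha_{i+1}(t));
    beta and beta_bar are length-T vectors.
    """
    T, N = len(beta), len(alpha[0])
    mats = []
    for t in range(T):
        c = beta[t] + beta_bar[t]   # common off-diagonal (and base) entry
        m = [[c + (alpha[t][i] if i == j else 0) for j in range(N)]
             for i in range(N)]     # alpha_i(t) added on the diagonal only
        mats.append(m)
    return mats

# Tiny example: T = 1, N = 2
mats = build_matrices(alpha=[[1, 2]], beta=[10], beta_bar=[0.5])
assert mats[0] == [[11.5, 10.5], [10.5, 12.5]]
```

In Mathematica I would guess at something like Table[ConstantArray[beta[[t]] + betabar[[t]], {n, n}] + DiagonalMatrix[alpha[[t]]], {t, T}], though I haven't run that form.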

inverse – constructing and inverting an upper-triangular matrix

I am trying to create an upper-triangular matrix from the Poisson distribution, find its inverse, and multiply the inverse by a vector, i.e.

Daily = {a 158 element vector};
ME = UpperTriangularize[
  Table[(mu^(lambda + 1 - j) Exp[-mu])/(lambda + 1 - j)!, {i, lambda}, {j, lambda}]];
Actual = Inverse[ME].Daily;
cumulative = FoldList[Plus, 0, Actual]

The notebook would not execute; it displays a red line on the right edge of the screen and, below the code, the statement

In[2]:= MatrixForm[ME[tt_, mm_] := UpperTriangularize[Table[(mm^(tt + 1 - j) Exp[-mm])/(tt + 1 - j)!, {i, lambda}, {j, lambda}]]]

Out[2]//MatrixForm=
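Independent of the notebook error, here is a plain-Python sketch of the computation I'm after (a small size n and mu are stand-ins for my 158-element case and actual mu), mainly to check that solving the triangular system gives the same thing as Inverse(ME).Daily:

```python
import math

def poisson_term(k, mu):
    """mu^k e^(-mu) / k!  (the Poisson mass used in the table)."""
    return mu ** k * math.exp(-mu) / math.factorial(k)

def build_me(n, mu):
    """Upper-triangular matrix; entry (i, j) uses the (j - i)-th Poisson term.
    This mirrors the shape of my Table, not necessarily its exact indexing."""
    return [[poisson_term(j - i, mu) if j >= i else 0.0 for j in range(n)]
            for i in range(n)]

def solve_upper(U, b):
    """Back substitution: solve U x = b, i.e. x = Inverse(U).b without inverting."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x

ME = build_me(4, mu=2.0)
daily = [1.0, 2.0, 3.0, 4.0]
actual = solve_upper(ME, daily)
# Verify the solution: ME . actual should reproduce daily
for i in range(4):
    assert abs(sum(ME[i][j] * actual[j] for j in range(4)) - daily[i]) < 1e-9
```

In Mathematica, solving the triangular system directly (LinearSolve[ME, Daily]) is presumably preferable to forming Inverse[ME] explicitly anyway.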

Matrix multiplication-esque concatenation in Google Docs?

I'm making a simple recipe maker in Google Docs. Using row vectors and SUMPRODUCT I can get it to calculate the protein and calories for a given recipe. The way it's set up, the recipe can be hard to read, so I would like to implement something that concatenates each ingredient string with its amount and then joins all of those to create a recipe string.

As you can see, I need it to concatenate the number in a given row with the ingredient and then concatenate all of them. For example, "Recipe in this row" should say "1 dl milk, 1 dl proteinpowder, 1 egg". I understand how to do this for a fixed number of ingredients, but I want to be able to add ingredients, so I would need some kind of running index. I'm new to spreadsheets and have no idea how to do it.
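To be explicit about the logic I want (pair each amount with its ingredient, skip empty rows, join with commas), here it is sketched in Python with made-up data matching the example:

```python
# Hypothetical data mirroring my sheet: amounts and ingredient labels.
amounts = [1, 1, 1, 0]   # 0 = ingredient not used in this recipe
names = ["dl milk", "dl proteinpowder", "egg", "dl oats"]

# Join "amount name" for every used ingredient, comma-separated.
recipe = ", ".join(f"{a} {name}" for a, name in zip(amounts, names) if a)
assert recipe == "1 dl milk, 1 dl proteinpowder, 1 egg"
```

In the spreadsheet itself I suspect something like TEXTJOIN combined with ARRAYFORMULA over the nonblank rows could do this without a running index, but I don't know the exact formula.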