analysis – Rank of the Jacobian matrix $D(f\times g)$.

Let $f:U\subset\Bbb R^n\to\Bbb R^N$ and $g:V\subset\Bbb R^m\to\Bbb R^M$ be smooth functions whose Jacobian matrices have rank $n$ and $m$ respectively, where $U,V$ are open sets. I want to show that the rank of the Jacobian matrix of $f\times g:U\times V\to\Bbb R^{N+M}$ is $n+m$. To do this, I write down the Jacobian matrix of $f\times g$:

$$D(f\times g)(x_1,x_2)=\begin{pmatrix}Df(x_1) & 0\\ 0 & Dg(x_2)\end{pmatrix}$$

Since $Df$ and $Dg$ have rank $n$ and $m$ respectively, the rank of $D(f\times g)$ is $n+m$. Is that correct?
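
For completeness, the step from this block-diagonal form to the rank count uses the standard fact that rank is additive across block-diagonal matrices:

$$
\operatorname{rank}\begin{pmatrix}A & 0\\ 0 & B\end{pmatrix}=\operatorname{rank}A+\operatorname{rank}B,
$$

which holds because the column spaces of the two blocks lie in complementary coordinate subspaces, so the union of bases of the two column spaces remains linearly independent.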

linear algebra – linearly independent vectors from a matrix product

I have a product of $n$ integer-valued matrices:

$V=M_1 M_2 \cdots M_n\,.$

The $M_i$ are not square matrices, but the rows of $M_i$ and the columns of $M_{i+1}$ have the same length, so the multiplications make sense. $V$ is a matrix with a single column, so it is a vector.

When I take this product, I have two choices for each of the $M_i$, say $M_i=A_i$ and $M_i=B_i$.

Thus the naive bound on the number of different $V$'s I can get is $2^n$. In practice, the number of linearly independent $V$'s I can get is much, much smaller.

My question is: how can I bound the number of linearly independent $V$'s?

For example, is it possible to get a bound that depends on the ranks of the $A_i$ and $B_i$?

I would be interested in references to any place in the literature where this sort of question has been studied.
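
One easy observation in this direction, offered as a sketch rather than a sharp answer: whichever choices are made, $V$ lies in the column space of the leftmost factor, which is contained in the sum of the column spaces of $A_1$ and $B_1$. Hence

$$
\dim\operatorname{span}\{V\}\;\le\;\operatorname{rank}A_1+\operatorname{rank}B_1,
$$

independently of $n$. Any sharper, rank-dependent bound would have to exploit the interior factors as well.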

matrices – Is there an alternative matrix multiplication?

Matrix multiplication can be thought of as building a matrix whose entries are sums of products of the factors' elements. That is,

$\mathbf{AB}=\mathbf{C}$

where

$c_{jk} = \sum_{n=1}^{N} a_{jn}b_{nk}$, with $N$ the shared inner dimension.

Is there a form of matrix multiplication where each entry of the result is a product rather than a sum? That is,

$c_{jk} = \prod_{n=1}^{N} a_{jn}b_{nk}$

If so, what is the terminology? Is it still called matrix multiplication? Is there a whole different term for it? And has there been much mathematics developed from this variant of multiplication?
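
One observation that partly settles this particular variant: the product over $n$ factors completely, so the resulting matrix always has rank at most one:

$$
c_{jk}=\prod_{n=1}^{N}a_{jn}b_{nk}=\Bigl(\prod_{n=1}^{N}a_{jn}\Bigr)\Bigl(\prod_{n=1}^{N}b_{nk}\Bigr)=u_j\,v_k,
\qquad\text{i.e.}\quad \mathbf{C}=\mathbf{u}\mathbf{v}^{T}.
$$

The better-studied alternatives keep the sum-of-products shape but change the underlying operations, e.g. min-plus ("tropical") matrix multiplication over a semiring; the entrywise product $c_{jk}=a_{jk}b_{jk}$ of two same-sized matrices is a different operation again, known as the Hadamard product.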

parametric functions – How to initialize a matrix with indexed parameters

I couldn’t find it in the documentation, so I’m posting it here.
I want to create two-by-two matrices as follows:
$$
W = \begin{bmatrix}w_{11} & w_{12} \\ w_{21} & w_{22}\end{bmatrix},\qquad
V = \begin{bmatrix}v_{11} & v_{21} \\ v_{12} & v_{22}\end{bmatrix}
$$

so that I can calculate $V^{-1}$ and see the result in terms of the parameters $v_{ij}$.

Thanks.
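
A minimal sketch of one way to do this in Mathematica, assuming Subscript-indexed symbols are acceptable as the parameters:

(* Build 2x2 symbolic matrices whose entries are indexed parameters. *)
W = Array[Subscript[w, #1, #2] &, {2, 2}];
(* V uses transposed indices, matching the layout in the question. *)
V = Array[Subscript[v, #2, #1] &, {2, 2}];
(* Symbolic inverse, expressed in the v parameters. *)
Inverse[V] // Simplify // MatrixForm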

matrices – Find a real orthogonal matrix $P$ such that its product with a unitary matrix $M$ is symmetric

Given $M \in \mathcal{M}_n(\mathbb{C})$ satisfying $\overline{M}^{T} M = I_n$, can we find $P \in \mathcal{O}_n(\mathbb{R})$ such that $PM$ is symmetric?

I would like to prove that there exist $P \in \mathcal{O}_n(\mathbb{R})$ and $S \in \mathcal{S}_n(\mathbb{R})$ such that $M = P \exp(i S)$.

I have already shown that if $M$ is symmetric (and unitary), we can find $S \in \mathcal{S}_n(\mathbb{R})$ such that $M = \exp(iS)$. I have not worked with unitary matrices before, so I was wondering whether $M$ can be decomposed easily as suggested; otherwise I will look for another method.

Any help will be welcome.
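
A connecting step, for what it's worth: the two formulations are equivalent. If $PM$ is symmetric, it is also unitary (being a product of unitary matrices), so the part already proved gives

$$
PM=\exp(iS)\quad\Longrightarrow\quad M=P^{T}\exp(iS),
$$

with $P^{T}$ real orthogonal, which is exactly the desired decomposition.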

linear algebra – How to operate on every element of a matrix to get a new matrix

Given an upper triangular matrix $\boldsymbol{B}$ with non-zero diagonal elements, how can we get a new square matrix $\boldsymbol{A}$?

For instance, if the order of the upper triangular matrix is 5, the matrix is given as $\boldsymbol{B}_{0,(5\times5)}=\left\{b_{i,j}\right\}$.

So far, I can only get the elements on the diagonals with the following commands:

B0 = {{1, 2, 1, 4, 5},
     {0, 3, 7, -1, 3},
     {0, 0, 1, 6, 2},
     {0, 0, 0, 1, -1},
     {0, 0, 0, 0, 4}};
diag1 = Diagonal[B0, 1];        (* first superdiagonal *)
diag00 = Drop[Diagonal[B0], 1]; (* main diagonal without its first entry *)
DiagonalMatrix[-diag1/diag00, 1] + 
DiagonalMatrix[Table[1, 5]] // MatrixForm

However, how can I write a function that takes an upper triangular matrix with non-zero diagonal elements as input and outputs the new upper triangular matrix?


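
The general rule is not spelled out in the post's text, so here is a minimal sketch that simply packages the commands above into a function of the matrix order; the name newMatrix and the assumed rule (unit diagonal plus the transformed first superdiagonal) are mine rather than from the post:

(* Hypothetical wrapper: unit diagonal plus -b[i, i+1]/b[i+1, i+1] on
   the first superdiagonal, generalizing the commands above to order n. *)
newMatrix[B_?SquareMatrixQ] :=
 Module[{n = Length[B], diag1, diag00},
  diag1 = Diagonal[B, 1];
  diag00 = Drop[Diagonal[B], 1];
  DiagonalMatrix[-diag1/diag00, 1] + IdentityMatrix[n]]

newMatrix[B0] // MatrixForm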

c++ – C++20 Ndim matrix, computing eigenvalues and eigenvectors

My C++20 N-dimensional matrix project now supports basic linear algebra operations: https://github.com/frozenca/Ndim-Matrix

Today I want to get some reviews on computing eigenvalues and eigenvectors.
In particular, I don't like having so much duplicated code; I think the other parts are okay (I hope).

There are two functions: one computes eigenvalues only, and the other computes both eigenvalues and the corresponding eigenvectors. The two functions share so much code that I would like to know how to factor it out nicely (a possible direction is sketched after the code below).

Full code: https://github.com/frozenca/Ndim-Matrix/blob/main/LinalgOps.h#L956

constexpr float tolerance_soft = 1e-6;
constexpr float tolerance_hard = 1e-10;
constexpr std::size_t max_iter = 100;
constexpr std::size_t local_iter = 15;

template <isScalar T>
decltype(auto) getSign (const T& t) {
    if constexpr (isComplex<T>) {
        if (std::abs(t) < tolerance_soft) {
            return static_cast<T>(1.0f);
        }
        return t / std::abs(t);
    } else {
        return std::signbit(t) ? -1.0f : +1.0f;
    }
}

template <typename Derived, isScalar T>
Vec<T> normalize(const MatrixBase<Derived, T, 1>& vec) {
    Vec<T> u = vec;
    auto u_norm = norm(u);
    if (u_norm < tolerance_hard) {
        return u;
    } else {
        u /= u_norm;
        return u;
    }
}

// Deflation test: if the subdiagonal entry M(idx + 1, idx) is negligible
// relative to its diagonal neighbors, zero it out and report true.
template <typename Derived, isScalar T>
bool isSubdiagonalNeglegible(MatrixBase<Derived, T, 2>& M, std::size_t idx) {
    if (std::abs(M({idx + 1, idx})) <
        tolerance_soft * (std::abs(M({idx, idx})) + std::abs(M({idx + 1, idx + 1})))) {
        M({idx + 1, idx}) = T{0};
        return true;
    }
    return false;
}

// Shift for the QR step: the eigenvalue of the trailing 2x2 block
// [[a, b], [c, d]] that is closer to d (a Wilkinson-style shift).
template <isScalar T>
T computeShift(const T& a, const T& b, const T& c, const T& d) {
    auto tr = a + d;
    auto det = a * d - b * c;
    auto disc = std::sqrt(tr * tr - 4.0f * det);
    auto root1 = (tr + disc) / 2.0f;
    auto root2 = (tr - disc) / 2.0f;
    if (std::abs(root1 - d) < std::abs(root2 - d)) {
        return root1;
    } else {
        return root2;
    }
}

// Returns {c, s, r} such that the rotation [[c, s], [-s, c]]
// maps the pair (a, b) to (r, 0).
template <isScalar T, isReal U = RealTypeT<T>>
std::tuple<T, T, U> givensRotation(const T& a, const T& b) {
    if (b == T{0}) {
        return {getSign(a), 0, std::abs(a)};
    } else if (a == T{0}) {
        return {0, getSign(b), std::abs(b)};
    } else if (std::abs(a) > std::abs(b)) {
        auto t = b / a;
        auto u = getSign(a) * std::sqrt(1.0f + t * t);
        return {1.0f / u, t / u, std::abs(a * u)};
    } else {
        auto t = a / b;
        auto u = getSign(b) * std::sqrt(1.0f + t * t);
        return {t / u, 1.0f / u, std::abs(b * u)};
    }
}


// QR algorithm used in eigendecomposition.
// not to be confused with QR decomposition
template <typename Derived, isScalar U, isScalar T = CmpTypeT<U>> requires CmpTypeTo<U, T>
std::vector<T> QRIteration(const MatrixBase<Derived, U, 2>& mat) {
    std::size_t iter = 0;
    std::size_t total_iter = 0;
    std::size_t n = mat.dims(0);

    auto conjif = [&](const auto& v) {
        if constexpr (isComplex<U>) {
            return conj(v);
        } else {
            return v;
        }
    };

    Mat<T> M = mat;
    std::size_t p = n - 1;
    while (true) {
        while (p > 0) {
            if (!isSubdiagonalNeglegible(M, p - 1)) {
                break;
            }
            iter = 0;
            --p;
        }
        if (p == 0) {
            break;
        }
        if (++iter > local_iter) {
            break;
        }
        if (++total_iter > max_iter) {
            break;
        }
        std::size_t top = p - 1;
        while (top > 0 && !isSubdiagonalNeglegible(M, top - 1)) {
            --top;
        }
        auto shift = computeShift(M({p - 1, p - 1}), M({p - 1, p}),
                                  M({p, p - 1}), M({p, p}));

        // initial Givens rotation
        auto x = M({top, top}) - shift;
        auto y = M({top + 1, top});
        auto [c, s, r] = givensRotation(x, y);
        Mat<T> R {{c, -s},
                  {s, c}};
        Mat<T> RT{{c, s},
                  {-s, c}};
        if (r > tolerance_hard) {
            auto Sub1 = M.submatrix({top, top}, {top + 2, n});
            Sub1 = dot(conjif(RT), Sub1);
            std::size_t bottom = std::min(top + 3, p + 1);
            auto Sub2 = M.submatrix({0, top}, {bottom, top + 2});
            Sub2 = dot(Sub2, R);
        }
        for (std::size_t k = top + 1; k < p; ++k) {
            x = M({k, k - 1});
            y = M({k + 1, k - 1});
            std::tie(c, s, r) = givensRotation(x, y);
            if (r > tolerance_hard) {
                M({k, k - 1}) = r;
                M({k + 1, k - 1}) = T{0};
                R({0, 0}) = RT({0, 0}) = R({1, 1}) = RT({1, 1}) = c;
                R({0, 1}) = RT({1, 0}) = -s;
                R({1, 0}) = RT({0, 1}) = s;

                auto Sub1 = M.submatrix({k, k}, {k + 2, n});
                Sub1 = dot(conjif(RT), Sub1);
                std::size_t bottom = std::min(k + 3, p + 1);
                auto Sub2 = M.submatrix({0, k}, {bottom, k + 2});
                Sub2 = dot(Sub2, R);
            }
        }

    }
    std::vector<T> res;
    for (std::size_t k = 0; k < n; ++k) {
        res.push_back(M({k, k}));
    }
    return res;
}

template <typename Derived, typename Derived2, isScalar T>
Mat<T> computeEigenvectors(const MatrixBase<Derived, T, 2>& M,
                           const MatrixBase<Derived2, T, 2>& Q) {
    std::size_t n = M.dims(0);
    Mat<T> X = identity<T>(n);

    for (std::size_t k = n - 1; k < n; --k) {
        for (std::size_t i = k - 1; i < n; --i) {
            X({i, k}) -= M({i, k});
            if (k - i > 1 && k - i - 1 < n) {
                auto row_vec = M.row(i).submatrix(i + 1, k);
                auto col_vec = X.col(k).submatrix(i + 1, k);
                X({i, k}) -= dot(row_vec, col_vec);
            }
            auto z = M({i, i}) - M({k, k});
            if (z == T{0}) {
                z = static_cast<T>(tolerance_hard);
            }
            X({i, k}) /= z;
        }
    }
    return dot(Q, X);
}

// QR algorithm used in eigendecomposition.
// not to be confused with QR decomposition
template <typename Derived, typename Derived2,
        isScalar U, isScalar T = CmpTypeT<U>> requires CmpTypeTo<U, T>
std::vector<std::pair<T, Vec<T>>> QRIterationWithVec(const MatrixBase<Derived, U, 2>& mat,
                                  const MatrixBase<Derived2, U, 2>& V) {
    std::size_t iter = 0;
    std::size_t total_iter = 0;
    std::size_t n = mat.dims(0);

    auto conjif = [&](const auto& v) {
        if constexpr (isComplex<U>) {
            return conj(v);
        } else {
            return v;
        }
    };

    Mat<T> M = mat;
    std::size_t p = n - 1;
    Mat<T> Q = V;
    while (true) {
        while (p > 0) {
            if (!isSubdiagonalNeglegible(M, p - 1)) {
                break;
            }
            iter = 0;
            --p;
        }
        if (p == 0) {
            break;
        }
        if (++iter > 20) {
            break;
        }
        if (++total_iter > max_iter) {
            break;
        }
        std::size_t top = p - 1;
        while (top > 0 && !isSubdiagonalNeglegible(M, top - 1)) {
            --top;
        }
        auto shift = computeShift(M({p - 1, p - 1}), M({p - 1, p}),
                                  M({p, p - 1}), M({p, p}));

        // initial Givens rotation
        auto x = M({top, top}) - shift;
        auto y = M({top + 1, top});
        auto [c, s, r] = givensRotation(x, y);
        Mat<T> R {{c, -s},
                  {s, c}};
        Mat<T> RT{{c, s},
                  {-s, c}};
        if (r > tolerance_hard) {
            auto Sub1 = M.submatrix({top, top}, {top + 2, n});
            Sub1 = dot(conjif(RT), Sub1);
            std::size_t bottom = std::min(top + 3, p + 1);
            auto Sub2 = M.submatrix({0, top}, {bottom, top + 2});
            Sub2 = dot(Sub2, R);
            auto QSub2 = Q.submatrix({0, top}, {n, top + 2});
            QSub2 = dot(QSub2, R);
        }
        for (std::size_t k = top + 1; k < p; ++k) {
            x = M({k, k - 1});
            y = M({k + 1, k - 1});
            std::tie(c, s, r) = givensRotation(x, y);
            if (r > tolerance_hard) {
                M({k, k - 1}) = r;
                M({k + 1, k - 1}) = T{0};
                R({0, 0}) = RT({0, 0}) = R({1, 1}) = RT({1, 1}) = c;
                R({0, 1}) = RT({1, 0}) = -s;
                R({1, 0}) = RT({0, 1}) = s;

                auto Sub1 = M.submatrix({k, k}, {k + 2, n});
                Sub1 = dot(conjif(RT), Sub1);
                std::size_t bottom = std::min(k + 3, p + 1);
                auto Sub2 = M.submatrix({0, k}, {bottom, k + 2});
                Sub2 = dot(Sub2, R);
                auto QSub2 = Q.submatrix({0, k}, {n, k + 2});
                QSub2 = dot(QSub2, R);
            }
        }
    }
    std::vector<std::pair<T, Vec<T>>> res;
    auto eV = computeEigenvectors(M, Q);
    for (std::size_t k = 0; k < n; ++k) {
        res.emplace_back(M({k, k}), normalize(eV.col(k)));
    }
    return res;
}

} // anonymous namespace

template <typename Derived, isScalar U, isScalar T = CmpTypeT<U>> requires CmpTypeTo<U, T>
std::vector<T> eigenval(const MatrixBase<Derived, U, 2>& M) {
    std::size_t n = M.dims(0);
    std::size_t C = M.dims(1);
    if (n != C) {
        throw std::invalid_argument("Not a square Matrix, cannot compute eigenvalues");
    }

    if (n == 1) { // 1 x 1
        return {M({0, 0})};
    } else if (n == 2) { // 2 x 2
        return eigenTwo(M);
    } else if (n == 3) { // 3 x 3
        return eigenThree(M);
    } else { // for 4 x 4 and larger we need the QR algorithm
        auto H = Hessenberg(M);
        return QRIteration(H);
    }
}

template <typename Derived, isScalar U, isScalar T = CmpTypeT<U>> requires CmpTypeTo<U, T>
std::vector<std::pair<T, Vec<T>>> eigenvec(const MatrixBase<Derived, U, 2>& M) {
    std::size_t n = M.dims(0);
    std::size_t C = M.dims(1);
    if (n != C) {
        throw std::invalid_argument("Not a square Matrix, cannot compute eigenvalues");
    }

    if (n == 1) { // 1 x 1
        auto val = M({0, 0});
        Vec<T> vec {T{1}};
        return {{val, vec}};
    } else if (n == 2) { // 2 x 2
        return eigenVecTwo(M);
    } else if (n == 3) { // 3 x 3
        return eigenVecThree(M);
    } else { // for 4 x 4 and larger we need the QR algorithm
        auto [H, V] = HessenbergWithVec(M);
        return QRIterationWithVec(H, V);
    }
}

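
On the duplication question: one common shape for this refactoring is to write a single core loop and inject the variant-specific work (here, the updates to Q) as a callable that is a no-op on the eigenvalue-only path. The following is a self-contained toy sketch of that pattern, not code from the repo; mapping it onto QRIteration/QRIterationWithVec would mean wrapping each Q-update block in the hook:

#include <cstddef>
#include <utility>
#include <vector>

// Toy sketch: one core loop, variant behavior injected as a callable.
// An empty lambda makes the hook calls typically compile away.
template <typename T, typename OnRotate>
std::vector<T> qrCoreSketch(std::vector<T> m, OnRotate&& onRotate) {
    for (std::size_t k = 0; k + 1 < m.size(); ++k) {
        // ... shared iteration work on m goes here ...
        onRotate(k); // the eigenvector variant updates Q at this point
    }
    return m;
}

template <typename T>
std::vector<T> valuesOnly(std::vector<T> m) {
    return qrCoreSketch(std::move(m), [](std::size_t) { /* no-op */ });
}

template <typename T>
std::vector<T> valuesWithVectors(std::vector<T> m, std::vector<T>& q) {
    return qrCoreSketch(std::move(m), [&q](std::size_t k) {
        q[k] += T{1}; // stand-in for the real Q update
    });
}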

matrix – Can I use PartialCorrelationFunction on multivariate data?

I am doing a bit of work on multivariate time series and I need to calculate the partial autocorrelation function of a matrix. I will provide some details below and show an example of how I simulate my data:

Data simulation
First I create the autoregressive process that I need for my work. Mathematica's ARProcess does not seem to work on multivariate data directly; however, I managed to find a workaround:

aCoef = {{{0, 0, 0, 0}, {0, 0, 0, 0.8}, {0, 0, 0, 0}, {0, 0, 0.8, 0}}, {{0, 0, 0, 0}, {0.8, 0, 0, 0}, {0, 0, 0, 0}, {0.8, 0, 0, 0}}};

These coefficients allow me to simulate data that have 4 paths in the time series that influence each other in different time lags. Now that my coefficients are defined, I am simulating the data like this:

data = RandomFunction[ARProcess[aCoef, IdentityMatrix[4]], {0, 10000}];

This process creates a TemporalData object, which I can use for further analysis. I would like to see the partial autocorrelation between the different channels; this would let me check that the simulated data behave as I expect. I would expect to be able to use a function that outputs something like this:

[image: the matrix output of R's pacf function]

This is the output of the pacf function in R, which I call from Mathematica using REvaluate. It shows values different from zero exactly where expected: the fourth entry of the matrix in row 1, column 2; the third entry of the matrix in row 1, column 4; the first entry of the matrix in row 2, column 2; and the first entry of the matrix in row 2, column 4. The matrix rows represent time lags and the columns represent the individual paths of the time series.
I have tried using PartialCorrelationFunction:
[image: output of PartialCorrelationFunction]

According to the Wolfram documentation, either data or a time-series process can be used as the first argument of that function; I have tried both with no success. Although this is not an error in the language, the output does not provide the information I am looking for. I could use the R function, but ideally I would like to stay within Wolfram. Does anyone have an idea how to fix this? Thanks!
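
A partial workaround sketch, under two assumptions worth checking against the documentation: that "Values" is a valid property of this TemporalData object, and that PartialCorrelationFunction accepts a plain list of scalar data. This recovers only each channel's own partial autocorrelation (the diagonal of R's pacf matrices); the cross-channel entries would still need a multivariate implementation such as the REvaluate route:

(* Split the multivariate TemporalData into scalar component series
   and apply PartialCorrelationFunction per channel and lag. *)
vals = data["Values"];    (* assumed property: list of length-4 states *)
series = Transpose[vals]; (* four scalar channel series *)
TableForm[
 Table[PartialCorrelationFunction[s, h], {h, 1, 2}, {s, series}]]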