## co.combinatorics – total unimodularity of a matrix

Let $$G$$ be the node-arc incidence matrix of a given directed network (rows of $$G$$ correspond to nodes and its columns correspond to arcs). Let $$B_1, \dots, B_K$$ designate a partition of the nodes of the network. Suppose the network is such that a directed arc can go from a node in $$B_k$$ to a node in $$B_\ell$$ only if $$k < \ell$$. Let $$H$$ denote a matrix with $$K$$ rows whose columns are indexed by the arcs of the underlying network. We assume that $$H$$ is such that its $$(k, e)$$-th entry is equal to one if $$e \in B_k$$ and zero otherwise.

Is the matrix $$(H; G)$$ (obtained by stacking the rows of $$H$$ on top of those of $$G$$) totally unimodular? If not, can you give a counterexample?

I numerically examined some examples and confirmed total unimodularity in each case. I thought it might be possible to exploit the structure of $$H$$ (note the special row structure) to formally prove the result. I tried to use the Ghouila-Houri condition (see https://en.wikipedia.org/wiki/Unimodular_matrix), which appears to be a good candidate for exploiting the row structure, but so far I have not been successful.
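For small instances like the ones examined numerically, total unimodularity can be verified by brute force: check that every square submatrix has determinant in {−1, 0, 1}. A minimal sketch of such a checker (the function name is mine; the enumeration is exponential in the matrix size, so this is only for tiny examples):

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(M):
    """Brute-force TU check: every square submatrix must have
    determinant in {-1, 0, 1}. Exponential cost; tiny matrices only."""
    rows, cols = M.shape
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            sub = M[list(ri), :]
            for ci in combinations(range(cols), k):
                d = round(np.linalg.det(sub[:, list(ci)]))
                if d not in (-1, 0, 1):
                    return False
    return True
```

For example, the node-arc incidence matrix of any digraph passes (a classical fact), while a matrix containing a 2×2 submatrix of determinant ±2 fails.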

Posted on Categories Articles

## How complex is it to compute $$e^A$$ for a Hermitian matrix $$A$$?

If $$A$$ is a Hermitian matrix of size $$N \times N$$, what is the number of steps required to calculate $$e^A$$?
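For context on the cost: for a Hermitian $$A$$ the standard route is to diagonalize, $$A = V \Lambda V^H$$, and exponentiate the real eigenvalues, so the arithmetic cost is dominated by the $$O(N^3)$$ eigendecomposition. A minimal NumPy sketch of that approach (function name mine):

```python
import numpy as np

def expm_hermitian(A):
    """Matrix exponential of a Hermitian matrix via eigendecomposition.
    The dominant cost is np.linalg.eigh, O(N^3) arithmetic operations."""
    w, V = np.linalg.eigh(A)            # A = V diag(w) V^H, w real
    return (V * np.exp(w)) @ V.conj().T  # V diag(exp(w)) V^H
```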

## ordinary differential equations – Calculate the determinant of a solution matrix for the following linear system.


## analytic geometry – semiaxes of the ellipsoid from the quadric matrix

In three-dimensional Euclidean space, a quadric surface can be defined by the following analytic equation:

$$e_1 x^2 + e_5 y^2 + e_8 z^2 + 2 e_2 x y + 2 e_3 x z + 2 e_6 y z + 2 e_4 x + 2 e_7 y + 2 e_9 z + e_{10} = 0$$

or in homogeneous matrix form:

$$\texttt{x}^T \cdot \texttt{E} \cdot \texttt{x} = \begin{bmatrix} x_1 & x_2 & x_3 & 1 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 & e_4 \\ e_2 & e_5 & e_6 & e_7 \\ e_3 & e_6 & e_8 & e_9 \\ e_4 & e_7 & e_9 & e_{10} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{bmatrix} = 0$$

Given only the matrix $$\texttt{E}$$, and knowing that the surface is an ellipsoid, the center $$\texttt{x}_\texttt{c}$$ of the ellipsoid is obtained by
$$\texttt{x}_\texttt{c} = -\texttt{E}_{33}^{-1} \begin{bmatrix} e_4 \\ e_7 \\ e_9 \end{bmatrix}, \quad \text{where} \quad \texttt{E}_{33} = \begin{bmatrix} e_1 & e_2 & e_3 \\ e_2 & e_5 & e_6 \\ e_3 & e_6 & e_8 \end{bmatrix}$$

and the rotation matrix can be extracted from the normalized eigenvectors of $$\texttt{E}_{33}$$.

But I do not know how to get the semiaxes of the ellipsoid. Any help is appreciated.
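Assuming the conventions above (symmetric $$\texttt{E}$$, positive definite $$\texttt{E}_{33}$$), the whole pipeline can be sketched numerically. Translating the quadric to its center leaves $$y^T \texttt{E}_{33}\, y = k$$ with $$k = b^T \texttt{E}_{33}^{-1} b - e_{10}$$ where $$b = (e_4, e_7, e_9)^T$$, which gives semiaxes $$\sqrt{k/\lambda_i}$$ for the eigenvalues $$\lambda_i$$ of $$\texttt{E}_{33}$$ (a sketch; the function name is mine):

```python
import numpy as np

def ellipsoid_params(E):
    """Center, rotation, and semiaxes of an ellipsoid given its
    symmetric 4x4 quadric matrix E (E_33 assumed positive definite)."""
    A = E[:3, :3]                     # E_33, the quadratic part
    bvec = E[:3, 3]                   # linear part (e4, e7, e9)
    e10 = E[3, 3]
    center = -np.linalg.solve(A, bvec)
    # After translating to the center: y^T A y = k
    k = bvec @ np.linalg.solve(A, bvec) - e10
    lam, R = np.linalg.eigh(A)        # columns of R: rotation axes
    semiaxes = np.sqrt(k / lam)
    return center, R, semiaxes
```

As a sanity check, for an axis-aligned ellipsoid $$(x-c)^T \mathrm{diag}(1/4, 1/9, 1)(x-c) = 1$$ this recovers center $$c$$ and semiaxes 2, 3, 1.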


## solve the equations by declaring an n * n matrix or the system of equations

I want to solve $$n$$ equations with the Gaussian elimination method; I have given the root formulation and methodology in the picture. But I do not know how to proceed. Should I declare an $$n \times n$$ matrix, or should I declare the system of equations as in the picture above? Thanks for your help.
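The picture is not available here, but either representation ultimately reduces to the same data: an $$n \times n$$ coefficient matrix and a right-hand-side vector. A minimal sketch of Gaussian elimination with partial pivoting on that matrix form (NumPy used for illustration, since the question's language is unstated):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Pivot: bring the largest entry of column k to the diagonal.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate column k below the diagonal.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```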

## Coordinate transformation – distance matrix within a neural network

So I want to create a NetGraph, which is a $$n times 3$$ (With $$n$$ varying length) List of 3D coordinates as input and creates one $$n times n$$ Distance matrix or one $$frac {n (n-1)} {2}$$ List of distances between these points. I would run this net for two lists of 3D coordinates and feed the results into a MeanSquaredLossLayer. This would allow the neural network to learn to reproduce an array of points with the loss that is invariant with respect to shifts and rotations in 3D. However, I have only vague ideas about which modules to use. Maybe NetMapOperators and NetMapThreadOperators? If yes how? Thank you for the reply in advance!
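Whichever Net* operators end up being used, the target computation itself is simple. As a reference for what the layer should produce, here is the distance matrix and the flattened $$n(n-1)/2$$ list in NumPy (a sketch of the math, not Mathematica NetGraph code; function names mine):

```python
import numpy as np

def pairwise_distances(pts):
    """n x 3 array of 3D points -> n x n Euclidean distance matrix."""
    diff = pts[:, None, :] - pts[None, :, :]   # shape (n, n, 3)
    return np.sqrt((diff ** 2).sum(axis=-1))

def upper_triangle(D):
    """Flatten a distance matrix to its n(n-1)/2 pair distances."""
    return D[np.triu_indices(len(D), k=1)]
```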


## Linear Algebra – What will be the third column of the given matrix?

That's the problem. My attempt: $$\frac{1}{\sqrt 2} x + 0 \cdot y + \frac{1}{\sqrt 2} z = 0 \implies x + z = 0 \implies x = -z$$.

And $$\frac{-1}{\sqrt 3} x + \frac{-1}{\sqrt 3} y + \frac{1}{\sqrt 3} z = 0$$. Now putting $$x = -z$$ we have $$\frac{-2}{\sqrt 3} x + \frac{-1}{\sqrt 3} y = 0$$, i.e. $$-2x - y = 0$$, so $$x = -y/2$$.

Now I take $$x = c$$; then the third column becomes $$\begin{bmatrix} -2 \\ -1 \\ 2 \end{bmatrix}$$.

Is it true?
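One way to sanity-check this numerically: if the first two columns read off from the equations in the attempt are $$(1/\sqrt 2, 0, 1/\sqrt 2)$$ and $$(-1/\sqrt 3, -1/\sqrt 3, 1/\sqrt 3)$$ (an assumption inferred from the working above), a unit vector orthogonal to both is their cross product:

```python
import numpy as np

# First two columns as inferred from the orthogonality equations above.
c1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
c2 = np.array([-1.0, -1.0, 1.0]) / np.sqrt(3)

# A third column completing an orthonormal set is the cross product;
# it is already unit length because c1 and c2 are orthonormal.
c3 = np.cross(c1, c2)
```

Checking `c3` against the candidate column (up to sign and scaling) answers the question directly.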

## Matrix – How do I solve a large system of differential equations with indexed functions?

I firmly believe that matrix methods are the right answer here, but I cannot see how to set them up. Imagine a large number of coupled functions:

AB(m,n)(t)


where m and n are integer indices that can run into the hundreds or thousands, and t is a continuous time variable. Think of AB(m, n) as the concentration of AB of type (m, n); these concentrations of the different types can evolve over time. In addition, we have a single additional function:

B(t)


which is also a concentration that evolves over time, but of which there is only one type. The initial conditions are:

AB(a,0)(0) = A0
AB(anythingelse)(0) = 0
B(0) = B0


Here a is a constant integer in the hundreds to thousands.

AB(m, n) can be converted into other types by two processes:

AB(m,n) + B --> AB(m-1,n+b)
AB(m,n) --> AB(m-1,n-1)


Here b is a constant integer that is significantly smaller than a. Thus the differential equations that govern the evolution of the system are:

D(AB(m, n)(t), t) == -k1(m, n) AB(m, n)(t) B(t)
+k1(m + 1, n - b) AB(m + 1, n - b)(t) B(t)
-k2(m, n) AB(m, n)(t)
+k2(m + 1, n + 1) AB(m + 1, n + 1)(t)
D(B(t), t) == -k1(m, n) AB(m, n)(t) B(t)


Good assumptions for the forms for k1 and k2 that we can use for testing are:

k1(m_) = (m/a) k3
k2(m_, n_) = k4 n/(n + k5/m) + n k6 k1(m)


Here k3, k4, k5 and k6 are positive real numbers.

How on earth do I organize this system so that I can solve it numerically, with NDSolve (or ParametricNDSolve) or similar? The ultimate goal is to have an observable function that looks something like:

Sum(m AB(m,n),{m,0,a},{n,0,infinity})


This function would then be fitted to experimental data by varying k3–k6 and possibly a and b. Later generalizations might be a broader distribution of starting concentrations rather than all being identical, a range of different possible values of b, etc.
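One standard way to organize such a system (whether in Mathematica via NDSolve over indexed variables or elsewhere) is to flatten AB(m, n) into a single state vector and write one right-hand-side function over it. A sketch in SciPy with toy parameter values — all of a, b, k3–k6, A0, B0 below are made-up test numbers, and n is capped at its largest reachable value a·b:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sizes and rate constants (assumed, for testing only).
a, b = 4, 1
k3, k4, k5, k6 = 1.0, 0.5, 0.2, 0.1
A0, B0 = 1.0, 2.0
nmax = a * b                       # largest n reachable from AB(a, 0)

def rhs(t, y):
    AB = y[:-1].reshape(a + 1, nmax + 1)
    B = y[-1]
    dAB = np.zeros_like(AB)
    dB = 0.0
    for m in range(1, a + 1):      # m = 0 species cannot react further
        k1m = (m / a) * k3
        for n in range(nmax + 1):
            k2mn = k4 * n / (n + k5 / m) + n * k6 * k1m
            r1 = k1m * AB[m, n] * B     # AB(m,n) + B -> AB(m-1, n+b)
            r2 = k2mn * AB[m, n]        # AB(m,n) -> AB(m-1, n-1)
            dAB[m, n] -= r1 + r2
            if n + b <= nmax:
                dAB[m - 1, n + b] += r1
            if n >= 1:
                dAB[m - 1, n - 1] += r2
            dB -= r1
    return np.append(dAB.ravel(), dB)

# Initial conditions: all mass in AB(a, 0), plus free B.
y0 = np.zeros((a + 1) * (nmax + 1) + 1)
y0[a * (nmax + 1)] = A0
y0[-1] = B0

sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-8, atol=1e-10)

# The observable Sum(m AB(m,n)) at the final time:
ABf = sol.y[:-1, -1].reshape(a + 1, nmax + 1)
obs = (np.arange(a + 1)[:, None] * ABf).sum()
```

A useful check on any such implementation: both reactions map one AB species to another, so the total AB concentration is conserved, and B can only decrease.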


## Control Systems – Problems solving the Kalman filter in Mathematica: how to define a spectral density matrix and calculate the covariance matrix?

I am reading the classic book on state-space control theory by Bernard Friedland.

To deepen my understanding of the Kalman filter, I would like to reproduce Example 11A, the inverted pendulum on page 418.

However, I cannot find a way to define the spectral density matrices (of the process noise and the observation noise in equation (11A.2)) in Mathematica. Also, how can I calculate the covariance matrix of the noise from a spectral density matrix?
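On the second part: for an LTI system $$\dot x = A x + w$$ driven by white noise with constant spectral density matrix $$Q$$, the steady-state state covariance $$P$$ solves the continuous Lyapunov equation $$A P + P A^T + Q = 0$$. A sketch of that computation (in Python/SciPy rather than Mathematica, with made-up matrices; Mathematica's LyapunovSolve plays the analogous role):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Made-up stable system matrix and noise spectral density matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # eigenvalues -1, -2: stable
Q = np.diag([0.0, 1.0])            # noise enters the second state only

# Steady-state covariance: solve A P + P A^T + Q = 0.
P = solve_continuous_lyapunov(A, -Q)
```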

Thank you very much.

## numerical analysis – How to solve a quadratic matrix equation with a positive semidefinite constant?

I have the following quadratic matrix equation:

$$XAX + X = B$$

where $$A$$ and $$B$$ are both positive definite matrices.

The constraint here is that $$X$$ is actually a covariance matrix and must therefore be positive semidefinite.

All I know is that, without the constraint, the equation can be solved by Bernoulli iteration of the following form:

$$X_{k+1} = -A^{-1}(I - B X_k^{-1})$$

However, this iteration does not seem to preserve the constraint.

Any pointers would be appreciated, thank you.
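One approach that does preserve positive definiteness, under the stated assumption that $$A$$ and $$B$$ are symmetric positive definite: substituting $$Y = A^{1/2} X A^{1/2}$$ turns $$XAX + X = B$$ into $$Y^2 + Y = C$$ with $$C = A^{1/2} B A^{1/2}$$, whose SPD solution is $$Y = \frac{1}{2}(\sqrt{I + 4C} - I)$$; transforming back, $$X = A^{-1/2} Y A^{-1/2}$$ is SPD by congruence. A sketch (function names mine):

```python
import numpy as np

def spd_sqrt(M):
    """SPD square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def solve_xax(A, B):
    """Solve X A X + X = B for SPD X, given SPD A and B.
    With Y = A^{1/2} X A^{1/2} and C = A^{1/2} B A^{1/2}, the equation
    becomes Y^2 + Y = C, solved by the SPD root Y = (sqrt(I+4C) - I)/2."""
    As = spd_sqrt(A)
    C = As @ B @ As
    I = np.eye(len(A))
    Y = (spd_sqrt(I + 4 * C) - I) / 2
    Asi = np.linalg.inv(As)
    X = Asi @ Y @ Asi
    return (X + X.T) / 2            # symmetrize against roundoff
```

Since $$Y$$ is a matrix function of the SPD matrix $$C$$, the scalar identity $$f(\lambda)^2 + f(\lambda) = \lambda$$ for $$f(\lambda) = (\sqrt{1+4\lambda}-1)/2$$ carries over eigenvalue by eigenvalue, so the residual is exact up to roundoff.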
