## linear algebra – canonical form of the skew-symmetric bilinear function and its transformation matrix

Reduce the skew-symmetric bilinear function to canonical form and find the matrix of the transformation.

$$\varphi(x, y) = x_1y_2 - x_2y_1 + 2x_1y_3 - 2x_3y_1 - x_1y_4 + x_4y_1 + 4x_2y_4 - 4x_4y_2 + x_3y_4 - x_4y_3$$

My approach: Let $$\{e_1, e_2, e_3, e_4\}$$ be the given basis of the vector space; the matrix of this form is $$B_{\varphi}^{(e)} = \begin{bmatrix} 0 & 1 & 2 & -1 \\ -1 & 0 & 0 & 4 \\ -2 & 0 & 0 & 1 \\ 1 & -4 & -1 & 0 \end{bmatrix}$$

Let's look for the new basis in the form $$e'_1 = e_1, \quad e'_2 = e_2$$ and $$e'_i = e_i + \dfrac{b_{2i}}{b_{12}} e_1 - \dfrac{b_{1i}}{b_{12}} e_2$$ for $$i \geq 3$$, where $$b_{ij}$$ are the entries of the matrix above (we choose the new basis so that $$\varphi(e'_1, e'_i) = \varphi(e'_2, e'_i) = 0$$ for $$i \geq 3$$).

One can then check that $$e'_3 = e_3 - 2e_2$$ and $$e'_4 = e_4 + 4e_1 + e_2.$$
A direct calculation also shows that $$\varphi(e'_3, e'_4) = \varphi(e_3 - 2e_2,\; e_4 + 4e_1 + e_2) = b_{34} - 2b_{24} + 4b_{31} - 8b_{21} + b_{32} = -7.$$ Let's take the new basis $$(e'') = \{e'_1, e'_2, -e'_3/7, e'_4\}$$; it then follows that in this basis the matrix has canonical form, i.e. $$B_{\varphi}^{(e'')} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix}$$ and the function has the canonical form $$(u_1v_2 - u_2v_1) + (u_3v_4 - u_4v_3),$$ and the transformation matrix is

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 2/7 & -1/7 & 0 \\ 4 & 1 & 0 & 1 \end{bmatrix}$$
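As a sanity check, the whole reduction can be verified numerically. A minimal NumPy sketch (my own, not part of the original question) that recomputes the pairing $$\varphi(e'_3, e'_4)$$ directly from the Gram matrix and checks that the change of basis produces the canonical block matrix:

```python
import numpy as np

# Gram matrix of phi in the original basis {e_1, e_2, e_3, e_4}
B = np.array([[0, 1, 2, -1],
              [-1, 0, 0, 4],
              [-2, 0, 0, 1],
              [1, -4, -1, 0]], dtype=float)

e3p = np.array([0, -2, 1, 0], dtype=float)   # e'_3 = e_3 - 2 e_2
e4p = np.array([4, 1, 0, 1], dtype=float)    # e'_4 = e_4 + 4 e_1 + e_2

pairing = e3p @ B @ e4p                      # phi(e'_3, e'_4)
print(pairing)

# rows = coordinates of the new basis vectors in the old basis;
# e''_3 = e'_3 / pairing rescales so that phi(e''_3, e'_4) = 1
P = np.vstack([np.eye(4)[0], np.eye(4)[1], e3p / pairing, e4p])
C = P @ B @ P.T
print(C)
```

If the printed pairing disagrees with the value obtained by hand, the hand expansion is worth rechecking term by term; the final matrix `C` should come out as the canonical block form either way, since the scaling uses the computed pairing.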

Could someone tell me whether my reasoning and the answers are correct, please?

Would be very grateful!

## Python – Shortening the time for matrix multiplication of pixels

This part of my code takes 30 minutes to create some 300 RGB and depth images pixel by pixel. How can I cut this time down? This is only one part of a larger program, but it is where most of the time goes.

``````
def depth_to_xyz_and_rgb(uu, vv, dep):
    # get z value in meters
    pcz = dep.getpixel((uu, vv))
    if pcz == 60:
        return None

    # back-project the pixel into 3-D camera coordinates
    pcx = (uu - cx_d) * pcz / fx_d
    pcy = (vv - cy_d) * pcz / fy_d

    # apply extrinsic calibration
    P3D = np.array((pcx, pcy, pcz))
    P3Dp = np.dot(RR, P3D) - TT

    # rgb indices that P3D should map to (P3Dp is an array, so index with [])
    uup = P3Dp[0] * fx_rgb / P3Dp[2] + cx_rgb
    vvp = P3Dp[1] * fy_rgb / P3Dp[2] + cy_rgb

    # return a point in the point cloud and its corresponding color indices
    return P3D, uup, vvp
``````
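The biggest win is to stop calling this function once per pixel and instead process the whole depth image as one NumPy array. A vectorized sketch of the same computation (my own illustration: the calibration parameters `fx_d, fy_d, cx_d, cy_d, RR, TT, fx_rgb, fy_rgb, cx_rgb, cy_rgb` are the globals from the code above, passed in here as parameters, and the depth image is assumed already converted to an array, e.g. with `np.asarray(dep)`):

```python
import numpy as np

def depth_to_xyz_and_rgb_vectorized(depth, fx_d, fy_d, cx_d, cy_d,
                                    RR, TT, fx_rgb, fy_rgb, cx_rgb, cy_rgb):
    """Back-project an entire depth image at once instead of pixel by pixel."""
    h, w = depth.shape
    uu, vv = np.meshgrid(np.arange(w), np.arange(h))

    valid = depth != 60                      # same invalid-depth sentinel as above
    z = depth.astype(np.float64)
    x = (uu - cx_d) * z / fx_d
    y = (vv - cy_d) * z / fy_d

    # stack to shape (3, h*w) and apply the extrinsics to all points in one matmul
    P3D = np.stack([x, y, z]).reshape(3, -1)
    P3Dp = RR @ P3D - TT.reshape(3, 1)

    uup = P3Dp[0] * fx_rgb / P3Dp[2] + cx_rgb
    vvp = P3Dp[1] * fy_rgb / P3Dp[2] + cy_rgb
    return P3D, uup, vvp, valid.ravel()
```

The per-pixel Python call overhead and `getpixel` disappear entirely; invalid pixels are returned as a boolean mask instead of an early `return`, so the caller can filter `P3D[:, valid]` afterwards.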

Example Pictures.

## mathematical optimization – Find the minimum subject to the constraint that the matrix is positive definite

Suppose I want to find the minimum value of the determinant of a matrix subject to the condition that the matrix is positive definite. So I try:

``````M = {{a,0},{0,b}}

FindMinimum[{Det[M],a>=1,b>=1,PositiveDefiniteMatrixQ[M]},{a,b}]
``````

This returns the error `Constraints in {False} are not all equality or inequality constraints...`, which suggests that `PositiveDefiniteMatrixQ` is evaluated immediately for symbolic `a,b` rather than re-evaluated at each iteration's `a,b` values.

I then tried to delay the evaluation of `PositiveDefiniteMatrixQ` with `Delayed`, which returns a similar error: `Constraints in {Delayed[PositiveDefiniteMatrixQ[M]],a>=1,b>=1} are not all equality or inequality constraints`.

How can I impose such a constraint in the `FindMinimum` function?
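For this diagonal 2×2 example, positive definiteness reduces to the explicit scalar inequalities `a > 0` and `b > 0` (already implied by `a >= 1, b >= 1`), which is the standard workaround: replace the Boolean `PositiveDefiniteMatrixQ` test with equivalent inequality constraints, e.g. positivity of the leading principal minors. The same idea sketched outside Mathematica with SciPy, as an illustration of the reformulation rather than a fix to the `FindMinimum` call itself:

```python
import numpy as np
from scipy.optimize import minimize

# det(M) for M = diag(a, b); positive definiteness of diag(a, b) is exactly
# a > 0 and b > 0, which here is subsumed by the bounds a >= 1, b >= 1,
# so no Boolean matrix predicate is needed in the constraint set.
det = lambda p: p[0] * p[1]

res = minimize(det, x0=[2.0, 2.0], bounds=[(1, None), (1, None)])
print(res.x, res.fun)   # minimizer at a = b = 1 with det = 1
```

For non-diagonal matrices the analogous move is to constrain all leading principal minors to be positive (Sylvester's criterion), which yields ordinary scalar inequality constraints the optimizer can handle.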

## list – Problems finding a certain value in a matrix in Python

Sorry for the length of the description.

I have a problem with my code. I have a list like:

`List[a][b][c]`

where:

1. The addresses of my files are stored in the first position [a].
2. The second position [b] stores the data values from the columns of my files (depth, time, GR, NPHI, …).
3. The third position [c] stores the values for each row of the data columns in position [b].

I need to look for certain values in my data and link them to values in another column.

Example:

• In the data from the first file, [a] = [0]
• search for GR = 37.1451
• find which DEPTH corresponds to this GR value
• then store this depth in a list that will be used later for other operations.

The program analyzes multiple .LAS files, and there is no way to change them, since they will be in the public area of the university.

I tried to use `ArquivosLas[0][1].index(37.1451)`, but since the first list entry is a file object, it doesn't work:

``````
In [129]: type(ArquivosLas[0])
Out[129]: lasio.las.LASFile

In [132]: type(ArquivosLas[1])
Out[132]: numpy.ndarray

In [133]: type(ArquivosLas[1][1])
Out[133]: numpy.float64
``````

I thought about copying the numeric data from the original list – the second [b] and third [c] positions – into another vector, dropping position [a], and converting the new list into a plain matrix of numbers.

I am attaching the code I used and a photo of how the data looks.

``````
from tkinter import *
from tkinter import filedialog
import lasio
import numpy as np

EnderecoArquivosLas = list()
ArquivosLas = list()
x = 0

root = Tk()

EnderecoArquivosLas = filedialog.askopenfilenames(parent=root, title="Selecione os arquivos com banco de dados", filetypes=(("las files", "*.las"), ("all files", "*.*")))

root.splitlist(EnderecoArquivosLas)

root.mainloop()

for i in EnderecoArquivosLas:
    ArquivosLas.append(lasio.read(i))  # read each selected .LAS file
    x = x + 1

# search for the specific values
PosicaoGrTopo = ArquivosLas[0][1].index(37.1451)
``````
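For the lookup itself, an exact `.index(37.1451)` match on floating-point data is fragile even once the indexing is fixed; the usual NumPy idiom is a nearest-value search with `np.argmin(np.abs(...))`. A sketch with hypothetical curve arrays (with lasio the curves would come from something like `las['GR']` and `las['DEPT']`, but the mnemonics depend on the file):

```python
import numpy as np

def depth_at_gr(depth, gr, target):
    """Return the depth whose GR value is closest to `target`."""
    idx = int(np.argmin(np.abs(gr - target)))
    return depth[idx]

# hypothetical curves for illustration only
depth = np.array([100.0, 100.5, 101.0, 101.5])
gr = np.array([35.2, 37.1451, 40.0, 36.9])

print(depth_at_gr(depth, gr, 37.1451))   # -> 100.5
```

This also degrades gracefully when 37.1451 is not stored exactly in the file: it returns the depth of the closest GR sample instead of raising `ValueError` the way `.index` would.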

## How to prove the \$n \times n\$ matrix \$A = \big(\frac{1}{i+j+1}\big)_{i,j \in [n]}\$ is positive semi-definite?

We tried to show that the matrix
$$A = \Big(\frac{1}{i+j+1}\Big)_{i,j \in [n]}$$
is positive semi-definite. We tried induction via the Schur complement, but there is no easy analytical way to find $$A_{n-1}^{-1}$$ for each $$n$$.
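Not a proof, but a quick numerical sanity check of the claim (my own sketch) for a small $$n$$:

```python
import numpy as np

n = 5
# A[i, j] = 1 / (i + j + 1) with 1-based indices i, j in [n]
A = np.array([[1.0 / (i + j + 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

eigs = np.linalg.eigvalsh(A)
print(eigs.min())   # smallest eigenvalue: tiny but positive
```

The matrix is Hilbert-like and severely ill-conditioned, so for larger $$n$$ the smallest eigenvalue sinks toward machine precision and this numerical check becomes unreliable; it only corroborates, and cannot replace, an analytic argument.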

## Inverse of the All-One matrix

What is the easiest way to find out the inverse of an all-one matrix?
The matrix has the form (1 1 … 1), where each 1 represents a column vector of 1s.

Thank you.

## Matrix – Apply Outer to a list of matrices and vectors

I have a list M of square n×n matrices, `{M1,M2,M3,...}`, a list V of n×1 vectors, `{v1,v2,v3,...}`, and the corresponding transposes of these vectors, `{r1,r2,r3,...}`. I am trying to build the matrix
`{{r1.M1.v1, r2.M1.v2, r3.M1.v3,...},{r1.M2.v1, r2.M2.v2, r3.M2.v3,...},...}`. Note that M and V are not necessarily the same length (e.g. there may be only 5 matrices but 100 vectors).

It seemed like `Outer` would do the job, as in:

`Outer[Transpose[#2].#1.#2 &, M, V]`

which should then give just a two-dimensional matrix of scalars.

However, I think the problem is that the lists M and V are themselves technically lists of lists (M is a list of matrices, V is a list of vectors), so `Outer` threads down into the sublists instead of performing the calculation I want, and the result is a high-dimensional object. I tried playing around with different flattening schemes, but haven't quite figured it out yet. Any help implementing this functionality would be appreciated (and is `Outer` even the right functional tool here?).
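For comparison, the intended result can be pinned down precisely outside Mathematica: in NumPy the whole array `result[j, i] = ri.Mj.vi` is a single `einsum` call (my own illustration with random data, not the original M and V):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3, 3))    # 5 matrices, each 3 x 3
V = rng.standard_normal((100, 3))     # 100 vectors; lengths need not match

# result[j, i] = V[i] . M[j] . V[i], i.e. the quadratic form r_i.M_j.v_i
result = np.einsum('ia,jab,ib->ji', V, M, V)
print(result.shape)   # -> (5, 100)
```

In Mathematica itself, the usual fix for exactly this threading problem is to give `Outer` an explicit level specification (e.g. `Outer[Transpose[#2].#1.#2 &, M, V, 1]`) so that matrices and vectors are treated as single elements; it is worth checking the `Outer` documentation for the exact level-argument form.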

## Combinatorics – condition for the absence of a trivial matrix decomposition

Let $$A_1, \dots, A_n$$ be matrices with no row or column of $$0$$s, and such that for each $$i = 1, \dots, n$$ there is no decomposition of $$A_i$$ of the form
$$A_i = \oplus_{j=1}^n B_j \qquad (\exists j_1 \neq j_2)\;(\exists k \in \mathbb{R})\, B_{j_1} = kB_{j_2}.$$
The direct sum of matrices is defined here.

Is there a reasonable compatibility criterion on the $$A_i$$ so that $$A = \oplus_{i=1}^n A_i$$ admits no such decomposition; that is, there are no $$C_1, \dots, C_t$$ such that
$$A = \oplus_{i=1}^t C_i \quad \mbox{and} \quad (\exists t_1 \neq t_2)\;(\exists k \in \mathbb{R})\, C_{t_1} = kC_{t_2}.$$

Non-example:
An interesting non-example that illustrates part of the problem is the following:
$$A_1 \triangleq \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad A_2 \triangleq (2);$$
then $$A_1, A_2$$ are distinct and of different sizes, but taking $$B_1$$ to be the 2×2 matrix of 1s, $$B_2 = (1)$$, and $$B_3 = A_2$$ gives the decomposition that I don't want.
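To make the non-example concrete, the two direct sums can be written out in a few lines (my own sketch; `direct_sum` is a small helper mimicking the usual block-diagonal direct sum):

```python
import numpy as np

def direct_sum(*blocks):
    """Block-diagonal direct sum of matrices."""
    n = sum(b.shape[0] for b in blocks)
    m = sum(b.shape[1] for b in blocks)
    out = np.zeros((n, m), dtype=blocks[0].dtype)
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

A1 = np.array([[1, 1, 0],
               [1, 1, 0],
               [0, 0, 1]])
A2 = np.array([[2]])
A = direct_sum(A1, A2)

# the unwanted finer decomposition: B1 = 2x2 all-ones, B2 = (1), B3 = A2 = 2 * B2
B1 = np.ones((2, 2), dtype=int)
B2 = np.array([[1]])
B3 = A2
print(np.array_equal(A, direct_sum(B1, B2, B3)))   # -> True
```

The check confirms that the same $$A$$ decomposes both as $$A_1 \oplus A_2$$ and as $$B_1 \oplus B_2 \oplus B_3$$ with $$B_3 = 2B_2$$, which is exactly the proportional-blocks situation the question wants to exclude.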

## fa.functional analysis – smallest eigenvalue of a large kernel matrix

I am interested in the asymptotics of the minimum eigenvalue $$\lambda_n^n$$ of a class of kernel matrices $$P = (K(x_i - x_j))_{1 \le i, j \le n},$$ with $$x_i$$ evenly distributed in the unit cube of $$\mathbb{R}^d$$.

Here the kernel $$K$$ is positive, symmetric, and of finite smoothness, i.e. the Fourier transform satisfies $$\widehat{K}(\omega) \sim \|\omega\|^{-\beta - d},$$
where $$\beta > 0$$ is the smoothness parameter and $$d$$ is the dimension.
According to 'Error estimates and condition numbers for radial basis function interpolation' (Schaback), the minimum eigenvalue satisfies
$$c\, n^{-\beta/d} \le \lambda^n_n \le C\, n^{-\beta/d} \quad \mbox{for some } c, C > 0.$$
My question is whether there is a result on the convergence of $$n^{\beta/d} \lambda_n^n$$, i.e. $$n^{\beta/d} \lambda_n^n \rightarrow A$$ as $$n \rightarrow \infty$$? Is there any way to prove such a result?

There is a closely related question about the eigenvalues of the continuous operator $$Tf := \int K(x - y) f(y)\, dy;$$ the kernel matrix can be viewed as a discretization of the continuous operator.
Let $$\lambda_1 > \lambda_2 > \ldots$$ be the eigenvalues of $$T$$.
It is known that the $$\lambda_i$$ can be written as Kolmogorov $$n$$-widths, and classical results of Joseph Jerome imply that
$$\lambda_i \sim C i^{-(\beta + d)/d} \quad \mbox{for some } C > 0.$$
It is therefore natural to expect a similar result for the kernel matrix.
There has also been work quantifying $$|\lambda^n_i / n - \lambda_i|$$, e.g. 'Accurate error bounds for the eigenvalues of the kernel matrix' (Braun). However, those estimates are too coarse to conclude.
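The $$n^{-\beta/d}$$ rate is easy to observe in a small experiment. A sketch (my own illustration of the setup, not part of the question): take $$d = 1$$, an equispaced grid standing in for "evenly distributed" points, and the exponential kernel $$K(r) = e^{-|r|}$$, whose Fourier transform decays like $$|\omega|^{-2}$$, i.e. $$\beta = 1$$:

```python
import numpy as np

# Empirical look at lambda_min ~ n^(-beta/d): with d = 1 and beta = 1,
# the rescaled quantity n * lambda_min should stay roughly constant.
for n in (100, 200, 400):
    x = np.linspace(0.0, 1.0, n)                      # equispaced points in [0, 1]
    P = np.exp(-np.abs(x[:, None] - x[None, :]))      # kernel matrix K(x_i - x_j)
    lam_min = np.linalg.eigvalsh(P).min()
    print(n, lam_min, n * lam_min)
```

With i.i.d. uniform points the separation distance is much smaller than on the grid, which affects the constants (Schaback's lower bound is stated in terms of the separation distance), so the equispaced grid is the cleanest way to see the rate itself.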

## group theory – How do I check whether a given matrix is in the image of a representation?

Let $$G$$ be a compact, simple Lie group and let $$\rho$$ be a (faithful, unitary) irreducible representation of it of $$\mathbb K$$-dimension $$n$$, where $$\mathbb K = \mathbb C / \mathbb R / \mathbb H$$ according as $$\rho$$ is complex/real/pseudoreal. It follows that there is a subgroup of $$SU(n) / SO(n) / Sp(n)$$, respectively, that is isomorphic to $$G$$, and one can think of $$\rho$$ as a map from $$G$$ onto this subgroup.

How can I check whether a particular matrix $$M \in SU(n) / SO(n) / Sp(n)$$ is in the image of $$\rho$$? In other words, given such a matrix $$M$$, how can I decide whether there is some $$g \in G$$ such that $$\rho(g) = M$$?

For concreteness, say $$G = G_2$$ is the smallest exceptional simple group, and let $$\rho$$ be the representation with highest weight $$2\omega_2$$ (it is real and $$27$$-dimensional). This means that for every $$g \in G_2$$, $$\rho(g)$$ is a $$27$$-dimensional orthogonal matrix. If I take an arbitrary $$27$$-dimensional orthogonal matrix $$M$$, how can I check whether it can be written as $$M = \rho(g)$$ for some $$g \in G_2$$?

Note: I am particularly interested in the case where $$M$$ is diagonal, but I would also be interested to learn about the general case. In the diagonal case, where everything is abelian and one can essentially restrict to a Cartan subalgebra, I expect the image of $$\rho$$ can be described fairly explicitly. In general, I would not be surprised if one has to work harder.
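The diagonal case can at least be organized concretely: restricted to a maximal torus, $$\rho$$ sends torus coordinates $$\theta \in \mathbb{R}^r$$ to $$\mathrm{diag}(e^{i \mu_k \cdot \theta})$$, where $$\mu_k$$ runs over the weights of $$\rho$$, so membership amounts to solving $$\mu_k \cdot \theta \equiv \varphi_k \pmod{2\pi}$$ for the phases $$\varphi_k$$ of $$M$$. A brute-force sketch of that reduction with made-up rank-2 weights (emphatically NOT the actual weight system of the 27-dimensional $$G_2$$ representation):

```python
import numpy as np
from itertools import product

# A diagonal M = diag(exp(i * phi_k)) lies in the torus image iff
#   mu_k . theta = phi_k (mod 2*pi)  for all k
# is solvable for theta.  Illustrative weights only:
weights = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])

def in_torus_image(weights, phi, max_wind=1, tol=1e-8):
    """Brute-force winding numbers m and solve weights . theta = phi + 2*pi*m."""
    for m in product(range(-max_wind, max_wind + 1), repeat=len(phi)):
        target = phi + 2 * np.pi * np.array(m)
        theta, *_ = np.linalg.lstsq(weights, target, rcond=None)
        if np.allclose(weights @ theta, target, atol=tol):
            return True
    return False

theta_true = np.array([0.3, 1.1])
phi = weights @ theta_true                   # phases of a genuine image point
print(in_torus_image(weights, phi))          # -> True
print(in_torus_image(weights, phi + np.array([0.5, 0.0, 0.0])))   # -> False
```

In the toy example the weights sum to zero, so solvability is equivalent to the phases summing to a multiple of $$2\pi$$, which is why the perturbed point fails; for the real question one would substitute the actual weight system of $$\rho$$ and bound the winding numbers appropriately.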