matrices – Find unknown matrix values from 2 equations

I have two matrices of dimension 10×10; let's call them A and B.

I need to find a matrix C (also 10×10) that satisfies the following 2 equations:

  1. A = CBC^T
  2. B = CAC^T

How can I solve this?

Matrix B –
((0.125+0.03125i,0,0,0,0,-0.0625,-0.0625,-0.03125i,0,0),
(0,0.0625,0,0,0,0,-0.0625,0,0,0),
(0,0,0.0625,0,0,0,0,-0.0625,0,0),
(0,0,0,0.125+0.03125i,0,0,-0.03125,0,-0.0625,-0.0625),
(0,0,0,0,0.0625,0,0,0,0,-0.0625),
(-0.0625,0,0,0,0,0.0625,0,0,0,0),
(-0.0625,-0.0625,0,-0.03125,0,0,0.125+0.03125i,0,0,0),
(-0.03125i,0,-0.0625,0,0,0,0,0.0625+0.03125i,0,0),
(0,0,0,-0.0625,0,0,0,0,0.0625,0),
(0,0,0,-0.0625,-0.0625,0,0,0,0,0.125))

Matrix A –
((0.125+0.03125i,0,0,0,0,-0.0625,-0.0625,-0.03125,0,0),
(0,0.0625,0,0,0,0,-0.0625,0,0,0),
(0,0,0.0625,0,0,0,0,-0.0625,0,0),
(0,0,0,0.125+0.03125i,0,0,-0.03125i,0,-0.0625,-0.0625),
(0,0,0,0,0.0625,0,0,0,0,-0.0625),
(-0.0625,0,0,0,0,0.0625,0,0,0,0),
(-0.0625,-0.0625,0,-0.03125i,0,0,0.125+0.03125i,0,0,0),
(-0.03125,0,-0.0625,0,0,0,0,0.0625+0.03125i,0,0),
(0,0,0,-0.0625,0,0,0,0,0.0625,0),
(0,0,0,-0.0625,-0.0625,0,0,0,0,0.125))

They are symmetric and almost the same, with only a few differing entries.
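
In case it helps, one numerical route (a minimal sketch, assuming the scipy stack; the solver choice and the random start are my own, and the problem is nonconvex, so there is no guarantee an exact C is found): stack the real and imaginary parts of C into one parameter vector and minimize the residual of both equations.

    # Numerical sketch: search for a complex 10x10 C minimizing the residual
    # of A = C B C^T and B = C A C^T, with A and B the arrays given above.
    import numpy as np
    from scipy.optimize import least_squares

    n = 10

    def residual(v, A, B):
        # Unpack the real parameter vector into a complex n x n matrix C.
        C = (v[:n*n] + 1j * v[n*n:]).reshape(n, n)
        R1 = C @ B @ C.T - A
        R2 = C @ A @ C.T - B
        r = np.concatenate([R1.ravel(), R2.ravel()])
        return np.concatenate([r.real, r.imag])

    # With A and B typed in as numpy arrays:
    # rng = np.random.default_rng(0)
    # sol = least_squares(residual, x0=rng.standard_normal(2*n*n), args=(A, B))
    # C = (sol.x[:n*n] + 1j * sol.x[n*n:]).reshape(n, n)
    # Several random restarts may be needed, since the problem is nonconvex.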

Thanks!

graphs – Power of adjacency matrix

Let $G$ be a weighted graph with a weight function $w \colon E(G) \longrightarrow \mathbb{R}^{+}$. Let $G'$ denote the weighted graph with adjacency matrix

$$A_{G'} = \sum_{i=0}^{k} (xA)^{i}$$

where $k$ is an integer and $x$ is a variable.

I do not understand what the matrix $A_{G'}$ is. Does it contain all walks of length $k$, or is it something else?
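
A small experiment may make this concrete (my own example, using the triangle graph $K_3$ and sympy): the coefficient of $x^i$ in the $(u,v)$ entry of $\sum_{i=0}^{k}(xA)^i$ counts the walks of length $i$ from $u$ to $v$, so $A_{G'}$ encodes all walks of length at most $k$, graded by the variable $x$.

    # Entry (0, 1) collects one walk of length 1, one of length 2,
    # three of length 3, and five of length 4 between vertices 0 and 1 of K3.
    import sympy as sp

    x = sp.symbols('x')
    A = sp.Matrix([[0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])   # adjacency matrix of the triangle K3

    k = 4
    A_Gp = sum(((x * A) ** i for i in range(1, k + 1)), sp.eye(3))
    print(sp.expand(A_Gp[0, 1]))  # x + x**2 + 3*x**3 + 5*x**4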

linear algebra – A problem about determinant and matrix

Suppose $a_{0},a_{1},a_{2}\in\mathbb{Q}$ are such that the following determinant is zero, i.e.

$$\left|\begin{array}{ccc}
a_{0} & a_{1} & a_{2} \\
a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\
a_{1} & a_{2} & a_{0}+a_{1}
\end{array}\right| = 0$$

Show that $a_{0}=a_{1}=a_{2}=0$.

I think it's equivalent to showing that the rank of the matrix is 0, and it's easy to show that the rank cannot be 1.

But I have no idea how to show that the case of rank 2 is impossible. Is there a better approach? Thanks.
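
Not a proof, but a quick symbolic check (my addition) gives the explicit cubic form in $a_0, a_1, a_2$ that has to vanish over $\mathbb{Q}$, which is the identity one then needs to analyze:

    # Expand the determinant symbolically; the claim is that the resulting
    # cubic has no nonzero rational roots.
    import sympy as sp

    a0, a1, a2 = sp.symbols('a0 a1 a2')
    M = sp.Matrix([[a0, a1,      a2],
                   [a2, a0 + a1, a1 + a2],
                   [a1, a2,      a0 + a1]])
    print(sp.expand(M.det()))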

r – How to create a matrix using a FOR loop?

I need to reproduce this matrix using a FOR loop:

 0 1 2 3 4 5
 1 0 1 2 3 4
 2 1 0 1 2 3
 3 2 1 0 1 2
 4 3 2 1 0 1
 5 4 3 2 1 0

I'm trying to use something like this, but I don't know why it doesn't work (mat_2 is the matrix):

    mat_2 <- matrix(1:5, by = 1, nrow = 6, ncol = 6)
    mat_2
    for (fila in 1:nrow(mat_2))
      for (columna in 1:ncol(mat_2))
        if (fila == columna) {
          mat_2[fila, columna] <- 0
        }
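
For reference, a corrected sketch (my own): matrix() has no by argument, and since the entry at (fila, columna) is just the distance abs(fila - columna), the diagonal needs no special case:

    # Fill a 6x6 matrix where entry (fila, columna) is |fila - columna|.
    mat_2 <- matrix(0, nrow = 6, ncol = 6)
    for (fila in 1:nrow(mat_2)) {
      for (columna in 1:ncol(mat_2)) {
        mat_2[fila, columna] <- abs(fila - columna)
      }
    }
    mat_2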
   

matrix – Invert parent transform (doesn’t work for combination of rotation and scale)

My problem

I’m working with Qt3D and my problem is almost exactly like this one:

https://stackoverflow.com/q/60995155/3405291

Suggested solution

A solution is suggested here:

https://stackoverflow.com/a/61315454/3405291

Understanding the solution

I have a problem understanding the suggested solution. Specifically:

The problem is that the QTransform node does not store the transformation as a general 4x4 matrix. Rather, it decomposes the matrix into three transformations that are applied in a fixed order:

S – a diagonal scaling matrix

R – the rotation matrix

T – translation

and then applies them to a point X in the order T * R * S * X.

So when the transformation on the parent is M = T * R * S, then the inverse on the child will be M^-1 = S^-1 * R^-1 * T^-1. Setting the inverse on the child QTransform will attempt to decompose it in the same way:

M^-1 = T_i * R_i * S_i = S^-1 * R^-1 * T^-1

That doesn't work because, in particular, S and R don't commute like this.

I don't understand the above assertions. Can anyone explain them to me?
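
For what it's worth, here is a small numerical illustration of the non-commutation (my own sketch, in 2D and ignoring translation): take a rotation R and a nonuniform scale S. The polar decomposition of (R S)^-1 has a symmetric factor that is no longer diagonal, so the inverse cannot be written as rotation-times-axis-aligned-scale, which is exactly the form QTransform's decomposition requires.

    # Demonstrate that S^-1 R^-1 is not (rotation) @ (diagonal scale).
    import numpy as np
    from scipy.linalg import polar

    theta = np.deg2rad(30)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([2.0, 0.5])      # nonuniform scale

    M = R @ S                    # parent transform without translation
    M_inv = np.linalg.inv(M)     # equals S^-1 @ R^-1

    U, P = polar(M_inv)          # M_inv = U @ P, U orthogonal, P symmetric
    print(np.round(P, 3))        # off-diagonal entries are nonzero, so P is
                                 # not an axis-aligned scale: the R*S form is lost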

linear algebra – Most accurate way to calculate matrix powers and matrix exponential for a positive semidefinite matrix

I need to numerically calculate the following forms for any $x\in\mathbb{R}^n$, possibly in Python:

  1. $x^T M^k x$, where $M\in\mathbb{R}^{n\times n}$ is a PSD matrix and $k$ can take quite large values, possibly up to the order of hundreds. I prefer $k$ to be a real number, but it is OK if it can only be an integer, if that makes a considerable difference in accuracy.
  2. Similarly, I am interested in $x^Te^{-tM}x$, where $t$ is a real value.

For case 1 I can either:

  • Use scipy.linalg.fractional_matrix_power to calculate $M^k$ and then evaluate $x^TM^kx$, or
  • Use scipy.linalg.svd to find the SVD of $M$ as $U\Lambda U^T$ and then evaluate the desired value using $x^TU\Lambda^k U^Tx$ (a sketch of this route follows the list), or
  • Finally, if $k$ is an integer, again based on the SVD, I can calculate $\|x^TM^{k/2}\|^2$.
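
Here is what the decomposition route looks like concretely (my own sketch; since $M$ is PSD, numpy.linalg.eigh gives the same decomposition as the SVD, and tiny negative eigenvalues are clipped so that non-integer $k$ stays well defined):

    import numpy as np

    def quad_form_power(M, x, k):
        """x^T M^k x for PSD M, via eigendecomposition."""
        w, U = np.linalg.eigh(M)
        w = np.clip(w, 0.0, None)   # guard against tiny negative round-off
        y = U.T @ x
        return (y * w**k) @ y       # sum_i w_i^k * y_i^2

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    M = B @ B.T                     # a random PSD test matrix
    x = rng.standard_normal(5)
    print(quad_form_power(M, x, 100.0))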

For case 2:

  • Again, I can use the off-the-shelf scipy.linalg.expm, or
  • I can compute the SVD of $M$, exponentiate the singular values, and evaluate $x^TU\exp(-t\Lambda)U^Tx$ (a sketch follows the list), or
  • Finally, since I am only interested in $x^Te^{-tM}x$, and not in $e^{-tM}$ itself, I can consider the Taylor expansion $x^Te^{-tM}x\approx \sum_{i=0}^{l} \frac{(-t)^{i}}{i!}x^TM^{i}x$ for some $l$ that controls the precision, where each $x^TM^ix$ can be calculated as in case 1.
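
A matching sketch for case 2 (again my own): once the decomposition is computed, every additional value of $t$ costs only a vector rescaling.

    import numpy as np

    def quad_form_expm(M, x, t):
        """x^T exp(-t M) x for symmetric PSD M, via eigendecomposition."""
        w, U = np.linalg.eigh(M)
        y = U.T @ x
        return (y * np.exp(-t * w)) @ y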

Can anybody tell me which is the most precise way to calculate either of these expressions, hopefully up to machine precision? Is it one of these methods, or are there better solutions out there? I would also be happy with references.

P.S. Not knowing whether here, stackoverflow, or math.stackexchange.com is the best place to share this question, I will be cross-posting it there with the same title and content.

tensorflow – Confusion matrix code wrong: val_accuracy nearly 99%, but confusion matrix gives bad results

I am getting ~99% validation accuracy, as seen in the results below, but the confusion matrix gives bad results: it gets only 410 correct among the 532 images in the validation set.
import os
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.applications import xception
from keras.layers import *
from keras.models import *
from keras.preprocessing import image

model = xception.Xception(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layers in model.layers:
    layers.trainable=False
    
flat1 = Flatten()(model.layers[-1].output)
class1 = Dense(256, activation='relu')(flat1)
output = Dense(1, activation='sigmoid')(class1)

model = Model(inputs = model.inputs, outputs = output)


model.compile(loss = 'binary_crossentropy', optimizer='adam', metrics = ['binary_accuracy', 'categorical_accuracy'])


train_datagen = image.ImageDataGenerator(
    rescale = 1./255,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    )

test_datagen = image.ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    '/Users/xd_anshul/Desktop/Research/Major/CovidDataset/Train',
    target_size = (224,224),
    batch_size = 10,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    '/Users/xd_anshul/Desktop/Research/Major/CovidDataset/Test',
    target_size = (224,224),
    batch_size = 10,
    class_mode='binary')

#model Fitting

hist = model.fit(
    train_generator,
    epochs=2,
    validation_data=validation_generator)
    
    
    
from sklearn.metrics import classification_report, confusion_matrix

Y_pred = model.predict_generator(validation_generator)  # steps=np.ceil(validation_generator.samples / validation_generator.batch_size), verbose=1, workers=0
y_pred = [np.argmax(Y_pred[i]) for i in range(validation_generator.samples)]

print('Confusion Matrix')
print(confusion_matrix(validation_generator.classes, y_pred))
print('Classification Report')
target_names = ['Covid', 'Normal']
print(classification_report(validation_generator.classes, y_pred, target_names=target_names))


Found 2541 images belonging to 2 classes.
Found 532 images belonging to 2 classes.
Epoch 1/2
255/255 [==============================] - 403s 2s/step - loss: 1.4669 - binary_accuracy: 0.9517 - categorical_accuracy: 1.0000 - val_loss: 0.0731 - val_binary_accuracy: 0.9944 - val_categorical_accuracy: 1.0000
Epoch 2/2
255/255 [==============================] - 434s 2s/step - loss: 0.2706 - binary_accuracy: 0.9866 - categorical_accuracy: 1.0000 - val_loss: 0.0774 - val_binary_accuracy: 0.9868 - val_categorical_accuracy: 1.0000

print('Confusion Matrix')
Confusion Matrix

print(confusion_matrix(validation_generator.classes, y_pred))
[[410   0]
 [122   0]]
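
A likely cause, with a sketch of a fix (my reading): with a single sigmoid output, np.argmax over a length-1 row is always 0, which is why the second column of the confusion matrix is empty; thresholding the probability avoids that. Also, flow_from_directory shuffles by default, so the validation generator needs shuffle=False for the predictions to line up with validation_generator.classes.

    # Assumes model and validation_generator from the code above, with the
    # validation generator created with shuffle=False.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    Y_pred = model.predict(validation_generator)   # or predict_generator on older Keras
    y_pred = (Y_pred.ravel() > 0.5).astype(int)    # threshold instead of argmax
    print(confusion_matrix(validation_generator.classes, y_pred))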

linear algebra – How to prove convergence of the entries of a matrix to its arithmetic mean under repeated 2D convolution with a Gaussian kernel?

I am applying 2D convolutions to a matrix $A$ using the method scipy provides in 'same' mode with symmetry at the boundaries. I am using a 3×3 Gaussian kernel:

$$k = \frac{1}{16}\left(\begin{array}{ccc}
1 & 2 & 1 \\
2 & 4 & 2 \\
1 & 2 & 1
\end{array}\right)$$

If I apply this repeatedly to my matrix $A$, its entries all eventually converge to the arithmetic mean of the matrix. Intuitively that's easy to understand, but I have a hard time seeing how to prove it (for any matrix $A$). Also, I noticed that it does not converge to the mean for some other kernels, so how would you determine convergence without just trying it out?
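
A quick numerical check (my own illustration, assuming scipy.signal.convolve2d with mode='same' and boundary='symm' is the method meant): repeated filtering squeezes all entries toward the initial mean. One way to frame a proof: with a symmetric kernel that sums to 1 and symmetric boundary handling, the iteration operator is a symmetric, doubly stochastic, primitive matrix, so Perron–Frobenius gives convergence to the uniform vector at the mean; kernels that break these properties need not converge to the mean.

    import numpy as np
    from scipy.signal import convolve2d

    k = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]]) / 16.0

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8))
    mean = A.mean()

    for _ in range(2000):
        A = convolve2d(A, k, mode='same', boundary='symm')

    print(mean, A.min(), A.max())   # min and max both approach the initial mean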