Fourth order tensor rotation – Mathematica Stack Exchange


How to reproduce this tensor calculation with Mathematica

The tensor operation shown in the red box is used in the textbook to prove that there are only 9 independent constants for orthotropic materials:

[Image: textbook excerpt showing the transformation highlighted in a red box]

I want to use MMA to reproduce the operation $C_{pqmn} = l_{ip}\, l_{jq}\, l_{km}\, l_{ln}\, C_{ijkl}$ (where $C_{ijkl}$ is the stiffness tensor), but currently I have no specific idea how to set it up. I will continue to update the question with further details.
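
For concreteness, here is a rough sketch (my own, not taken from the textbook) of how this index contraction might be written, assuming l is the 3×3 direction-cosine matrix and c is the 3×3×3×3 stiffness tensor; the function names are placeholders:

(* C_{pqmn} = l_{ip} l_{jq} l_{km} l_{ln} C_{ijkl}, written with TensorContract *)
rotateStiffness[c_, l_] := TensorContract[
   TensorProduct[l, l, l, l, c],
   {{1, 9}, {3, 10}, {5, 11}, {7, 12}}]

(* the same operation as an explicit sum, slower but easy to compare with the formula *)
rotateStiffnessSum[c_, l_] := Table[
   Sum[l[[i, p]] l[[j, q]] l[[k, m]] l[[s, n]] c[[i, j, k, s]],
    {i, 3}, {j, 3}, {k, 3}, {s, 3}],
   {p, 3}, {q, 3}, {m, 3}, {n, 3}]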

Additional details:

Details will be added …

What should I do to get a fourth order tensor with only two independent components?

I found this example in the documentation for SymmetrizedIndependentComponents. We can see that this array A has only four independent components:

A = {{{a, b}, {b, c}}, {{b, c}, {c, d}}};  (* a fully symmetric rank-3 array *)
sym = TensorSymmetry[A]                    (* detect the slot symmetry of A *)
SymmetrizedIndependentComponents[Dimensions@A, sym]  (* positions of the independent entries *)
SymmetrizedArrayRules[A, sym]              (* rules giving the independent entries *)

But when I apply the same method to a fourth-order tensor, I run into a problem: the tensor t should have only two independent components, but the result reports 36:

(* only t[[1, 1, 1, 1]] = a and t[[1, 1, 2, 2]] = b are nonzero *)
t = {{{{a, 0, 0}, {0, b, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}},
     {{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}},
     {{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}},
      {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}}}};
sym = TensorSymmetry[t]
Dimensions@t
SymmetrizedIndependentComponents[Dimensions@t, sym] // Length

What should I do to get the fourth order tensor with only two independent components?

SymmetrizedIndependentComponents[{3, 3, 3, 3},
   {{{2, 1, 3, 4}, -1}, {{3, 4, 2, 1}, 1}}] // Length
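
For comparison (my own addition, not taken from the documentation), the standard minor and major symmetries of an elastic stiffness tensor can be supplied as permutation generators, which yields the familiar count of 21 independent constants; the further reduction to 2 constants for an isotropic material reflects rotational invariance rather than a pure slot-permutation symmetry:

(* minor symmetries C_{ijkl} == C_{jikl} == C_{ijlk} and major symmetry C_{ijkl} == C_{klij} *)
elasticSym = {{{2, 1, 3, 4}, 1}, {{1, 2, 4, 3}, 1}, {{3, 4, 1, 2}, 1}};
SymmetrizedIndependentComponents[{3, 3, 3, 3}, elasticSym] // Length
(* expected: 21 *)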

Result of the continuum tensor product of Hilbert spaces

Suppose that with each number $\mu_1 \in \mathbb{R}$ we associate a Hilbert space $\mathcal{H}_{\mu_1}$ with a countable basis $|1\rangle_{\mu_1}$, $|2\rangle_{\mu_1}$, $|3\rangle_{\mu_1}$, $\ldots$ Analogously,

$\mathcal{H}_{\mu_2}$, $\mu_2 \in \mathbb{R}$, with basis $|1\rangle_{\mu_2}$, $|2\rangle_{\mu_2}$, $|3\rangle_{\mu_2}$, $\ldots$,

$\mathcal{H}_{\mu_3}$, $\mu_3 \in \mathbb{R}$, with basis $|1\rangle_{\mu_3}$, $|2\rangle_{\mu_3}$, $|3\rangle_{\mu_3}$, $\ldots$,

$\vdots$

And so on.

Could you please tell me whether we can formally construct the following space (a continuum tensor product of separable Hilbert spaces)?
$$
\mathcal{H} = \bigotimes_{k \in \mathbb{R}} \mathcal{H}_{\mu_k},
$$

Would it be a Hilbert space (obviously not a separable one)?

P.S. Yes, I know that we cannot enumerate all real numbers, and the chosen notation for the indices of $\mu$ is not really good, but I have not thought of anything better.

homological algebra – finitely generated module over a PID whose tensor product with itself is zero

Hello, I have the following doubt about this problem:

Show that if $A$ is a finitely generated module over a PID $\Lambda$ and $A \otimes_{\Lambda} A = 0$, then $A = 0$.

Here is what I have done so far. I am looking at the following short exact sequence:

$0 \rightarrow \mathrm{Tor}(A) \rightarrow A \rightarrow A/\mathrm{Tor}(A) \rightarrow 0$

We have that $A/\mathrm{Tor}(A)$ is a finitely generated torsion-free module over a PID, so $A/\mathrm{Tor}(A)$ is a free module, and that implies that the short exact sequence splits.

Therefore I have a morphism $A/\mathrm{Tor}(A) \rightarrow A$ such that the composite $A/\mathrm{Tor}(A) \rightarrow A \rightarrow A/\mathrm{Tor}(A)$ is the identity.

Now, tensoring with $A$, I get that the composition

$(A/\mathrm{Tor}(A)) \otimes A \rightarrow A \otimes A = 0 \rightarrow (A/\mathrm{Tor}(A)) \otimes A$ is also the identity.

It follows that $(A/\mathrm{Tor}(A)) \otimes A = 0$.

Since $A/\mathrm{Tor}(A) \cong \Lambda^{k}$, I get that $A^{k} = 0$.
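
In one line, what I have so far is
$$
(A/\mathrm{Tor}(A)) \otimes_{\Lambda} A \;\cong\; \Lambda^{k} \otimes_{\Lambda} A \;\cong\; A^{k} \;=\; 0,
$$
where the splitting forces $(A/\mathrm{Tor}(A)) \otimes_{\Lambda} A$ to vanish, since it is a retract (direct summand) of $A \otimes_{\Lambda} A = 0$.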

However, I don't know how to proceed from here and I'm stuck, so any hint?

Functional analysis – Numerical range of the tensor product of two matrices

Let $T \in M_n$. Is the following true:
$$\bigcap_{B \in M_2,\ \mathrm{tr}(B) = 0} \left\{ X \in M_2 : W(X) \subseteq W(T) \text{ and } W(B \otimes X) \subseteq W(B \otimes T) \right\} \subseteq \bigcap_{B \in M_2} \left\{ X \in M_2 : W(B \otimes X) \subseteq W(B \otimes T) \right\}?$$ Here $W(S) := \{ \langle Sx, x \rangle : \lVert x \rVert = 1 \}$ is the numerical range of $S$.

Remarks: I checked the above in MATLAB with a specific choice of $T, B$, and the inclusion turns out to hold in that case. Then I tried to prove the statement in the following way:

  • We know that $W(X) \subseteq W(T)$ iff the map $\varphi: \mathrm{span}\{I, T, T^*\} \rightarrow \mathrm{span}\{I, X, X^*\}$ defined by $\varphi(aI + bT + cT^*) = aI + bX + cX^*$, where $a, b, c \in \mathbb{C}$, is positive. By the hypothesis we have a positive map $\psi: \mathrm{span}\{I_2 \otimes I\} + \mathrm{span}\{B_{\circ}, B_{\circ}^*\} \otimes \mathrm{span}\{T, T^*\} \rightarrow M_4$ such that $\psi(I_2 \otimes I + B \otimes T + B^* \otimes T^*) = I_2 \otimes I_2 + B \otimes X + B^* \otimes X^*$ for every $B \in M_2$ with $\mathrm{tr}(B) = 0$, where $B_{\circ} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.
    Now I have no idea whether the map $\varphi$ can be extended to a positive map on $\mathrm{span}\{I, B_{\circ}, B_{\circ}^*\} \otimes \mathrm{span}\{I, T, T^*\}$ to obtain the desired result.

It may be wrong, but I have no counterexample yet. Thank you in advance. Any comment is greatly appreciated.
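
For anyone who wants to repeat the numerical check in Mathematica rather than MATLAB, here is a rough sketch (the function name and sample size are arbitrary choices of mine) that samples points of $W(S) = \{ \langle Sx, x \rangle : \lVert x \rVert = 1 \}$ using random unit vectors; comparing such samples for $B \otimes X$ and $B \otimes T$ only gives evidence for the inclusion, not a proof:

(* sample points of the numerical range of a square matrix s with random unit vectors *)
numericalRangeSample[s_?MatrixQ, nSamples_Integer: 5000] :=
  Module[{d = Length[s]},
   Table[
    With[{x = Normalize[
        RandomVariate[NormalDistribution[], d] +
         I RandomVariate[NormalDistribution[], d]]},
     Conjugate[x].(s.x)],
    {nSamples}]]

(* example: points of W(B ⊗ T) for a trace-zero B and a random 3 x 3 matrix *)
b = {{0, 1}, {0, 0}};
tMat = RandomComplex[{-1 - I, 1 + I}, {3, 3}];
pts = numericalRangeSample[KroneckerProduct[b, tMat]];
ListPlot[ReIm[pts], AspectRatio -> Automatic]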

machine learning – how does the BERT model (in the TensorFlow or PaddlePaddle frameworks) relate to the nodes of the underlying neural network that is being trained?

The BERT model in frameworks such as TensorFlow / PaddlePaddle is displayed as a graph of different types of computation nodes (subtract, accumulate, add, multiply, etc.) arranged in 12 layers.

However, this graph does not look like the neural networks typically shown in textbooks (e.g. https://en.wikipedia.org/wiki/Artificial_neural_network#/media/File:Colored_neural_network.svg), in which each edge is a weight being trained and there are distinct input and output layers.

When I print out the BERT graph, I cannot figure out how a node in that graph relates to a node of the neural network being trained.

I have compiled the BERT framework models into a form that can run on a PC / CPU. But I am still missing the fundamental point of how BERT relates to a neural network, since I cannot see which network topology is being trained (I would expect the topology and the connections between the different layers and nodes of the network to determine how the training proceeds).

Could someone explain which underlying neural network is being trained by BERT? How do nodes in the BERT graph relate to neural network nodes and to the weights on the network's edges?