linear algebra – Problem finding the intersection of two vector spaces

Both subspaces are planes. The planes are not parallel and so they intersect in a common line. Use the ansatz

$x+2y-z = 1 = -x+z$; subtracting one equation from the other gives $2x+2y-2z=0$.
Points satisfying both equations are $A(1,1,2)$ and $B(0,1,1)$. The line through these points is the desired intersection:

$$x = (1,1,2) + r\cdot (0-1,1-1,1-2) = (1,1,2) + r\cdot (-1,0,-1).$$
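As a quick numerical check (a small sketch using numpy), every point of the parametrized line satisfies the combined plane equation $2x+2y-2z=0$:

```python
import numpy as np

# Quick check: every point of the line through A and B satisfies the
# combined plane equation 2x + 2y - 2z = 0.
A = np.array([1.0, 1.0, 2.0])
B = np.array([0.0, 1.0, 1.0])
d = B - A                                   # direction vector (-1, 0, -1)

for r in np.linspace(-3.0, 3.0, 13):
    x, y, z = A + r * d
    assert abs(2 * x + 2 * y - 2 * z) < 1e-12

print(d)  # [-1.  0. -1.]
```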

algebra precalculus – Solving these two equations and trying to get $Q_1$ and $Q_2$


linear algebra – Condition for a family of elements of an affine space to be affinely independent

Let $E$ be an affine space attached to a $K$-vector space $T$. For a family $(x_i)_{i\in I}$ of elements of $E$ and any $a\in E$, the set
$$\left\{\,\sum_{i\in I}\lambda_i(x_i-a)+a \;\middle|\; \lambda\in K^{(I)}\land\sum_{i\in I}\lambda_i=1\,\right\}$$
is the affine linear variety generated by $(x_i)_{i\in I}$.

Let $(a_i)_{i\in I}$ be a nonempty family of elements of $E$ and $k\in I$. This family is said to be affinely independent if and only if the family $(a_i-a_k)_{i\ne k}$ is linearly independent in $T$.
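As a concrete illustration of this definition (my own example, taking $E=T=\mathbb{R}^2$ and $K=\mathbb{R}$), three points are affinely independent exactly when the difference vectors relative to one of them are linearly independent:

```python
import numpy as np

# Illustration in R^2: affine independence of points via linear
# independence of the differences relative to one chosen point.
a0 = np.array([0.0, 0.0])
a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])

diffs = np.column_stack([a1 - a0, a2 - a0])
rank_indep = np.linalg.matrix_rank(diffs)       # 2: affinely independent

b2 = np.array([2.0, 0.0])                       # collinear with a0 and a1
rank_dep = np.linalg.matrix_rank(np.column_stack([a1 - a0, b2 - a0]))  # 1

print(rank_indep, rank_dep)  # 2 1
```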

Now, suppose $(a_i)_{i\in I}$ is a nonempty family of elements of $E$. Suppose that, for every $k\in I$, $a_k$ does not belong to the affine variety generated by $(a_i)_{i\ne k}$. I want to show that this implies that $(a_i)_{i\in I}$ is affinely independent.


Since $I\ne\emptyset$, let $k\in I$. According to the above, it is sufficient to show that the family $(a_i-a_k)_{i\ne k}$ is linearly independent in $T$. So, let $\lambda\in K^{(I\setminus\{k\})}$ be such that $\sum_{i\ne k}\lambda_i(a_i-a_k)=0$. Then $\sum_{i\ne k}\lambda_i(a_i-a_k)+a_k=a_k$. Since $a_k$ does not lie in the affine variety generated by $(a_i)_{i\ne k}$, this implies that $\sum_{i\ne k}\lambda_i\ne 1$. But how can this help me show that $\lambda=0$?

linear algebra – The optimality of Kalman filtering

It is known that the Kalman filter recursively estimates the state of the following system:
$$x_{k+1}=Ax_k+w_k,\qquad w_k \sim \mathcal{N}(0,Q)$$
$$y_k=Cx_k+v_k,\qquad v_k \sim \mathcal{N}(0,W)$$

In the case of Gaussian process and measurement noise, suppose $P_k$ is the posterior estimation-error covariance matrix of the Kalman filter and $\tilde{P}_k$ is the estimation-error covariance matrix of an arbitrary (possibly nonlinear) estimator. Since the Kalman filter is an MMSE estimator, we know

$$\operatorname{Trace}(\tilde{P}_k-P_k) \geq 0.$$

Now I wonder whether a stronger conclusion holds, i.e.,

$$\tilde{P}_k-P_k \succeq 0,$$

which means that $\tilde{P}_k-P_k$ is always positive semidefinite. For an arbitrary linear filter I know this is true, but I am not sure for an arbitrary nonlinear estimator. Any reference paper is appreciated.
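The linear-filter case granted in the question is easy to check numerically. The sketch below (with illustrative $A$, $C$, $Q$, $W$ and an arbitrary fixed gain, all my own choices) runs the covariance recursion for the Kalman gain and for a suboptimal fixed gain and verifies that the difference is positive semidefinite; it says nothing about the nonlinear case, which is the open part of the question.

```python
import numpy as np

# Illustrative model matrices (not taken from the question).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)    # process noise covariance
W = np.array([[0.5]])  # measurement noise covariance

def covariance_step(P, K=None):
    """One predict/update step of the error-covariance recursion.
    With K=None the optimal (Kalman) gain is used; otherwise the given
    fixed gain. The Joseph form is valid for any gain."""
    Pp = A @ P @ A.T + Q                      # predicted covariance
    if K is None:
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + W)
    I = np.eye(2)
    return (I - K @ C) @ Pp @ (I - K @ C).T + K @ W @ K.T

P_opt = np.eye(2)                             # Kalman filter covariance
P_sub = np.eye(2)                             # fixed-gain (suboptimal) filter
K_sub = np.array([[0.3], [0.1]])              # arbitrary stabilizing gain
for _ in range(50):
    P_opt = covariance_step(P_opt)
    P_sub = covariance_step(P_sub, K_sub)

# For this *linear* competitor, P_sub - P_opt is positive semidefinite.
print(np.linalg.eigvalsh(P_sub - P_opt).min() >= -1e-9)  # True
```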

linear algebra – Why is this special row of a matrix always independent of the others?

We are given 3 functions:

$$\begin{aligned}
f(x) &= x \\
g(x) &= x^{c_1} + x^{c_2} \\
h(x) &= x^{c_3} + x^{c_4}
\end{aligned}$$

We are also given a matrix whose columns are of the following form:

$$f(a^{\text{row}})^{u}\, g(a^{\text{row}})^{v}\, h(a^{\text{row}})^{w}$$

where $u$, $v$, and $w$ are randomly chosen, but fixed within each column. For example, column 3 might be:

$$\begin{pmatrix}
f(a^{1})^{2}\, g(a^{1})^{3}\, h(a^{1})^{5} \\
f(a^{2})^{2}\, g(a^{2})^{3}\, h(a^{2})^{5} \\
f(a^{3})^{2}\, g(a^{3})^{3}\, h(a^{3})^{5} \\
\vdots \\
f(a^{m})^{2}\, g(a^{m})^{3}\, h(a^{m})^{5}
\end{pmatrix}$$

My question is, if there is only one column consisting of strictly powers of $f(x)$ (i.e. $f(x)^2g(x)^0h(x)^0$), why is this column always independent of the other columns? For example, the column

$$\begin{pmatrix}
f(a^{1})^{2} \\
f(a^{2})^{2} \\
f(a^{3})^{2} \\
\vdots \\
f(a^{m})^{2}
\end{pmatrix}$$

will always be independent of the other columns, assuming that all other columns contain at least one power of either $g(x)$ or $h(x)$.

In other words, the column whose entries are not products of sums, but instead strictly powers of $a$, will always be independent of the other columns.


I thought that perhaps the key to the argument is the fact that the special column is always a column of a Vandermonde matrix, which is known to have full rank (when the nodes $a^{i}$ are distinct). Vandermonde matrices have the form:

$$\begin{pmatrix}
a^{1 \cdot 1} & a^{1 \cdot 2} & a^{1 \cdot 3} & \dots & a^{1 \cdot n} \\
a^{2 \cdot 1} & a^{2 \cdot 2} & a^{2 \cdot 3} & \dots & a^{2 \cdot n} \\
a^{3 \cdot 1} & a^{3 \cdot 2} & a^{3 \cdot 3} & \dots & a^{3 \cdot n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a^{m \cdot 1} & a^{m \cdot 2} & a^{m \cdot 3} & \dots & a^{m \cdot n}
\end{pmatrix}$$
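As a sanity check on the full-rank claim, here is a small numerical example (my own sketch; the base $a=2$ and the sizes $m=5$, $n=4$ are arbitrary illustrative choices, and full rank requires the nodes $a^{r}$ to be distinct):

```python
import numpy as np

# Vandermonde-type matrix M[r, c] = a**(r*c) with 1-based indices,
# built with arbitrary small parameters for illustration.
a, m, n = 2.0, 5, 4
M = np.array([[a ** ((r + 1) * (c + 1)) for c in range(n)]
              for r in range(m)])
print(np.linalg.matrix_rank(M))  # 4
```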

soft question – Latest “A Term of Commutative Algebra” by Altman and Kleiman?

(Moderator, please turn this question into a community wiki. I’ll post my answer soon. TIA.)

Where can I find the latest revision of “A Term of Commutative Algebra” by Allen B. Altman and Steven L. Kleiman? Is my 2013 version OK?

It is hard to locate the latest one; many old revisions and pointers to them are randomly scattered across the web.

This free textbook is intended as an update of, and an improvement on, “A & M”, i.e. Introduction to Commutative Algebra by Atiyah and MacDonald.

abstract algebra – What primes factor both $2$ and $3$, with no restriction on the domain?

What numbers (especially primes) factor both $2$ and $3$ (with no restriction on the domain)? How many answers are there? I’m looking preferably for a prime.

My attempt:

No natural number, obviously: $2$ and $3$ have no divisors other than $1$ and themselves, so their only common divisor in $\mathbb{N}$ is the unit $1$.

Ignore rationals because there are no good answers there.

I had a think about the Gaussian integers and figured $2,3$ would have to be in unit ratio with each other, which they are not, so I think that rules that out.

That got me thinking that if $2,3$ are primes then I guess they could only be factored by units, so either they are not primes in the domain or there would be a unit ratio between them.

Then I looked at the finite field with four elements. It looks like we can reasonably assign $2$ and $3$ to elements, each of which factors the other, but that’s about as far as I can generalise — and since they are each factored by the other, I guess they can’t be considered prime in this context.
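The Gaussian-integer observation can be sanity-checked by brute force: $z=a+bi$ divides $n$ exactly when the norm $a^2+b^2$ divides both $na$ and $nb$. A small search (my own sketch) finds only units dividing both $2$ and $3$:

```python
# Brute-force search for common divisors of 2 and 3 in Z[i].
def divides(z, n):
    # z = a + bi divides n iff n/z = n*(a - bi)/(a^2 + b^2) is in Z[i]
    a, b = z
    norm = a * a + b * b
    return (n * a) % norm == 0 and (n * b) % norm == 0

common = sorted((a, b) for a in range(-3, 4) for b in range(-3, 4)
                if (a, b) != (0, 0)
                and divides((a, b), 2) and divides((a, b), 3))
print(common)  # [(-1, 0), (0, -1), (0, 1), (1, 0)] -- only the units
```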

linear algebra – A problem about determinant and matrix

Suppose $a_{0},a_{1},a_{2}\in\mathbb{Q}$ are such that the following determinant is zero, i.e.

$$\left|\begin{array}{ccc}
a_{0} & a_{1} & a_{2} \\
a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\
a_{1} & a_{2} & a_{0}+a_{1}
\end{array}\right| = 0.$$

Show that $a_{0}=a_{1}=a_{2}=0$.

I think it’s equivalent to showing that the matrix has rank $0$, and it’s easy to show the rank cannot be $1$.

But I have no idea how to show that the case of rank $2$ is impossible. Is there a better idea? Thanks.
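A quick sanity check (my own brute force, not a proof): expanding the determinant by cofactors and searching for rational zeros among small fractions turns up only the trivial one, consistent with the claim.

```python
from fractions import Fraction
from itertools import product

def det(a0, a1, a2):
    # Cofactor expansion of the 3x3 determinant from the question
    return (a0 * ((a0 + a1) * (a0 + a1) - (a1 + a2) * a2)
            - a1 * (a2 * (a0 + a1) - (a1 + a2) * a1)
            + a2 * (a2 * a2 - (a0 + a1) * a1))

vals = sorted({Fraction(n, d) for n in range(-2, 3) for d in (1, 2, 3)})
zeros = [t for t in product(vals, repeat=3) if det(*t) == 0]
print(zeros)  # expect only the trivial zero (0, 0, 0)
```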

algebra precalculus – How to find the time for a treatment when counting the difference of pills taken?

The problem is as follows:

Louis took three and a half type $A$ pills every twelve hours and half a
type $B$ pill every $6$ hours. He did this until the difference in the
number of pills taken was $25$. If he starts taking both types of pills
together, how long does the treatment last and how many pills had he
taken in total?

The alternatives given in my book are as follows:

1. 4.75 days and 44 pills
2. 4.75 days and 45 pills
3. 3.75 days and 43 pills
4. 4.5 days and 45 pills

How exactly should I solve this problem?

What I’ve attempted so far is to use a formula based on the fact that the number of pills taken can be found by dividing the total time by the interval between doses and adding $1$, which accounts for the initial dose and avoids an off-by-one error.

Thus the labels are as follows:

$t_{1}$: total elapsed time

$t_{2}$: interval time between doses

total of $A$ pills: $a$

total of $B$ pills: $b$




Replacing with the given information:




Then solving this yields:


But this number is not an integer, and it doesn’t seem to help me get the requested time. Thus I need help with the right approach for this question. Can someone help me here? A wordy answer would help me a lot, so I can understand what is going on.
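Under one reading of the statement (3.5 type-$A$ pills per 12-hour dose and half a type-$B$ pill per 6-hour dose, with both counts including the initial dose, as the $+1$ in the formula above suggests — this reading is my assumption), a brute-force scan over the 6-hour marks gives:

```python
# Scan 6-hour marks for up to 30 days, counting pills taken so far
# (assumes 3.5 A-pills at t = 0, 12, 24, ... h and 0.5 B-pills at
# t = 0, 6, 12, ... h, each count including the initial dose).
for T in range(0, 24 * 30, 6):
    a = 3.5 * (T // 12 + 1)   # type-A pills taken so far
    b = 0.5 * (T // 6 + 1)    # type-B pills taken so far
    if a - b == 25:
        break

print(T / 24, a + b)  # 4.75 45.0
```

This lands on $4.75$ days and $45$ pills, which matches one of the listed alternatives; the difference at whole 12-hour marks is $2.5n+3$, which explains the non-integer result above.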