Please help me with this kind of problem. It’s all about linear programming.

You are an architect working on constructing an 80 square meter bungalow house situated on a 120 square meter lot. You’re contemplating hiring skilled and unskilled workers for your project. The number of skilled workers should be greater than or equal to the number of unskilled workers. You estimated that a group of 10 workers will be able to do the job in 6 months, and that every additional group of 10 workers will reduce the building time by a month. You estimated that 3 groups is the maximum number of groups you will hire. The monthly wage of a skilled worker is P12,400, while that of an unskilled worker is P10,000. How many workers of each type will you hire to minimize your labor cost?

a. What will be your objective function?

b. What are the constraints needed in the problem?

c. Determine the vertices of the feasible region.

d. What will be the minimum cost for labor?
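A minimal brute-force sketch in Python, under one reading of the problem (hire g groups of 10 workers for g = 1, 2, 3, so the job takes 6 - (g - 1) months; split the crew into s skilled and u unskilled workers with s + u = 10g and s >= u; labor cost is months times the total monthly wage). The variable names are illustrative, not part of the problem statement.

```python
# Enumerate every admissible crew instead of setting up a formal LP.
best = None
for g in range(1, 4):                  # number of 10-worker groups
    months = 6 - (g - 1)               # each extra group saves one month
    total = 10 * g
    for u in range(total + 1):         # u unskilled, s skilled
        s = total - u
        if s < u:                      # skilled must be >= unskilled
            continue
        cost = months * (12400 * s + 10000 * u)
        if best is None or cost < best[0]:
            best = (cost, g, s, u)

print(best)                            # (minimum cost, groups, skilled, unskilled)
```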

machine learning – Does Linear Discriminant Analysis perform dimensionality reduction before classification?

I’m trying to understand what exactly LDA does when used as a classifier. I’ve understood how the dimensionality reduction works, and I’ve understood that the classification task is carried out with the application of Bayes’ theorem, but I still can’t figure out whether LDA executes both operations when used as a classification algorithm.

Is it correct to say that LDA as a classifier performs dimensionality reduction by itself and then applies Bayes’ theorem for classification?

If that makes any difference, I’ve used LDA in Python from the sklearn library.
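For reference, a minimal sketch with scikit-learn (the built-in iris data is used purely as a stand-in): the same fitted estimator exposes predict(), which applies Bayes’ rule with class-conditional Gaussians sharing one covariance matrix, and, separately, transform(), which performs the dimensionality reduction onto at most n_classes - 1 discriminant axes.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

lda = LinearDiscriminantAnalysis()   # default solver='svd' supports transform()
lda.fit(X, y)

print(lda.predict(X[:5]))    # classification via the Gaussian/Bayes decision rule
print(lda.transform(X[:5]))  # projection onto the discriminant axes (dim. reduction)
```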

dg.differential geometry – Are all linear vector fields geodesible vector fields?

I had already asked this question on MSE; now I am asking it here on MO.

Assume that $A\in M_n(\mathbb{R})$ is a non-singular matrix.

Is the flow of the linear vector field $X'=AX$ a geodesible flow on $\mathbb{R}^n \setminus \{0\}$? Namely, is there a Riemannian metric on $\mathbb{R}^n \setminus \{0\}$
such that the trajectories of the linear vector field are unparametrized geodesics?

Remark: For $n=2$ the answer is affirmative, as we explain below:

Fact: A linear vector field associated to a non-singular $2 \times 2$ real matrix is a geodesible vector field on the punctured plane.

Proof:

Let $A$ be an invertible matrix. We denote by $X$ the linear vector field associated to $A$.

We consider two cases:

1) $A^2$ has no real eigenvalue.

2) $A^2$ has a real eigenvalue.

Case 1) In this case the linear vector field $Y$ associated to the matrix $A^{-1}$ is transverse to $X$ on the punctured plane and satisfies $(X,Y)=0$; this obviously implies that $X$ is a geodesible vector field.

Case 2) If $A^2$ has a real eigenvalue, then $A$ is similar to one of the following matrices:

$$\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix};\; \begin{pmatrix} a & \epsilon\\ 0 & a \end{pmatrix};\; \begin{pmatrix} 0 & b\\ -b & 0 \end{pmatrix}$$
For the first matrix the closed $1$-form $\psi = ax\,dx + by\,dy$ satisfies $\psi(X)>0$, so $X$ is a geodesible vector field. For the second matrix the $1$-form $\psi = ax\,dx + ay\,dy$ satisfies $\psi(X)>0$. For the third matrix the vector field is geodesible because we have a foliation of the punctured plane by closed curves.
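For instance, in the first case the inequality can be checked directly: writing $X=(ax,by)$, one has
$$\psi(X) = ax\cdot ax + by\cdot by = a^{2}x^{2} + b^{2}y^{2} > 0 \quad \text{on } \mathbb{R}^2 \setminus \{0\},$$
and $\psi = ax\,dx + by\,dy = d\bigl(\tfrac{ax^{2}+by^{2}}{2}\bigr)$ is indeed closed.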

The reason for the geodesibility in case 1 and for the three matrices in case 2 is discussed in the following post, which is essentially based on page 71 of “Geometry of Foliations” by Philippe Tondeur, Propositions 6.7 and 6.8:

Finding a 1-form adapted to a smooth flow

Please see also this related post:

Is every real matrix conjugate to a semi antisymmetric matrix?

linear algebra – A problem about determinant and matrix

Suppose $a_{0},a_{1},a_{2}\in\mathbb{Q}$ are such that the following determinant is zero, i.e.

$$\left|\begin{array}{ccc}
a_{0} & a_{1} & a_{2} \\
a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\
a_{1} & a_{2} & a_{0}+a_{1}
\end{array}\right| = 0$$

Show that $a_{0}=a_{1}=a_{2}=0$.

I think it’s equivalent to show that the rank of the matrix is 0, and it’s easy to show the rank cannot be 1.

But I have no idea how to show that the case of rank 2 is impossible. So is there any better idea? Thanks.
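In case it helps to see exactly what has to be ruled out, here is a quick symbolic sketch (sympy assumed available) that expands the determinant as a homogeneous cubic form in $a_0, a_1, a_2$:

```python
from sympy import Matrix, symbols, expand

a0, a1, a2 = symbols('a0 a1 a2')
M = Matrix([[a0, a1,      a2],
            [a2, a0 + a1, a1 + a2],
            [a1, a2,      a0 + a1]])
print(expand(M.det()))   # the cubic form; the claim is its only rational zero is (0, 0, 0)
```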

Why isn’t my proof valid? (system of linear equations)

The problem was: given a matrix $A \in M_{m\times n}$, let $b \in F^{m}_{col}$, and let
$y \in F^{n}_{col}$ be a solution of the system of equations $AX=b$.
Prove: every solution of the system of equations $AX=b$ can be represented as $y+x$, where $x \in F^{n}_{col}$ is a solution of the homogeneous system $AX=0$.

So my proof went like this:
Because $y \in F^{n}_{col}$ is a solution of the system of equations $AX=b$, we can conclude:
$$(1)\quad A \cdot y = b,$$
and because $x \in F^{n}_{col}$ is a solution of the system of equations $AX=0$, we can conclude:
$$(2)\quad A \cdot x = 0.$$
From (1) and (2) we get:
$$A \cdot y + A \cdot x = 0 + b.$$
According to the rules of matrix multiplication we get:
$$(3)\quad A \cdot (y + x) = 0 + b = b,$$
and therefore, according to (3), the vector $z = x + y$ is also a solution of the system of equations $AX=b$.

My instructor gave me only 3 points for this, saying that “it isn’t a valid proof”. Why is that? What’s the problem with it? And what should I say in order to appeal his decision?
Thanks in advance.

linear programming – When is an LP solution an ILP solution?

For many discrete problems, it’s natural to consider their continuous relaxations. A common case is when, instead of $x_i \in \{0, 1\}$, we allow $x_i \in [0, 1]$. In certain cases, the original problem is an integer linear programming (ILP) problem, and its relaxed version becomes a linear programming (LP) problem, which we can solve efficiently.

Questions: Are there common techniques which show that, for a particular problem:

  1. An LP solution will always be an ILP solution?
  2. There exists an LP solution that is also an ILP solution?
  3. A particular LP algorithm (e.g., the simplex method) finds an ILP solution?

By “solution”, I, of course, mean a vector on which the objective reaches its optimum.
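As an illustration of point 1, one classic situation (a minimal sketch, not a general recipe; the cost matrix below is made up): when the constraint matrix is totally unimodular, as for the assignment/bipartite-matching polytope, every vertex of the LP relaxation is integral, so solving the relaxation with a vertex-returning method already yields a 0/1 solution.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])      # cost[i][j] of assigning worker i to job j
n = cost.shape[0]
c = cost.ravel()                        # variable x[i*n + j] = "i assigned to j"

A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0    # each worker assigned exactly once
    A_eq[n + i, i::n] = 1.0             # each job filled exactly once
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(res.x.reshape(n, n))              # expected to come out as a 0/1 matrix
```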

algorithms – Proving that a preorder traversal of a rooted tree can be performed in linear time

Definition:

Let $T(V, E)$ be a rooted tree with root $r$.

If $T$ has no other vertices, then the root by itself constitutes the preorder traversal of $T$.

If $\lvert V \rvert > 1$, let $T_1, T_2, \dots, T_k$ denote the subtrees of $T$ from left to right. The preorder traversal of $T$ first visits $r$ and then traverses the vertices of $T_1$ in preorder, then the vertices of $T_2$ in preorder, and so on until the vertices of $T_k$ are traversed in preorder.

Question:

How does one prove, using the above definition, that a preorder traversal of a rooted tree $T(V, E)$ can be computed in $O(\lvert V \rvert)$ time? Since $T$ is a tree, $\lvert E \rvert = \lvert V \rvert - 1$, and so showing that a preorder traversal algorithm simply visits the vertices and edges of $T$ a constant number of times and does constant work on each visit would do it. Obviously this is true, but how does one prove this formally?
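For concreteness, a minimal iterative sketch (assuming the tree is given as a mapping from each vertex to its ordered list of children): every vertex is pushed onto and popped off the stack exactly once, and each child list is scanned exactly once, which is precisely the constant-work-per-vertex-and-edge accounting described above.

```python
def preorder(children, root):
    order = []
    stack = [root]
    while stack:
        v = stack.pop()
        order.append(v)                      # visit v before any of its subtrees
        stack.extend(reversed(children[v]))  # so the leftmost child is popped first
    return order

# example: root 0 with subtrees rooted at 1 and 2
print(preorder({0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}, 0))
# -> [0, 1, 3, 4, 2]
```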

linear algebra – Most accurate way to calculate matrix powers and matrix exponential for a positive semidefinite matrix

I need to numerically calculate the following forms for any $x\in\mathbb{R}^n$, possibly in Python:

  1. $x^T M^k x$, where $M\in\mathbb{R}^{n\times n}$ is a PSD matrix and $k$ can take quite large values, possibly up to the order of hundreds. I prefer $k$ to be a real number, but it is OK if it can only be an integer, if that makes a considerable accuracy difference.
  2. Similarly, I am interested in $x^T e^{-tM} x$, where $t$ is a real value.

For case 1 I can either:

  • Use scipy.linalg.fractional_matrix_power to calculate $M^k$ and then derive $x^T M^k x$, or
  • Use scipy.linalg.svd to find the SVD of $M$ as $U\Lambda U^T$ and then evaluate the desired value as $x^T U \Lambda^k U^T x$.
  • Finally, if $k$ is an integer, and again based on the SVD, I can calculate $\|x^T M^{k/2}\|^2$.

For case 2

  • Again, I can use the off-the-shelf scipy.linalg.expm on $-tM$ directly, or
  • I can do an SVD of $M$ and then go with $x^T U \exp(-t\Lambda) U^T x$.
  • Finally, since I am only interested in $x^T e^{-tM} x$, and not exactly $e^{-tM}$ itself, I can consider the Taylor expansion $x^T e^{-tM} x \approx \sum_{i=0}^{l} \frac{(-t)^i}{i!} x^T M^i x$ for some $l$ that controls the precision, and each $x^T M^i x$ can be calculated based on case 1.

Can anybody guide me on the most precise way to calculate either of these expressions, hopefully up to machine precision? Is it one of the methods above, or are there better solutions out there? I would also be happy with references.
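For what it’s worth, a minimal sketch of the eigendecomposition route with numpy (since $M$ is symmetric PSD, eigh yields the same factorization as the SVD): both quantities reduce to a weighted sum over the squared coordinates of $x$ in the eigenbasis, so neither $M^k$ nor $e^{-tM}$ is ever formed explicitly.

```python
import numpy as np

def quad_form_power(M, x, k):
    # x^T M^k x for symmetric PSD M; k may be any real number
    w, U = np.linalg.eigh(M)        # M = U diag(w) U^T
    w = np.clip(w, 0.0, None)       # guard against tiny negative round-off
    y = U.T @ x                     # coordinates of x in the eigenbasis
    return float(np.sum(w**k * y**2))

def quad_form_exp(M, x, t):
    # x^T exp(-t M) x via the same eigendecomposition
    w, U = np.linalg.eigh(M)
    y = U.T @ x
    return float(np.sum(np.exp(-t * w) * y**2))
```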

P.S. Not knowing whether here, Stack Overflow, or math.stackexchange.com is the best place to share this question, I will be cross-posting it there with the same title and content.

c++ – Predicting the trajectory of a Box2D physics body using both linear damping and gravity

I would like to calculate the position of a physics body after some time, in order to predict shot trajectories in my game.

I found a great answer here where Iter Ator provides an equation to calculate the actual velocity of a body after some time, given the time, the starting velocity, and the linear damping, but no gravity. I was able to integrate this solution to get a distance-travelled-over-time equation. The problem is that this doesn’t take gravity into account.

There is also this tutorial that has a lot of information about predicting a physics body’s trajectory, but it doesn’t take linear damping into account at all.

What I would love to have is a single combined equation that covers both linear damping AND gravity to predict a Box2D physics body’s actual velocity/position, but the math is too hard for me :(.

The problem seems to be that the physics body’s velocity is both accelerated by gravity and damped at the same time.
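Not a full answer, but here is a minimal sketch (written in Python for brevity) of the closed-form solution of the continuous-time model dv/dt = g - d*v, i.e. constant gravity plus linear damping. Box2D actually applies damping discretely each step (roughly v *= 1/(1 + dt*damping)), so treat this as an approximation of what the engine does, not its exact formula.

```python
import math

def predict(p0, v0, gravity, damping, t):
    """Position and velocity after time t under dv/dt = g - d*v.
    p0, v0, gravity are (x, y) tuples; damping is the scalar d >= 0."""
    d = damping
    if d == 0.0:                      # no damping: plain ballistic motion
        pos = tuple(p + v*t + 0.5*g*t*t for p, v, g in zip(p0, v0, gravity))
        vel = tuple(v + g*t for v, g in zip(v0, gravity))
        return pos, vel
    e = math.exp(-d * t)
    # v(t) = g/d + (v0 - g/d) * e^(-d t)      (g/d is the terminal velocity)
    # p(t) = p0 + (g/d) t + (v0 - g/d) (1 - e^(-d t)) / d
    vel = tuple(g/d + (v - g/d)*e for v, g in zip(v0, gravity))
    pos = tuple(p + (g/d)*t + (v - g/d)*(1 - e)/d for p, v, g in zip(p0, v0, gravity))
    return pos, vel

# example: fired from (0, 0) with velocity (10, 10), gravity (0, -10), damping 0.5
print(predict((0.0, 0.0), (10.0, 10.0), (0.0, -10.0), 0.5, 2.0))
```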