## lo.logic – Prenex form converse to implication in Linear Programming?

Given an implication $$P \implies Q$$, its contrapositive is $$\neg Q \implies \neg P$$ and its negation is $$P \wedge \neg Q$$.

1. What is the negation of $$P \implies Q$$ in the context of Linear Programming?

2. Can the negation be written as an implication statement?

In my case I have $$A(x,r)' \leq b \implies r < 0$$, where $$A$$ is a rational matrix, $$r \in \mathbb{R}$$, $$x$$ is a vector, and $$b$$ is a vector.

The negation is $$r \geq 0 \wedge A(x,r)' \leq b$$.

Motivation:

I have a quantified Linear Program:

$$\forall x \in \mathbb{R}^m$$

$$\exists r \in \mathbb{R}$$

$$A(x,r)' \leq b \implies r < 0.$$

I want to write a converse statement, which (to be in proper PRENEX form) should be of the form:

$$\exists x \in \mathbb{R}^m$$

$$\forall r \in \mathbb{R}$$

$$\{r \geq 0 \wedge A(x,r)' \leq b\} \vee \{\text{“Something”}\}.$$

What is the “Something” that puts the statement in PRENEX form?

3. Can we give the converse as an implication statement by removing the $$\vee$$?

Is the “Something” $$r < 0$$?

That does not make sense, since the implication statement would become

$$\neg\{r < 0\} \implies \{r \geq 0 \wedge A(x,r)' \leq b\},$$

and clearly $$\neg\{r < 0\} \equiv \{r \geq 0\}$$ has nothing to do with $$A(x,r)' \leq b$$.
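For reference, pushing the negation through the quantifiers is a purely first-order step, with nothing specific to Linear Programming:

$$\neg\big[\forall x\, \exists r\, (P \implies Q)\big] \equiv \exists x\, \forall r\, \neg(P \implies Q) \equiv \exists x\, \forall r\, (P \wedge \neg Q),$$

so with $$P \colon A(x,r)' \leq b$$ and $$Q \colon r < 0$$, the negated program $$\exists x\, \forall r\, (r \geq 0 \wedge A(x,r)' \leq b)$$ is already in prenex form, with no extra disjunct needed.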

## How to traverse the order relationship between string elements in linear time

I have a string, and I want to get the order relationships between its elements in linear time. For example, for the string "abcacbad", the order relationships are:

ab, ac, ad, bc, ba, bd, ca, cb, cd
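One reading of the example (an assumption on my part, since the question does not define the relation precisely) is that the pair xy is listed whenever some occurrence of x precedes some occurrence of y in the string. Under that reading, a linear-time sketch in Python only needs the first and last occurrence of each character:

```python
def ordered_pairs(s):
    # Record the first and last occurrence index of each character
    # in a single pass over the string.
    first, last = {}, {}
    for i, ch in enumerate(s):
        first.setdefault(ch, i)
        last[ch] = i
    # Pair (x, y) holds iff some occurrence of x precedes some y,
    # i.e. iff first[x] < last[y].
    return sorted(x + y for x in first for y in last
                  if x != y and first[x] < last[y])

print(ordered_pairs("abcacbad"))
# ['ab', 'ac', 'ad', 'ba', 'bc', 'bd', 'ca', 'cb', 'cd']
```

The scan is O(n), and the pair enumeration is O(σ²) for alphabet size σ, so the whole thing is linear for a fixed alphabet; the output matches the example's nine pairs as a set.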

## linear algebra – basis for subspace

Suppose $$S = \{u_1, u_2, u_3\}$$ is a basis for a subspace $$V \subseteq \mathbb{R}^4$$. Let $$v$$ be a vector in $$V$$. Suppose that the RREF of $$[u_1, u_2, u_3 \mid v]$$ is $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

What is $$(v)_S$$, the coordinate vector of $$v$$ with respect to $$S$$?

1. Not enough information

2. (1; -2; 0)

Am I right in choosing option 2?
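The mechanics can be sanity-checked numerically with a made-up basis (the actual $$u_i$$ are not given in the question): build a $$v$$ whose coordinates relative to the basis are $$(1, 2, 0)$$, the augmented column of the given RREF, and verify that solving $$Uc = v$$ recovers exactly that column.

```python
import numpy as np

# Hypothetical basis for a 3-dimensional subspace of R^4
# (assumed only for illustration; the question's u_i are unknown).
u1 = np.array([1., 0., 1., 0.])
u2 = np.array([0., 1., 0., 1.])
u3 = np.array([0., 0., 1., 1.])
U = np.column_stack([u1, u2, u3])

# A vector in the subspace with coordinates (1, 2, 0) in this basis.
v = 1 * u1 + 2 * u2 + 0 * u3

# Solving U c = v recovers the coordinate vector, exactly as the
# augmented column of the RREF of [u1 u2 u3 | v] does.
c, *_ = np.linalg.lstsq(U, v, rcond=None)
print(np.round(c, 6))  # recovers the coordinates (1, 2, 0)
```

So whichever option is correct must match the augmented column of the RREF, signs included.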

## statistics – Is the formula for standard error for the slope of a linear regression with intercept the same as without?


## microsoft excel – Factoring business growth into linear regression

I am looking to run a fairly basic linear regression in Excel, with the product's price as the independent variable and quantity sold as the dependent variable. I have 5 years of sales data, with each row showing month-over-month growth.

The issue is that our company has been in a steady state of growth over that time: 50%, 30%, 40%, and 25% from 2015 to 2019.

Is there any way to factor this organic growth into the regression?
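One common approach is to add a time-trend column as a second regressor, so the trend term absorbs the organic growth and the price coefficient is not biased by it; in Excel, LINEST with two predictor columns does this. A sketch in Python with synthetic data (assumed only for illustration):

```python
import numpy as np

# Synthetic monthly data: quantity depends on price, plus a steady
# upward trend standing in for organic business growth.
rng = np.random.default_rng(0)
n = 60                                   # 5 years of monthly rows
months = np.arange(n).astype(float)      # time-trend regressor
price = 10 + rng.normal(0, 1, n)
qty = 500 - 8 * price + 4 * months + rng.normal(0, 5, n)

# Multiple regression: qty ~ intercept + price + trend.
# Without the trend column, the growth would leak into the
# price coefficient.
X = np.column_stack([np.ones(n), price, months])
coef, *_ = np.linalg.lstsq(X, qty, rcond=None)
print(coef)  # roughly [500, -8, 4]
```

The same idea works with year dummies instead of a linear trend if the growth rates differ sharply from year to year, as the 50%/30%/40%/25% figures suggest.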

Thanks

## Vector space isomorphism for the set of linear maps

Let $$\mathcal{L}(\mathbb{R}^m; \mathbb{R}^n)$$ be the set of linear maps $$f \colon \mathbb{R}^m \rightarrow \mathbb{R}^n$$.

How do I show that $$\mathcal{L}(\mathbb{R}^m; \mathbb{R}^n)$$ and $$\mathbb{R}^{mn}$$ are isomorphic vector spaces (without the dimension argument)?
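One explicit isomorphism, avoiding any dimension count, sends a map to its values on the standard basis (equivalently, to the entries of its standard matrix); a sketch:

$$\Phi \colon \mathcal{L}(\mathbb{R}^m; \mathbb{R}^n) \to \mathbb{R}^{mn}, \qquad \Phi(f) = \big(f(e_1), \dots, f(e_m)\big),$$

where $$e_1, \dots, e_m$$ is the standard basis of $$\mathbb{R}^m$$ and the $$m$$ image vectors are stacked into one $$mn$$-tuple. $$\Phi$$ is linear because evaluation at each $$e_j$$ is linear in $$f$$; it is injective because a linear map vanishing on a basis is zero; and it is surjective because any choice of vectors $$v_1, \dots, v_m \in \mathbb{R}^n$$ extends uniquely to a linear map via $$f\big(\sum_j x_j e_j\big) = \sum_j x_j v_j$$.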

## linear algebra – A question about implementation of Farkas lemma

The Farkas Lemma: Let $$A$$ be an $$m \times n$$ matrix and $$b \in \mathcal{R}^m$$. Then exactly one of the following two assertions is true:
(1) There exists an $$x \in \mathcal{R}^n$$ such that $$Ax = b$$ and $$x \geq 0$$.
(2) There exists a $$y \in \mathcal{R}^m$$ such that $$A^T y \geq 0$$ and $$b^T y < 0$$.

I want to check which assertion is true for a given $$b$$, so I constructed the linear programming problem according to the second statement:
$$\min\ b^T y \quad \text{s.t.} \quad -A^T y \leq 0.$$
The idea is simple: if the optimal value $$b^T y$$ is greater than or equal to $$0$$, then the first assertion is true; otherwise the second one is true.

But the following MATLAB implementation always gives me the result $$y = 0$$. Something is wrong, but I have no idea what. I would be thankful for any help or references.

f = b;                          % objective: minimize b'*y
Aineq = -A';                    % -A'*y <= 0; A' is n-by-m, so the
b_ineq = zeros(n, 1);           % right-hand side needs n rows, not m
y = linprog(f, Aineq, b_ineq, [], [], [], []);
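For comparison, here is a sketch of the same check in Python with SciPy (assumed available). Note that $$y = 0$$ is always feasible for this LP, so its optimal value is either $$0$$ (assertion 1 holds) or unbounded below (assertion 2 holds); the solver's status flag distinguishes the two cases.

```python
import numpy as np
from scipy.optimize import linprog

def farkas_case(A, b):
    """Return 1 or 2 for the Farkas alternative that holds (a sketch).

    Solves min b^T y subject to -A^T y <= 0 with y free. Since y = 0 is
    always feasible, the optimum is either 0 (case 1) or the problem is
    unbounded below (case 2, any unbounded ray gives a certificate y).
    """
    m, n = A.shape
    res = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(n),
                  bounds=[(None, None)] * m)  # y must be free, not >= 0
    return 1 if res.status == 0 else 2

A = np.array([[1.0, 0.0], [0.0, 1.0]])
print(farkas_case(A, np.array([1.0, 1.0])))   # b in the cone of A's columns: 1
print(farkas_case(A, np.array([-1.0, 0.0])))  # b outside the cone: 2
```

A pitfall worth flagging: `scipy.optimize.linprog` defaults to the bounds $$y \geq 0$$, so the `bounds` argument must be passed explicitly, just as the MATLAB version must leave `lb` empty.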


## ag.algebraic geometry – Sections of (affine) toric varieties by linear subspaces

I am looking for a reference that expands on the following statement:

“Sections of toric varieties by linear subspaces defined by coordinates or differences of coordinates are binomial schemes.” (page 2 in “Binomial Ideals” by Eisenbud, Sturmfels, 1994).

Is there a source that expands on this point?

Are there other sources working out the sections of toric varieties by linear subspaces? I am looking both for explicit work akin to (Eisenbud, Sturmfels) as well as sources using intersection theory.

My motivation is to understand the linear subspaces contained in toric varieties arising in toric dynamical systems (Craciun, Gheorghe, et al. “Toric dynamical systems.” Journal of Symbolic Computation 44.11 (2009): 1551-1565.) because they have biophysical significance.

## linear algebra – Definition of smooth point and its practical application

My book defines a smooth point of the boundary of a subset of a manifold as follows. Let $$X \subset M \subset \mathbb{R}^n$$, where $$M$$ is a $$k$$-dimensional manifold. Consider a point $$\vec{x} \in \partial_M X$$, the boundary between $$X$$ and $$M$$. This point is a smooth point if there exist a neighborhood $$V \subset \mathbb{R}^n$$ of $$\vec{x}$$ and a single $$C^1$$ function $$g \colon V \cap M \to \mathbb{R}$$ such that
$$g(\vec{x}) = 0, \quad X \cap V = \{g \geq 0\}, \quad \text{and } [Dg(\vec{x})] \colon T_{\vec{x}} M \to \mathbb{R} \text{ is surjective.}$$
Here $$[Dg(\vec{x})]$$ is the derivative (the Jacobian) of $$g$$ at $$\vec{x}$$, and $$T_{\vec{x}} M$$ is the tangent space of $$M$$ at $$\vec{x}$$.

My first question is: why must the derivative be surjective? Furthermore, why does the derivative of $$g$$ map from the tangent space of $$M$$ at $$\vec{x}$$ instead of from $$V \cap M$$ (which is the domain of $$g$$)?

My second question is best phrased with an example: consider the region bounded by the graphs of $$y=x^2$$ and $$x=y^2$$. Apparently, the origin and $$(1,1)$$ are not smooth points of the boundary. How so? This is stated without proof, but the nonexistence of a function $$g$$ for those points is not clear to me.

## linear algebra – Construct matrices

I came across the following question during a course.

Create three matrices $$A_1, A_2, A_3$$ such that
$$\begin{array}{l} \mathcal{C}\left(A_{1}\right) \cap \mathcal{C}\left(A_{2}\right) = \{\mathbf{0}\}, \\ \mathcal{C}\left(A_{1}\right) \cap \mathcal{C}\left(A_{3}\right) = \{\mathbf{0}\}, \end{array}$$
but $$\mathcal{C}\left(A_{1}\right) \cap \mathcal{C}\left(A_{2} : A_{3}\right) \neq \{\mathbf{0}\}$$.

Here $$\mathcal{C}(A)$$ denotes the vector space generated by the columns of the matrix $$A$$, and $$A_2 : A_3$$ denotes the larger matrix partitioned into the blocks $$A_2, A_3$$.
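A concrete candidate (my own example, not from the course) can be checked numerically with the rank identity $$\dim\big(\mathcal{C}(A) \cap \mathcal{C}(B)\big) = \operatorname{rank} A + \operatorname{rank} B - \operatorname{rank}[A : B]$$, which follows from $$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$:

```python
import numpy as np

# Candidate construction: three lines in R^2.
A1 = np.array([[1.], [1.]])   # span{(1, 1)}
A2 = np.array([[1.], [0.]])   # span{(1, 0)}
A3 = np.array([[0.], [1.]])   # span{(0, 1)}

def dim_intersection(A, B):
    # dim(C(A) ∩ C(B)) = rank A + rank B - rank [A : B],
    # since C([A : B]) = C(A) + C(B).
    r = np.linalg.matrix_rank
    return r(A) + r(B) - r(np.hstack([A, B]))

print(dim_intersection(A1, A2))                   # 0
print(dim_intersection(A1, A3))                   # 0
print(dim_intersection(A1, np.hstack([A2, A3])))  # 1
```

The point of the example: $$\mathcal{C}(A_2 : A_3)$$ is all of $$\mathbb{R}^2$$, so it contains $$\mathcal{C}(A_1)$$ even though $$\mathcal{C}(A_1)$$ meets each of $$\mathcal{C}(A_2)$$, $$\mathcal{C}(A_3)$$ only in $$\mathbf{0}$$.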