Linear independence of complex polynomials and a “sum of squares” conjecture

This will take me some time to explain. Let $n \geq 2$ be a fixed integer. Let $p_i(z)$, for $i = 1,\ldots,n$, be $n$ nonzero complex polynomials of degree at most $n-1$. I am interested in reformulating the question of whether these $n$ complex polynomials are linearly independent over $\mathbb{C}$, using the so-called “squaring map”, which is familiar to people working with spinors: it is a quadratic map sending spinors to vectors. I will define everything.

For $i = 1, \ldots, n$, write

$$ p_i(z) = \prod_{j \neq i} L_{ij}(z) $$

where $j$ runs over the values from $1$ to $n$ that are different from $i$, and each $L_{ij}(z)$ is a (nonzero) complex polynomial of degree at most $1$, so that

$$L_{ij}(z) = a_{ij} z + b_{ij}.$$

Of course, for a fixed $i$, the $L_{ij}$ are not uniquely determined: the linear factors can be permuted, or rescaled by nonzero constants whose product is $1$. In the end, our construction will be independent of such ambiguities.

Given a nonzero polynomial

$$L(z) = az + b$$

of degree at most $1$, one can form a nonzero element $\psi_L \in \mathbb{C}^2$ out of its coefficients, namely

$$\psi_L = \left( \begin{array}{c} a \\ b \end{array} \right).$$

We can now define the “squaring” $Sq(L)$ of the polynomial $L(z)$ to be

$$ Sq(L) := \psi_L \psi_L^* = \left( \begin{array}{cc} |a|^2 & a \bar{b} \\ \bar{a} b & |b|^2 \end{array} \right). $$

We also define the “squaring” of a nonzero complex polynomial of degree at most $n-1$ to be
the symmetric tensor product (over $\mathbb{R}$) of the squarings of its linear factors. So for instance,

$$ Sq(p_i) := \bigodot_{j \neq i} Sq(L_{ij}). $$

I should say what I mean by the symmetric tensor product. Each $Sq(L_{ij})$ has an index up and an index down, with each of these indices taking values in $\{1,2\}$. In order to define $Sq(p_i)$, first form the outer product (or tensor product if you prefer) of all the $Sq(L_{ij})$, for $j \neq i$, then completely symmetrize all the indices up together, and completely symmetrize all the indices down together. Since each index can only take on $2$ possible values, a fully symmetric set of $n-1$ such indices has only $n$ independent components, so the resulting tensor can be viewed simply as an $n \times n$ matrix, with the row (resp. column) index labeling the possible values of the symmetrized indices up (resp. down).
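To make this concrete, here is a small numerical sketch in Python/NumPy (the function names are mine). It builds $Sq(L)$ for each linear factor and carries out the symmetrization literally, averaging over all index strings with a given number of indices equal to each value:

import numpy as np
from itertools import product

def sq_linear(a, b):
    # Sq(L) = psi_L psi_L^* for L(z) = a z + b
    psi = np.array([a, b], dtype=complex)
    return np.outer(psi, psi.conj())

def sq_poly(factors):
    # factors: the n-1 coefficient pairs (a_j, b_j) of the linear factors.
    # Entry (r, s) of the result is the fully symmetrized component with
    # r up indices and s down indices equal to the second value, obtained
    # by averaging over all index strings with those counts.
    mats = [sq_linear(a, b) for (a, b) in factors]
    k = len(mats)                      # k = n - 1
    n = k + 1
    strings = list(product((0, 1), repeat=k))
    out = np.zeros((n, n), dtype=complex)
    for r in range(n):
        ups = [I for I in strings if sum(I) == r]
        for s in range(n):
            downs = [J for J in strings if sum(J) == s]
            total = 0j
            for I in ups:
                for J in downs:
                    term = 1 + 0j
                    for m in range(k):
                        term *= mats[m][I[m], J[m]]
                    total += term
            out[r, s] = total / (len(ups) * len(downs))
    return out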

One can check that $Sq(p_i)$ is indeed well defined, despite the ambiguities in the definitions of the factors $L_{ij}$.

Define the following “sum of squares”:

$$ S := \sum_{i=1}^n Sq(p_i),$$

which can be thought of as an $n \times n$ matrix, by the remarks above.

I conjecture that the $p_i$, for $i = 1, \ldots, n$, are linearly independent over $\mathbb{C}$ iff their “sum of squares” $S$ is nonsingular.

For $n = 2$, this is straightforward. I did some numerical simulations for $n = 3$, which seem to confirm my conjecture. However, I have not done any numerical simulations for $n > 3$, so it could perhaps fail for $n = 4$; I am not sure yet.
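For what it is worth, here is the kind of random experiment I have in mind for general $n$, reusing sq_poly and NumPy from the sketch above (an experiment, not a proof). It draws random linear factors, forms $S$, and compares the eigenvalue of $S$ closest to $0$ with the rank of the coefficient matrix of the $p_i$:

def coeffs(factors):
    # coefficient vector of prod_j (a_j z + b_j); it always has length n
    c = np.array([1 + 0j])
    for (a, b) in factors:
        c = np.convolve(c, np.array([a, b], dtype=complex))
    return c

rng = np.random.default_rng(0)
n = 4
polys = [[tuple(rng.standard_normal(2) + 1j * rng.standard_normal(2))
          for _ in range(n - 1)] for _ in range(n)]
S = sum(sq_poly(f) for f in polys)
C = np.array([coeffs(f) for f in polys])
print("rank of coefficient matrix:", np.linalg.matrix_rank(C))
# S is Hermitian here, so eigvalsh applies
print("eigenvalue of S closest to 0:", np.abs(np.linalg.eigvalsh(S)).min())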

If someone has any comments or, better, knows how to prove or disprove my conjecture, then please share your knowledge.

calculus – Tests of independence for 2 or more samples

I need your help.
Almost every theorem in probability theory and mathematical statistics requires independence of two or more random variables, or of samples. There are some tests for categorical data, but I do not see such tests for samples containing numbers, i.e. non-categorical data. Could you please provide information about such tests? Thanks a lot.
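Not an answer, just to make the request concrete: for paired numeric samples, one standard starting point is a rank-based test such as Spearman's, which SciPy exposes directly. A minimal sketch (the data here is synthetic, purely for illustration):

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 0.6 * x + 0.8 * rng.standard_normal(200)   # dependent pairs

# H0: no (monotone) association between x and y
rho, p = spearmanr(x, y)
print(f"Spearman rho = {rho:.3f}, p-value = {p:.3g}")

Note that rank tests only detect monotone dependence; for general dependence there are distance-correlation and kernel (HSIC) tests, available in third-party packages such as dcor.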

rt.representation theory – Independence between $X_{n-k}$ and $\sum\limits_i Y_{n-i}-Y_{n-k}$

Suppose $(X_i,Y_i)$, $i=1,\ldots,n$, is an i.i.d. sample from a joint distribution $F$, and there is dependence between the two variables, say $R$. Denote the order statistics of the two variables by $X_{1:n},\ldots, X_{n:n}$ and $Y_{1:n},\ldots, Y_{n:n}$ respectively.

Now by Rényi’s representation, it is possible to show that $X_{n-k:n}$ and $\sum\limits_i X_{n-i:n}-X_{n-k:n}$ are independent.

I want to check whether or not $X_{n-k:n}$ and $\sum\limits_i Y_{n-i:n}-Y_{n-k:n}$ are independent.
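One way to probe this numerically before attempting a proof (a sketch only; it assumes exponential marginals, where Rényi's representation applies, a Gaussian copula for the dependence, and reads the sum as running over $i = 0,\dots,k$):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, k, reps, r = 20, 5, 20000, 0.7

# Dependent (X, Y) pairs with Exp(1) marginals via a Gaussian copula.
z = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=(reps, n))
xy = -np.log(1 - norm.cdf(z))
xs = np.sort(xy[..., 0], axis=1)   # X order statistics, ascending
ys = np.sort(xy[..., 1], axis=1)   # Y order statistics, ascending

a = xs[:, n - 1 - k]                                   # X_{n-k:n}
b = ys[:, n - 1 - k:].sum(axis=1) - ys[:, n - 1 - k]   # sum_i Y_{n-i:n} - Y_{n-k:n}

# Zero correlation would not prove independence, but a clearly
# nonzero value is already evidence against it.
print("sample correlation:", np.corrcoef(a, b)[0, 1])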

vector spaces – understanding span, linear independence and basis by using dimension

Before I start explaining what confuses me, I’m sorry about my poor English. I’m not good at English. lol

If V is a finite-dimensional vector space, let {v1, v2, ⋯, vn} be an arbitrary basis of V.
(a) Any set of more than n vectors is a linearly dependent set.
(b) Any set of fewer than n vectors can’t span V.

I want to know whether (b) means that, if the basis of V has n vectors and we have more than n vectors, then they automatically span V.

I’m confused because I’m not sure I understand the concepts of span and linear combination exactly.

If S={v1,v2,⋯,vr} and S’={w1,w2,⋯,wk} are sets of vectors contained in a vector space V, then span{v1,v2,⋯,vr}=span{w1,w2,⋯,wk} if and only if each vector of S is a linear combination of w1,w2,⋯,wk and each vector of S’ is a linear combination of v1,v2,⋯,vr.

I want to know whether this sentence is right:
If the basis has *n* vectors *(I mean the number of basis vectors is n)* and n≦r, n≦k, then S and S' can span the vector space V, and S and S' are linearly dependent sets.
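Not part of the question itself, but a quick way to experiment with these statements (a Python sketch; the rank of a matrix whose rows are your vectors equals the dimension of their span):

import numpy as np

# Three vectors in R^2 (more than dim V = 2): dependent by (a),
# and in this particular case they happen to span R^2.
s = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(np.linalg.matrix_rank(s))   # 2, so span(s) = R^2

# But three vectors in R^2 need not span it:
t = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(np.linalg.matrix_rank(t))   # 1, so span(t) is only a line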


How to show independence and uniform distribution of hash codes from k-wise independent hash functions?

Most definitions of a $k$-wise independent family of hash functions I have encountered state that a family $H$ of hash functions from $D$ to $R$ is $k$-wise independent if for all distinct $x_1, x_2,\dots, x_k \in D$ and $y_1, y_2,\dots, y_k \in R$,

$$\mathbb{P}_{h \in H}(h(x_1) = y_1, h(x_2) = y_2, \dots, h(x_k) = y_k) = \frac{1}{|R|^k}$$

The Wikipedia article on k-wise independent hash functions (which uses the above definition) claims that the definition is equivalent to the following two conditions:

(i) For all $x \in D$, $h(x)$ is uniformly distributed in $R$ given that $h$ is randomly chosen from $H$.

(ii) For any fixed distinct keys $x_1, x_2,\dots, x_k \in D$, as $h$ is randomly drawn from $H$, the hash codes $h(x_1), h(x_2), \dots, h(x_k)$ are independent random variables.

It is not obvious to me how one proves (i) from the above definition without explicitly assuming (ii) in the definition as well (and vice-versa). How is the definition sufficient for proving both (i) and (ii)?
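To experiment with the definition on a concrete family, here is a small Python sketch using the classic construction of a $k$-wise independent family, namely degree-$(k-1)$ polynomials over a prime field (the choices $p = 5$ and $k = 2$ are mine, just for the demo):

import itertools
from collections import Counter

p, k = 5, 2                 # D = R = Z_p, pairwise independence
# One hash function per coefficient tuple of a degree-(k-1) polynomial.
family = list(itertools.product(range(p), repeat=k))   # p^k functions

def h(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

x1, x2 = 0, 1
pairs = Counter((h(c, x1), h(c, x2)) for c in family)
marg = Counter(h(c, x1) for c in family)

# Definition: every pair (y1, y2) is hit by exactly |H| / p^2 functions ...
assert all(v == len(family) // p**2 for v in pairs.values())
# ... and empirically the marginal of h(x1) is uniform as well.
assert all(v == len(family) // p for v in marg.values())
print("pair counts:", dict(pairs))

This only checks the equivalence empirically for one family, of course; the question of how to derive (i) and (ii) from the definition in general still stands.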

multivariable calculus – Linear independence of gradients of bivariate polynomials projected on unit vectors

Suppose we have an $(N\times M)$ matrix $\mathbf{A}$ whose entry $(i,j)$ is the gradient of the $j$-th bivariate basis polynomial at a point $(x_i,y_i)$, projected onto a local unit vector $\vec{n}_i$:

$A_{ij} = \vec{\mathrm{grad}}\, p_j(x_i,y_i)\cdot \vec{n}_i, \quad i = 1,\cdots,N, \quad j = 1,\cdots,M$

where $p_j$ is the $j^{th}$ bivariate basis polynomial of degree $\leq m$, $M = \frac{(m+1)(m+2)}{2}$ is the number of basis polynomials, and $\vec{n}_i = (n_{xi},n_{yi})^T$ with $n_{xi}^2+n_{yi}^2=1$.

Example: if $m=2$, then $M=6$, $(p_1,\cdots,p_6) = (1, x, y, x^2, xy, y^2)$, and $\vec{\mathrm{grad}}\, p_j(x,y)\cdot \vec{n} = (0, n_x, n_y, 2xn_x, yn_x+xn_y, 2yn_y)$.

Question: If, for a set of $N$ distinct points $\{(x_i, y_i),\ i=1,\cdots,N\}$, the matrix $\mathbf{A}$ has linearly independent rows when $\vec{n}_1 = \cdots = \vec{n}_N$, does this row independence remain true for the same set of points when we have $N$ distinct $\vec{n}_i$? If yes, how can one prove it?

Thought: This problem arises when dealing with gradient-based multivariate interpolation. I was trying to prove that the rows stay equivalent after an arbitrary 2D rotation of each $\vec{n}_i$, but without success.
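A quick numerical experiment for the $m=2$ example above may help probe the question (a sketch only, it proves nothing; the point set and directions are random choices of mine):

import numpy as np

def grad_dot_n(x, y, nx, ny):
    # row of grad(p_j) . n for the basis (1, x, y, x^2, xy, y^2)
    return np.array([0.0, nx, ny, 2*x*nx, y*nx + x*ny, 2*y*ny])

rng = np.random.default_rng(1)
N = 5
pts = rng.standard_normal((N, 2))

t0 = rng.uniform(0, 2*np.pi)           # one common direction
A_common = np.array([grad_dot_n(x, y, np.cos(t0), np.sin(t0))
                     for x, y in pts])

ts = rng.uniform(0, 2*np.pi, N)        # N distinct directions
A_dist = np.array([grad_dot_n(x, y, np.cos(t), np.sin(t))
                   for (x, y), t in zip(pts, ts)])

print(np.linalg.matrix_rank(A_common), np.linalg.matrix_rank(A_dist))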

Adding frame-rate independence to more complicated 2d game physics

Currently, I am having trouble converting the physics engine of my 2D game to be frame-rate independent. I have looked at a lot of other questions, but I haven’t been able to make any of their solutions work with my game’s physics. The problem with changing the physics engine is that the game already has a lot of “units” that you can be (it’s similar to diep.io), all of which have been balanced out already, so a massive change to the physics engine would take a lot of time to fix all of the content.

The physics engine works by running a “physics” function followed by a “friction” function. “physics” adds the current acceleration to the velocity, resets the acceleration, and adds the velocity to the position. “friction” tries to bring the magnitude of the velocity down toward a “max speed” value: it computes how far the magnitude exceeds maxSpeed (the “excess”), then sets the magnitude of the velocity (by normalizing and rescaling its x and y) to maxSpeed plus the excess divided by (1 + damp), where “damp” is a value calculated earlier. In effect, each update shrinks the excess speed above maxSpeed, so the speed gets closer and closer to the actual max speed.
Some pseudocode of what these two functions actually do:

physics() {
  // integrate once per update: acceleration -> velocity -> position
  velocity.x += acceleration.x;
  velocity.y += acceleration.y;
  acceleration.x = 0;
  acceleration.y = 0;
  position.x += velocity.x;
  position.y += velocity.y;
}

friction() {
  // the speed above maxSpeed (the "excess") shrinks geometrically each update
  motion = sqrt(velocity.x ^ 2 + velocity.y ^ 2);
  excess = motion - maxSpeed;
  if (excess > 0) {
    drag = excess / (1 + damp);
    newVelocity = maxSpeed + drag;
    velocity.x = (velocity.x / motion) * newVelocity;
    velocity.y = (velocity.y / motion) * newVelocity;
  }
}

I’ve tried multiplying basically every value by a “scale” factor (which would later just encode how much to speed up or slow down the physics), tried raising things to powers, and a lot of other things. Generally, “maxSpeed” is set to 0 for most of the player entities, and “damp” is usually a value from 1 to 3.
Is it even possible to scale this physics system at all? And if so, what should I do to make that work?
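For comparison, here is one possible dt-based variant, written as a hedged sketch in Python (it assumes the original code was tuned for a fixed tick of 1/60 s; the state dictionary and names are mine, not from the question). The idea is to measure the frame time in units of old ticks, scale the integration by that factor, and make the excess-speed decay exact for any dt by exponentiating its per-tick factor:

TICK = 1 / 60    # assumed duration of one update in the original engine

def physics(s, dt):
    k = dt / TICK                 # how many "old ticks" this frame covers
    s["vx"] += s["ax"] * k        # acceleration was applied once per tick
    s["vy"] += s["ay"] * k
    s["ax"] = s["ay"] = 0.0
    s["x"] += s["vx"] * k
    s["y"] += s["vy"] * k

def friction(s, dt, max_speed, damp):
    motion = (s["vx"]**2 + s["vy"]**2) ** 0.5
    excess = motion - max_speed
    if excess > 0:
        # The old code kept excess/(1+damp) per tick, i.e. geometric decay;
        # exponentiating makes that decay exact for an arbitrary dt.
        excess *= (1 + damp) ** (-dt / TICK)
        scale = (max_speed + excess) / motion
        s["vx"] *= scale
        s["vy"] *= scale

Even with this, the coupled velocity/position update is only approximately invariant under a variable dt, which is why many engines instead keep the original fixed-step functions and call them from a fixed-timestep loop with an accumulator; that approach would also preserve your existing balance tuning exactly.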