ag.algebraic geometry – Picard group of connected linear algebraic group

Here’s a statement:

Suppose $G$ is a connected linear algebraic group over a field $k$. Then $\operatorname{Pic}(G)$ is a finite group.

I know this is true when $k=\mathbb{C}$. My question is: does this hold for an arbitrary field $k$? If not, does it hold when $G$ is furthermore smooth, or even reductive? Is there any reference?

Thanks for any help.

least squares – Quadratic Programming: Can a linear constraint convert a convex, quadratic program to a non-convex one?

I consider a quadratic program of the form

$$\text{minimize } \|Ax - b\|_2^2, \quad x \in \mathbb{R}^n$$ (least squares) in cvxpy and add linear constraints of the form $Bx \le c$ one at a time.

The problem is solvable without constraints and with the first constraint, but fails after adding the second constraint with the message that the problem is non-convex.

Now I’m trying to visualise how a convex problem could become non-convex by adding a linear constraint. At least in the 2D case I cannot construct an example (visually) where this happens, because intersecting a convex solution space with a half-space (a linear constraint) keeps the solution space convex in my mind.

Could anyone give me a visual counterexample where this happens?
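For what it’s worth, your intuition can be checked directly: the feasible set $\{x : Bx \le c\}$ is an intersection of half-spaces and therefore convex, and adding a row to $B$ only intersects it with one more half-space. A minimal numpy sketch of the definition (the particular $B$ and $c$ here are made-up toy data):

```python
import numpy as np

# Made-up constraint data: the feasible set {x : B x <= c} is an
# intersection of half-spaces, hence convex.  Adding a row to B just
# intersects it with one more half-space.
B = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
c = np.array([1.0, 0.0, 0.0])  # triangle with vertices (0,0), (1,0), (0,1)

def feasible(x, tol=1e-12):
    return bool(np.all(B @ x <= c + tol))

# Definition of convexity: every convex combination of two feasible
# points is again feasible.
x1, x2 = np.array([0.2, 0.3]), np.array([0.7, 0.1])
assert feasible(x1) and feasible(x2)
for t in np.linspace(0.0, 1.0, 11):
    assert feasible(t * x1 + (1.0 - t) * x2)
```

So if cvxpy reports non-convexity after the second constraint, it may be worth checking how that constraint is expressed (e.g. whether the expression violates cvxpy’s DCP rules) rather than the geometry itself.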

The proof of Theorem 9 in Sec. 5.7 (The Grassman Ring) of Hoffman & Kunze’s Linear Algebra

**Theorem.** Let $K$ be a commutative ring with identity and let $V$ be a module over $K$. Then the exterior product is an associative operation on the alternating multilinear forms on $V$. In other words, if $L$, $M$, and $N$ are alternating multilinear forms on $V$ of degrees $r$, $s$, and $t$, respectively, then

$$(L\wedge M)\wedge N = L\wedge (M\wedge N).$$


Let $G(r,s,t)$ be the subgroup of $S_{r+s+t}$ that consists of the permutations which permute the sets

$$\{1,\dots,r\}, \qquad \{r+1,\dots,r+s\}, \qquad \{r+s+1,\dots,r+s+t\}$$

within themselves. Then $(\operatorname{sgn}\mu)\,(L\otimes M\otimes N)_{\mu}$ is the same multilinear function for all $\mu$ in a given left coset of $G(r,s,t)$. Choose one element from each left coset of $G(r,s,t)$, and let $E$ be the sum of the corresponding terms $(\operatorname{sgn}\mu)\,(L\otimes M\otimes N)_{\mu}$. Then $E$ is independent of the way in which the representatives $\mu$ are chosen, and

$$r!\,s!\,t!\,E = \pi_{r+s+t}(L\otimes M\otimes N).$$

We shall show that $(L\wedge M)\wedge N$ and $L\wedge (M\wedge N)$ are both equal to $E$.

Let $G(r+s,t)$ be the subgroup of $S_{r+s+t}$ that permutes the sets

$$\{1,\dots,r+s\}, \qquad \{r+s+1,\dots,r+s+t\}$$

within themselves.
Let $T$ be any set of permutations of $\{1,\dots,r+s+t\}$ which contains exactly one element from each left coset of $G(r+s,t)$.
By (5-50),

$$(L\wedge M)\wedge N = \sum (\operatorname{sgn}\tau)\,\big((L\wedge M)\otimes N\big)_{\tau}$$

extended over $\tau$ in $T$. Now let $G(r,s)$ be the subgroup of $S_{r+s}$ that permutes the sets

$$\{1,\dots,r\}, \qquad \{r+1,\dots,r+s\}$$

within themselves. Let $S$ be any set of permutations of $\{1,\dots,r+s\}$ which contains exactly one element from each left coset of $G(r,s)$. From (5-50) and what we have shown above, it follows that

$$(L\wedge M)\wedge N = \sum (\operatorname{sgn}\sigma)(\operatorname{sgn}\tau)\,\big((L\otimes M)_{\sigma}\otimes N\big)_{\tau}$$

extended over all pairs $(\sigma,\tau)$ in $S\times T$. If we agree to identify each $\sigma$ in $S_{r+s}$ with the element of $S_{r+s+t}$ which agrees with $\sigma$ on $\{1,\dots,r+s\}$ and is the identity on $\{r+s+1,\dots,r+s+t\}$, then we may write

$$(L\wedge M)\wedge N = \sum \operatorname{sgn}(\tau\sigma)\,\big((L\otimes M\otimes N)_{\sigma}\big)_{\tau}.$$

Now

$$\big((L\otimes M\otimes N)_{\sigma}\big)_{\tau} = (L\otimes M\otimes N)_{\tau\sigma},$$

so that

$$(L\wedge M)\wedge N = \sum \operatorname{sgn}(\tau\sigma)\,(L\otimes M\otimes N)_{\tau\sigma}.$$

Now suppose we have

$$\tau_1\sigma_1 = \tau_2\sigma_2\gamma$$

with $\sigma_i$ in $S$, $\tau_i$ in $T$, and $\gamma$ in $G(r,s,t)$. Then $\tau_2^{-1}\tau_1 = \sigma_2\gamma\sigma_1^{-1}$, and since $\sigma_2\gamma\sigma_1^{-1}$ lies in $G(r+s,t)$, it follows that $\tau_1$ and $\tau_2$ are in the same left coset of $G(r+s,t)$. Therefore $\tau_1 = \tau_2$, and $\sigma_1 = \sigma_2\gamma$. But this implies that $\sigma_1$ and $\sigma_2$ (regarded as elements of $S_{r+s}$) lie in the same coset of $G(r,s)$; hence $\sigma_1 = \sigma_2$. Therefore, the products $\tau\sigma$ corresponding to the pairs $(\tau,\sigma)$ in $T\times S$ are all distinct and lie in distinct cosets of $G(r,s,t)$.
Since there are exactly

$$\frac{(r+s+t)!}{r!\,s!\,t!}$$

left cosets of $G(r,s,t)$ in $S_{r+s+t}$ (which is also the number of pairs in $T\times S$), it follows that $(L\wedge M)\wedge N = E$. By an analogous argument, $L\wedge(M\wedge N) = E$ as well. $\blacksquare$


I have a question about the following part of this proof:

**Now suppose we have

$$\tau_1\sigma_1 = \tau_2\sigma_2\gamma$$

with $\sigma_i$ in $S$, $\tau_i$ in $T$, and $\gamma$ in $G(r,s,t)$.**

What is the reason for assuming that $\tau_1\sigma_1 = \tau_2\sigma_2\gamma$? And I don’t understand how this assumption shows that “the products $\tau\sigma$ corresponding to the pairs $(\tau,\sigma)$ in $T\times S$ are all distinct and lie in distinct cosets of $G(r,s,t)$.”
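The assumption $\tau_1\sigma_1 = \tau_2\sigma_2\gamma$ is exactly the statement that two products land in the same left coset of $G(r,s,t)$, and the proof shows this forces $(\tau_1,\sigma_1) = (\tau_2,\sigma_2)$. The counting can be checked concretely in the smallest case $r=s=t=1$, where $G(1,1,1)$ is trivial, so "distinct cosets" just means the six products $\tau\sigma$ are pairwise distinct. A small Python sketch (the tuple encoding of permutations is my own convention):

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples p with p[i] = image of i.
def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))

# G(r+s, t) = G(2, 1): permutes {0,1} and {2} within themselves.
G21 = [p for p in S3 if p[2] == 2]

# T: one representative per left coset tau * G21.
T, seen = [], set()
for p in S3:
    coset = frozenset(compose(p, g) for g in G21)
    if coset not in seen:
        seen.add(coset)
        T.append(p)

# S: coset representatives of G(1,1) (trivial) in S_2, embedded in S_3
# as the permutations fixing 2.
S = [p for p in S3 if p[2] == 2]

# G(1,1,1) is trivial, so "distinct cosets" = pairwise distinct products.
products = [compose(tau, sigma) for tau in T for sigma in S]
assert len(set(products)) == len(products) == 6  # all of S_3, no repeats
```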

algorithms – Linear programming reduce errors

Most normal linear programming problems look like this:

[figure: a typical LP whose constraints intersect in a feasible region]

We choose some point in the doubly shaded area that optimizes our objective and we’re good to go.

However, I’ve come across a setting where it’s fairly common to get an LP problem like this:

[figure: an LP whose constraints have no common feasible region]

I’d like to employ some metric such that the returned solution minimizes the errors to the closest feasible points of any constraints it does not fulfill, so that it ends up on the line right between the conflicting constraints (something like minimizing OLS or LAD residuals).

Is there any formal algorithm for this or common way this is done?
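What you’re describing is essentially the standard “elastic” or soft-constraint formulation: give each constraint a nonnegative slack variable and minimize the total violation. Minimizing the sum of slacks gives an LAD-style (L1) compromise; squaring them instead gives an OLS-style one (then it’s a QP rather than an LP). A small sketch with scipy’s `linprog`, using a made-up pair of conflicting constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical infeasible constraints: x >= 1 and x <= -1, written as
# A x <= b with A = [[-1], [1]], b = [-1, -1].  Soften each row with a
# slack s_i >= 0 and minimize the total violation sum(s_i):
#
#   minimize    s1 + s2
#   subject to  -x - s1 <= -1
#                x - s2 <= -1
#                s1, s2 >= 0
#
# Variables are z = [x, s1, s2].
c = [0.0, 1.0, 1.0]
A_ub = [[-1.0, -1.0, 0.0],
        [ 1.0,  0.0, -1.0]]
b_ub = [-1.0, -1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None), (0, None)])
print(res.fun)  # minimal total violation is 2
```

With the sum-of-slacks objective any $x$ in $[-1, 1]$ is optimal; adding a small quadratic term (or a second pass) would select the midpoint $x = 0$ between the two conflicting constraints.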

glsl – Is linear filtering possible on depth textures in OpenGL?

I’m working on shadow maps in OpenGL (using C#).

First, I’ve created a framebuffer and attached a depth texture as follows:

// Generate the framebuffer.
var framebuffer = 0u;

glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Generate the depth texture.
var shadowMap = 0u;

glGenTextures(1, &shadowMap);
glBindTexture(GL_TEXTURE_2D, shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap, 0);

// Set the read and draw buffers (a depth-only framebuffer has no color buffer).
glReadBuffer(GL_NONE);
glDrawBuffer(GL_NONE);

Later (after rendering to the shadow map and preparing the main scene), I sample from the shadow map in a GLSL fragment shader as follows:

float shadow = texture(shadowMap, shadowCoords.xy).r;

where `shadowCoords` (a `vec3`) contains the fragment’s coordinates from the perspective of the global directional light source (the one used to render the shadow map). The result is shown below; as expected, shadow edges are jagged due to using GL_NEAREST filtering.

[screenshot: hard, aliased shadow edges]
To improve smoothness, I tried replacing the shadow map’s filtering with GL_LINEAR, but the results haven’t changed. I understand there are other avenues I could take (like percentage-closer filtering), but I’d like to answer this question first, if only for my sanity. I’ve also noticed that other texture parameters (like GL_CLAMP_TO_EDGE rather than GL_REPEAT for wrapping) don’t take effect on the shadow map, which hints that this may be a limitation of depth textures in general.

To reiterate: Is linear filtering possible using depth textures in OpenGL?

linear algebra – Are there any results in generalizing matrix theory to multidimensional arrays?

In matrix theory (2-dimensional arrays) we can define addition, multiplication, rank, determinant, etc. I’m working on generalizing as many of these notions as possible to multidimensional arrays. Are there any results in this direction? I’d really appreciate it if you could provide some references.
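There is a well-developed theory here (multilinear/tensor algebra; the Kolda–Bader survey “Tensor Decompositions and Applications” is a common starting point). As a taste of one generalization, the mode-$n$ unfoldings of a $d$-way array are ordinary matrices, and the tuple of their ranks is the multilinear rank. A numpy sketch (the array is arbitrary toy data):

```python
import numpy as np

# View a d-way array as a tensor and study its mode-n unfoldings
# (matricizations).  The tuple of their matrix ranks is the multilinear
# rank, one of several inequivalent generalizations of matrix rank.
X = np.arange(24.0).reshape(2, 3, 4)  # arbitrary toy data

def unfold(X, mode):
    # Move `mode` to the front, then flatten the remaining axes into columns.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

multilinear_rank = tuple(int(np.linalg.matrix_rank(unfold(X, m)))
                         for m in range(X.ndim))
print(multilinear_rank)
```

Note that, unlike the matrix case, other tensor “ranks” (e.g. CP rank) need not agree with the multilinear rank, which is part of what makes the subject interesting.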

linear algebra – Diagonalization of an nxn matrix

What conditions must an $n\times n$ matrix satisfy to always be diagonalizable?
I know that having $n$ distinct eigenvalues suffices, but suppose the only information we have is that $0$ is one of its eigenvalues: can the matrix still be diagonalized? I personally don’t think so, but I would like to be sure. Another case I’m wondering about: the matrix

$B = \lambda I$, where $\lambda$ is not $0$, is diagonalizable, right?
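One can probe this numerically: a matrix is diagonalizable iff it has $n$ linearly independent eigenvectors, and an eigenvalue of $0$ is not by itself an obstruction, e.g. $\operatorname{diag}(0,1)$ is already diagonal, while a nilpotent Jordan block is not diagonalizable. A sketch with numpy (the rank-of-eigenvectors test is a quick heuristic, not a symbolic proof):

```python
import numpy as np

# A matrix is diagonalizable iff it has n linearly independent
# eigenvectors; n *distinct* eigenvalues is sufficient but not necessary.
def is_diagonalizable(A, tol=1e-10):
    _, V = np.linalg.eig(A)          # columns of V are eigenvectors
    return np.linalg.matrix_rank(V, tol=tol) == A.shape[0]

D = np.diag([0.0, 1.0])              # eigenvalue 0, yet already diagonal
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # eigenvalue 0 twice, defective
print(is_diagonalizable(D), is_diagonalizable(N))
```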

numerical linear algebra – Numerically solving the optimization problem $min | x |_{ell^1} s.t. | Ax-b |_{ell^2} leq delta$

Consider a linear system $Ax=b$ with matrix $A$ and right hand side $b$ and suppose one is interested in a sparse solution of this system. In the situation where the right hand side is corrupted by noise one can solve the minimization problem
$$\min \|Ax-b\|_{\ell^2} \quad \text{s.t.} \quad \|x\|_{\ell^1} \le \delta.$$

This corresponds to the LASSO algorithm with regularization parameter $delta$. On the other hand one can try to solve the optimization problem
$$\min \|x\|_{\ell^1} \quad \text{s.t.} \quad \|Ax-b\|_{\ell^2} \le \delta. \tag{1}$$

This problem was, for instance, considered in Candès’s famous paper “Towards a mathematical theory of super-resolution”. I’m interested in solving problem (1) numerically in Python, but I have limited Python skills. I was wondering if there is any implementation which solves problem (1). For the LASSO there are many packages, but I couldn’t find one for problem (1) so far.

Thanks a lot for your help!
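Problem (1) is often called basis pursuit denoising; cvxpy can state it directly, and the SPGL1 package is a dedicated solver for this formulation. Staying within scipy, here is a rough sketch using the classic split $x = u - v$ with $u, v \ge 0$, so the objective becomes the smooth linear function $\sum(u+v)$ and the noise constraint is handled by SLSQP ($A$, $b$, $\delta$ below are made-up toy data; for large problems a dedicated solver is preferable):

```python
import numpy as np
from scipy.optimize import minimize

# Problem (1): min ||x||_1  s.t.  ||A x - b||_2 <= delta,
# reformulated with x = u - v, u, v >= 0, z = [u, v].
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 0.0])
delta = 0.1
m, n = A.shape

def objective(z):
    return z.sum()                    # equals ||x||_1 since z >= 0

def noise_margin(z):                  # >= 0 iff the noise constraint holds
    x = z[:n] - z[n:]
    return delta - np.linalg.norm(A @ x - b)

res = minimize(objective, np.zeros(2 * n),
               method="SLSQP",
               bounds=[(0, None)] * (2 * n),
               constraints=[{"type": "ineq", "fun": noise_margin}])
x = res.x[:n] - res.x[n:]
print(np.round(x, 3))  # shrinks toward b within the noise ball, close to (0.9, 0, 0)
```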