ag.algebraic geometry – Image of a projective variety is closed

Let $X$ be a projective variety and $Y$ an Artin stack. Suppose that $f:X\to Y$ is a morphism of Artin stacks. Is $f(X)$ necessarily a closed substack of $Y$?

This seems like it should be true, and probably one can find it somewhere in the Stacks project, but I cannot locate a good source. I am happy to assume that $Y$ is the quotient of an affine variety by the action of a group scheme. Other assumptions that would make the answer affirmative are also interesting.
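
For comparison, a hedged note on the schematic baseline this question generalizes: when the target is a separated scheme, closedness of the image follows from properness alone,
$$X \text{ proper over } k,\quad Y \text{ separated} \;\Longrightarrow\; f\colon X \to Y \text{ proper} \;\Longrightarrow\; f(X) \subset Y \text{ closed},$$
so the question is really which separatedness-type hypotheses on the Artin stack $Y$ keep this argument alive.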

Completion of a variety in a projective bundle

We work over $\mathbb C$.

Consider a vector bundle $E = \mathcal O(a_0) \oplus \cdots \oplus \mathcal O(a_r)$ over $\mathbb P^1$ of rank $r+1$ and its projectivization $\pi: P(E) \rightarrow \mathbb P^1$.

We can identify $\pi^{-1}(\mathbb P^1 \setminus \{(1,0)\})$ with $\mathbb P^r \times \mathbb C$.
Let $X$ be a hypersurface of degree $d$ in $\mathbb P^r$ and $Y^* = X \times \mathbb C \subset \mathbb P^r \times \mathbb C$.

Is there always a completion $Y$ in $P(E)$ of $Y^*$, where a completion is a variety such that $Y \cap (\mathbb P^r \times \mathbb C) = Y^*$?
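
A hedged remark on existence (writing $U = \mathbb P^1 \setminus \{(1,0)\}$): the Zariski closure should always provide such a completion, because $Y^*$ is closed in the open set $\pi^{-1}(U)$:
$$Y := \overline{Y^*} \subset P(E), \qquad Y \cap (\mathbb P^r \times \mathbb C) = \overline{Y^*} \cap \pi^{-1}(U) = Y^*,$$
the last equality holding since $\overline{Y^*} \cap \pi^{-1}(U)$ is the closure of $Y^*$ inside $\pi^{-1}(U)$, where $Y^*$ is already closed. The content of the question is then how singular this closure is along the fibre over $(1,0)$.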

If we assume that $X$ is smooth, how bad can the singularities of $Y$ be?

In particular, I am considering the case $E = \mathcal O \oplus \mathcal O \oplus \mathcal O(-1) \oplus \mathcal O(1)$ and $d=4$.
I am wondering whether one can take a crepant resolution of singularities of $Y$ for some suitable $X$.

ag.algebraic geometry – A question on linear projection of a smooth projective variety

Let $X$ be a smooth, projective $\mathbb{C}$-variety of dimension $n$. Fix a closed point $x \in X$ and an embedding of $X$ in $\mathbb{P}^m$ for some integer $m$. For a given $d$, denote by $\sigma_d : \mathbb{P}^m \to \mathbb{P}^{N_d}$ the $d$-tuple embedding. My question is: for $d \gg 0$, does there exist a linear subspace $L \subset \mathbb{P}^{N_d}$ of dimension $N_d-n-2$, not intersecting $\sigma_d(X)$, such that for the linear projection from $L$ (sometimes called projection with centre $L$)
$$\pi_L : \sigma_d(X) \to \mathbb{P}^{n+1}$$
we have $\pi_L^{-1}(\pi_L(\sigma_d(x)))=\{\sigma_d(x)\}$, i.e., the preimage of $\sigma_d(x)$ is only $\sigma_d(x)$ itself? Any hint/reference is most welcome.
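
A hedged sketch of the first dimension count one would try (not a complete argument): a point $y \in \sigma_d(X)$ distinct from $\sigma_d(x)$ maps to $\pi_L(\sigma_d(x))$ exactly when the line $\overline{y\,\sigma_d(x)}$ meets $L$, so it suffices that $L$ avoid the cone over $\sigma_d(X)$ with vertex $\sigma_d(x)$. That cone has dimension at most $n+1$ and contains $\sigma_d(X)$, and
$$(N_d - n - 2) + (n + 1) = N_d - 1 < N_d,$$
so a general linear subspace $L$ of dimension $N_d - n - 2$ misses the cone, and in particular misses $\sigma_d(X)$.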

Why does the class of H-trivial monoids correspond to the variety of aperiodic monoids, i.e., monoids that can recognize star-free languages?

I have two similar questions, one about the H-trivial monoids and one about the R-trivial monoids.

  1. I cannot see why the H-trivial monoids, i.e., the monoids whose $H$-classes are singletons, coincide with the variety A of aperiodic monoids, also characterized as the monoids that satisfy the equation $x^{\omega} x = x^{\omega}$.
  2. Similarly, I don’t understand why the R-trivial monoids, i.e., the monoids whose $R$-classes are singletons, coincide with the monoids that satisfy the equation $(xy)^{\omega} x = (xy)^{\omega}$.

Here (a short computational sketch of these definitions follows the list):

  1. $x^{\omega}$ is defined as the limit $\lim\limits_{k\rightarrow\infty} x^{k!}$.
  2. the relation $R$ is defined by $x\,R\,y \iff xM = yM$.
  3. the relation $L$ is defined by $x\,L\,y \iff Mx = My$.
  4. the relation $H$ is defined by $x\,H\,y \iff x\,R\,y \land x\,L\,y$.
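
To make these definitions concrete, here is a minimal Python sketch (my own illustration; the helper names are made up) that computes $x^{\omega}$ by iterating powers until an idempotent appears, and then tests aperiodicity and H-triviality on a finite monoid given by its elements and multiplication law:

from itertools import product

def omega(x, mul):
    # x^omega: the unique idempotent power of x in a finite monoid,
    # reached by iterating x, x^2, x^3, ... until p * p == p.
    p = x
    while mul(p, p) != p:
        p = mul(p, x)
    return p

def is_aperiodic(M, mul):
    # Check the identity x^omega x = x^omega for every element.
    return all(mul(omega(x, mul), x) == omega(x, mul) for x in M)

def is_h_trivial(M, mul):
    # H-trivial: no two distinct elements share both their right ideal
    # xM (R-equivalence) and their left ideal Mx (L-equivalence).
    right = {x: frozenset(mul(x, m) for m in M) for x in M}
    left = {x: frozenset(mul(m, x) for m in M) for x in M}
    return not any(x != y and right[x] == right[y] and left[x] == left[y]
                   for x, y in product(M, repeat=2))

# The group Z/2Z fails both properties; the multiplicative monoid
# {0, 1} satisfies both.
print(is_h_trivial([0, 1], lambda a, b: (a + b) % 2),
      is_aperiodic([0, 1], lambda a, b: (a + b) % 2))  # False False
print(is_h_trivial([0, 1], lambda a, b: a * b),
      is_aperiodic([0, 1], lambda a, b: a * b))        # True True

On these two toy monoids the equational and the Green-relation characterizations agree, which is exactly the coincidence the question asks about in general.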

ag.algebraic geometry – Smooth projective variety with no second homotopy group

I am looking for an example (if such exists) of a smooth projective variety $X$ whose $\mathbb{Q}$-homology is generated by algebraic cycles and yet whose second homotopy group vanishes, $\pi_2(X)=0$. The algebraic cycles that span $H_2(X,\mathbb{Q})$ would then have to come from non-rational curves.

python – validation and test loss for a variety of PyTorch time series forecasting models

Hi everyone, I’m trying to reduce the complexity of some of my Python code. The function below computes the validation and test loss for a variety of PyTorch time series forecasting models. I won’t go into all the intricacies, but it needs to support models that return multiple targets, models that return an output distribution plus a standard deviation (as opposed to a single tensor), and models that require masked elements of the target sequence. Over time this has resulted in long if/else blocks and lots of other bad practices.

I’ve used dictionaries before to replace long if/else statements, but due to the nested nature of this code it doesn’t seem like that would work well here. I also don’t really see the point in just creating more functions, as that only moves the if/else statements somewhere else and requires passing more parameters around. Does anyone have any ideas? Several unit tests already exercise the different paths through this code, but it is still cumbersome to read, and soon I will have even more model variations to support (one possible dispatch-table sketch appears after the function below). Full code in context can be seen at this link.

def compute_validation(validation_loader: DataLoader,
                       model,
                       epoch: int,
                       sequence_size: int,
                       criterion: Type[torch.nn.modules.loss._Loss],
                       device: torch.device,
                       decoder_structure=False,
                       meta_data_model=None,
                       use_wandb: bool = False,
                       meta_model=None,
                       multi_targets=1,
                       val_or_test="validation_loss",
                       probabilistic=False) -> float:
    """Function to compute the validation loss metrics

    :param validation_loader: The data-loader of either validation or test-data
    :type validation_loader: DataLoader
    :param model: The PyTorch model being evaluated
    :type model: torch.nn.Module
    :param epoch: The epoch where the validation/test loss is being computed.
    :type epoch: int
    :param sequence_size: The number of historical time steps passed into the model
    :type sequence_size: int
    :param criterion: The evaluation metric function
    :type criterion: Type(torch.nn.modules.loss._Loss)
    :param device: The device
    :type device: torch.device
    :param decoder_structure: Whether the model should use sequential decoding, defaults to False
    :type decoder_structure: bool, optional
    :param meta_data_model: The model to handle the meta-data, defaults to None
    :type meta_data_model: PyTorchForecast, optional
    :param use_wandb: Whether Weights and Biases is in use, defaults to False
    :type use_wandb: bool, optional
    :param meta_model: Whether the model leverages meta-data, defaults to None
    :type meta_model: bool, optional
    :param multi_targets: The number of targets the model predicts, defaults to 1
    :type multi_targets: int, optional
    :param val_or_test: Whether validation or test loss is computed, defaults to "validation_loss"
    :type val_or_test: str, optional
    :param probabilistic: Whether the model is probabilistic, defaults to False
    :type probabilistic: bool, optional
    :return: The loss of the first metric in the list.
    :rtype: float
    """
    print('Computing validation loss')
    unscaled_crit = dict.fromkeys(criterion, 0)
    scaled_crit = dict.fromkeys(criterion, 0)
    model.eval()
    output_std = None
    multi_targs1 = multi_targets
    scaler = None
    if validation_loader.dataset.no_scale:
        scaler = validation_loader.dataset
    with torch.no_grad():
        i = 0
        loss_unscaled_full = 0.0
        for src, targ in validation_loader:
            src = src if isinstance(src, list) else src.to(device)
            targ = targ if isinstance(targ, list) else targ.to(device)
            i += 1
            if decoder_structure:
                if type(model).__name__ == "SimpleTransformer":
                    targ_clone = targ.detach().clone()
                    output = greedy_decode(
                        model,
                        src,
                        targ.shape[1],
                        targ_clone,
                        device=device)[:, :, 0]
                elif type(model).__name__ == "Informer":
                    multi_targets = multi_targs1
                    filled_targ = targ[1].clone()
                    pred_len = model.pred_len
                    filled_targ[:, -pred_len:, :] = torch.zeros_like(filled_targ[:, -pred_len:, :]).float().to(device)
                    output = model(src[0].to(device), src[1].to(device), filled_targ.to(device), targ[0].to(device))
                    labels = targ[1][:, -pred_len:, 0:multi_targets]
                    src = src[0]
                    multi_targets = False
                else:
                    output = simple_decode(model=model,
                                           src=src,
                                           max_seq_len=targ.shape[1],
                                           real_target=targ,
                                           output_len=sequence_size,
                                           multi_targets=multi_targets,
                                           probabilistic=probabilistic,
                                           scaler=scaler)
                    if probabilistic:
                        output, output_std = output[0], output[1]
                        output, output_std = output[:, :, 0], output_std[0]
                        output_dist = torch.distributions.Normal(output, output_std)
            else:
                if probabilistic:
                    output_dist = model(src.float())
                    output = output_dist.mean.detach().numpy()
                    output_std = output_dist.stddev.detach().numpy()
                else:
                    output = model(src.float())
            if multi_targets == 1:
                labels = targ[:, :, 0]
            elif multi_targets > 1:
                labels = targ[:, :, 0:multi_targets]
            validation_dataset = validation_loader.dataset
            for crit in criterion:
                if validation_dataset.scale:
                    # Should this also do loss.item() stuff?
                    if len(src.shape) == 2:
                        src = src.unsqueeze(0)
                    src1 = src[:, :, 0:multi_targets]
                    loss_unscaled_full = compute_loss(labels, output, src1, crit, validation_dataset,
                                                      probabilistic, output_std, m=multi_targets)
                    unscaled_crit[crit] += loss_unscaled_full.item() * len(labels.float())
                loss = compute_loss(labels, output, src, crit, False, probabilistic, output_std, m=multi_targets)
                scaled_crit[crit] += loss.item() * len(labels.float())
    if use_wandb:
        if loss_unscaled_full:
            scaled = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in scaled_crit.items()}
            newD = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in unscaled_crit.items()}
            wandb.log({'epoch': epoch,
                       val_or_test: scaled,
                       "unscaled_" + val_or_test: newD})
        else:
            scaled = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in scaled_crit.items()}
            wandb.log({'epoch': epoch, val_or_test: scaled})
    model.train()
    return list(scaled_crit.values())[0]
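
One possible direction, sketched below under my own naming assumptions (the ctx dict, the handler names, and decode_batch are hypothetical; greedy_decode and simple_decode are the functions already used above): register one small decode handler per model family, so the if/elif chain collapses into a table lookup.

def _decode_simple_transformer(model, src, targ, ctx):
    # Mirrors the SimpleTransformer branch of compute_validation.
    targ_clone = targ.detach().clone()
    return greedy_decode(model, src, targ.shape[1], targ_clone,
                         device=ctx["device"])[:, :, 0]

def _decode_default(model, src, targ, ctx):
    # Mirrors the simple_decode fallback branch.
    return simple_decode(model=model, src=src, max_seq_len=targ.shape[1],
                         real_target=targ, output_len=ctx["sequence_size"],
                         multi_targets=ctx["multi_targets"],
                         probabilistic=ctx["probabilistic"],
                         scaler=ctx["scaler"])

DECODE_HANDLERS = {"SimpleTransformer": _decode_simple_transformer}

def decode_batch(model, src, targ, ctx):
    # Dispatch on the model's class name, falling back to simple_decode.
    handler = DECODE_HANDLERS.get(type(model).__name__, _decode_default)
    return handler(model, src, targ, ctx)

Each new model variation then costs one handler plus one registry entry rather than another elif arm; the Informer branch would become a third handler that also returns its own labels.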

algebraic geometry – Why is the closure of a “$\mathbb{K}$-cone generated by an irreducible affine variety” irreducible?

I am trying to do the second exercise on page 37 of Ernst Kunz’s “Introduction to Commutative Algebra and Algebraic Geometry” (I leave a screenshot of the exercise below), and I got stuck trying to prove the second part of item b).

Suppose I write $X := \overline{V^*} = X_1 \cup X_2$ with $X_1, X_2$ affine subvarieties of $X$.
Since $V \subset X$ is irreducible, we must have $V \subset X_1$ or $V \subset X_2$. One of the $X_i$’s will have to contain infinitely many points of some line contained in $V^*$, so the polynomials in its ideal cannot have constant terms. I have also thought of considering the decomposition into irreducible components, but I don’t see how that helps if $\mathbb{L}$ is not algebraically closed. Even if it is, I don’t know whether $\sqrt{\mathfrak{p}_1 \cdots \mathfrak{p}_m} = \mathfrak{p}_1 \cdots \mathfrak{p}_m$, nor whether any prime ideal $\mathfrak{p}$ with $I \subset \mathfrak{p} \subset \mathfrak{q}$ (where $\mathfrak{q}$ is a prime divisor of $I$) is also a prime divisor of $I$. In the algebraically closed case I would use these facts to deduce that the ideal of each irreducible component is homogeneous, and then any irreducible component containing $V$ would equal $X$.

My questions are:

  • If $\mathbb{L}$ is algebraically closed, do we have $\sqrt{\mathfrak{p}_1 \cdots \mathfrak{p}_m} = \mathfrak{p}_1 \cdots \mathfrak{p}_m$? (See the worked example after this list.)
  • Can anyone give me a hint for the case where $\mathbb{L}$ is not algebraically closed?
  • If $I$ is an ideal and $\mathfrak{q}$ a prime divisor of $I$, is any prime ideal $\mathfrak{p}$ with $I \subset \mathfrak{p} \subset \mathfrak{q}$ also a prime divisor of $I$?
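
On the first bullet, a worked example that may help (my addition; it suggests the answer is no in general, since a product of primes need not be radical): in $k[x,y]$ take $\mathfrak{p}_1 = (x)$ and $\mathfrak{p}_2 = (x,y)$; then
$$\mathfrak{p}_1\mathfrak{p}_2 = (x^2, xy), \qquad \sqrt{\mathfrak{p}_1\mathfrak{p}_2} = (x) = \mathfrak{p}_1 \cap \mathfrak{p}_2 \neq \mathfrak{p}_1\mathfrak{p}_2,$$
because $x^2 \in \mathfrak{p}_1\mathfrak{p}_2$ while $x \notin (x^2, xy)$.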

Thank you for all the help in advance 🙂

rt.representation theory – How to compute the characteristic variety of a perverse sheaf occurring in the Springer correspondence?

In particular, how does one do this for the first nontrivial case, where the representation of the Weyl group is the sign representation? As far as I know, one can probably use the Fourier transform and compute for the trivial representation instead. But is there a direct way to compute it that might work for all representations of $W$?

ag.algebraic geometry – Canonical lift of the deformation of an ordinary abelian variety

If $A/k$ is a principally polarised ordinary abelian variety ($k$ a perfect field of characteristic $p$; we may assume it is finite for simplicity), we have a canonical lift $\hat{A}/W(k)$.
Now if I take a first-order deformation $A_{\epsilon}/k[\epsilon]$, does there still exist a canonical lift of this deformation to a deformation of (the generic fiber of) $\hat{A}$?

By the Kodaira–Spencer mapping, deformations are essentially encoded by differentials on $A$, so the question boils down to whether differentials on $A$ lift canonically to differentials on $\hat{A}$. By Katz, Serre–Tate local moduli, Section 3, differentials on any lift $\tilde{A}$ correspond to points of $T_p(A^{\vee})(k)$, and his main Theorem 3.7.1 describes the compatibility of this identification with the Kodaira–Spencer map. Is there a way to use this to lift differentials canonically to the canonical lift?
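
For reference, a hedged recollection of the Serre–Tate picture this sits in (following Katz’s account; any imprecision is mine): for ordinary $A$ the formal deformation space is canonically a formal torus, roughly
$$\operatorname{Def}(A) \;\cong\; \operatorname{Hom}_{\mathbb{Z}_p}\!\big(T_pA(\bar k) \otimes_{\mathbb{Z}_p} T_pA^{\vee}(\bar k),\; \widehat{\mathbb{G}}_m\big),$$
with the canonical lift $\hat{A}$ sitting at the identity element. The group structure on this space looks like the natural source of distinguished lifts of a first-order deformation, i.e., of a $k[\epsilon]$-point near the identity, which is essentially what is being asked.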