## ag.algebraic geometry – Explicit Natural Correspondence between Cusps of X(N) and isomorphism classes of Level N structures on Tate(q^N)

In Katz’ paper Antwerp III, section 1.4 (Ka-14) one reads (we assume $$n \geq 3$$ is an integer):

“The scheme $$\overline{M}_n - M_n$$ over $$\mathbb{Z}[1/n]$$ is finite and étale, and over $$\mathbb{Z}[1/n,\zeta_n]$$, it is a disjoint union of sections, called the cusps of $$\overline{M}_n$$,”

I would be interested to see a detailed proof of the next part of that sentence, namely:

“which in a natural way are the set of isomorphism classes of level $$n$$ structures on the Tate curve $$\text{Tate}(q^n)$$ viewed over $$\mathbb{Z}((q)) \otimes_{\mathbb{Z}} \mathbb{Z}[1/n,\zeta_n]$$.”

What I did: I tried to extract the relevant information from Deligne-Rapoport and Katz-Mazur, but in each case, certainly due to a lack of understanding on my part, I was not able to establish this correspondence explicitly. I found the discussion of the formal completion at (the divisor of) the cusps well explained in both references (something which is also addressed in Katz’ paper Antwerp III on page Ka-14), but I couldn’t connect the dots for the natural correspondence above, and thus my question. Feel free to ask if you need more details.

In Deligne-Rapoport?

I first looked in Deligne-Rapoport (DR), which was in Antwerp II (and so the natural place to look for the arguments):
http://smtp.math.uni-bonn.de/ag/alggeom/preprints/Lesschemas.pdf

The motivating example on page DeRa-7 and the beginning of page 8 hints at that fact. But it seems that’s not the point of view (DR) take.

“In the text, we make this modular interpretation of the set of points at infinity of $$\mathcal{H}/\Gamma(n)$$ precise, turning it into a modular interpretation of the compactified projective curve $$\overline{\mathcal{H}/\Gamma(n)}$$ of $$\mathcal{H}/\Gamma(n)$$.”

Here $$\mathcal{H} = \{ z \in \mathbb{C} \mid \operatorname{Im}(z) > 0 \}$$ is the upper half-plane.

On page DeRa-10 they do say that $$M_n$$ can be defined as the normalization, in the field of functions of $$M_n^0[1/n]$$, of the projective $$j$$-line over $$\mathbb{Z}[\zeta_n]$$. (That’s what Katz and Mazur do in Chapter 8 of their book.) Among other things, (DR) say that they prove that there exists a finite family of $$\mathbb{Z}[\zeta_n]$$-points $$f_i : Spec(\mathbb{Z}[\zeta_n]) \to M_n$$ such that the sections $$f_i$$ are disjoint (incongruent modulo any prime ideal of $$\mathbb{Z}[\zeta_n]$$) and that $$M_n^0$$ is the complement in $$M_n$$ of the union of the “sections at infinity” $$f_i$$.

The Tate curve is only constructed in Chapter VII of (DR), and I don’t find it immediate to deduce from it the initial assertion by Katz in Antwerp III.

In Chapter VII (sections 1 and, mostly, 2 seem relevant to my question), DeRa-156, (1.16.4) gives me the description of the level $$r$$ structures of the Tate curve with $$r$$ edges over $$\mathbb{Z}((q^{1/r}))$$.

Moreover, $$\text{Tate}(q)$$ over $$\mathbb{Z}((q))$$ induces a morphism $$\tau: Spec(\mathbb{Z}((q))) \to \mathcal{M}_1$$ which identifies $$\mathbb{Z}((q))$$ with the formal completion of $$\mathcal{M}_1$$ along the section at infinity $$f_1$$ (Theorem 2.1).

The Néron $$n$$-gon $$C$$ over $$\mathbb{Z}[\zeta_n]$$, equipped with its structure of generalized elliptic curve and the natural isomorphism $$C[n] = \mu_n \times \mathbb{Z}/n\mathbb{Z}$$, defines a section at infinity $$f_n : Spec(\mathbb{Z}[\zeta_n]) \to \mathcal{M}_n$$. We also obtain an isomorphism between the $$n$$-torsion of the Tate curve with $$n$$ edges and $$\mu_n \times \mathbb{Z}/n\mathbb{Z}$$, and then we get a morphism $$Spec(\mathbb{Z}[\zeta_n]((q^{1/n}))) \to \mathcal{M}_n$$. This latter morphism identifies $$\mathbb{Z}[\zeta_n]((q^{1/n}))$$ with the formal completion of $$\mathcal{M}_n$$ along the section at infinity $$f_n$$.

Finally, Corollary 2.5 says that the completion of $$\mathcal{M}_n$$ along infinity is a disjoint sum of copies of $$Spec(\mathbb{Z}[\zeta_n]((q^{1/n})))$$ indexed by $$SL_2(\mathbb{Z}/n\mathbb{Z})/\pm U$$, where $$U$$ is the group of upper unipotent matrices.

It feels like the desired correspondence is there but I couldn’t extract it explicitly.

In Katz-Mazur?
I turned to the book of Katz and Mazur (see https://web.math.princeton.edu/~nmk/katz-mazur.djvu). Again, I feel I’m getting close, but I’m not sure how to tie up the loose ends.

The point of view in (KM) doesn’t deal (explicitly?) with stacks (as in (DR)). They consider the moduli problem (contravariant functor)

$$(\Gamma(N)) : \textbf{Ell} \to \textbf{Set}$$

which classifies elliptic curves (proper smooth curves $$\pi : E \to S$$ with geometrically connected fibers, all of genus one, given with a section $$0$$; here $$S$$ is any scheme) equipped with a $$\Gamma(N)$$-structure (KM 3.1, page 98).

This functor is relatively representable and flat over $$\textbf{Ell}$$ of constant rank $$\geq 1$$, and regular of dimension $$2$$. As a functor with source $$\textbf{Ell}/\mathbb{Z}[1/N]$$ it is étale on the source (First Main Theorem 5.1.1, page 129).

When $$N \geq 3$$, $$(\Gamma(N))$$ is in fact representable by some universal elliptic curve $$E_{\text{univ}}/Y(N)$$, where $$Y(N)$$ is a smooth affine curve (we have rigidity; see (KM) Cor. 2.7.2, 4.7.0 and 4.7.1).

Following (KM 8.6.3 and 8.6.8) we normalize $$Y(N)$$ near infinity to obtain $$X(N)$$ (we obtain a smooth proper curve over $$\mathbb{Z}[1/N]$$ which is the normalization of the projective $$j$$-line in $$Y(N)$$).

The Tate curve $$\text{Tate}(q)$$ itself represents an appropriate moduli problem $$\mathcal{S}$$. Applying Corollary 8.4.4 (p. 235) to this and to the moduli problem $$(\Gamma(N))$$ over an excellent noetherian regular ring $$R$$, we obtain an isomorphism of $$R((q))$$-schemes

$$\left( (\Gamma(N))_{\text{Tate}(q)/R((q))} \right)/\pm 1 \xrightarrow{\simeq} Y(N)_{R((q))}$$

where $$\text{Aut}(\text{Tate}(q)/R((q))) = \pm 1$$ (see Proposition 8.11.7).

Moreover, the formal completion of $$X(N)$$ along the (divisor of) cusps $$X(N) - Y(N)$$, which is a finite $$R((q))$$-scheme, is the normalization of $$R((q))$$ in the finite normal $$R((q))$$-scheme $$\left( (\Gamma(N))_{\text{Tate}(q)/R((q))} \right)/\pm 1$$.

Finally, we have

Theorem 10.8.2

There is a canonical isomorphism of $$\mathbb{Z}[\zeta_N]((q))$$-schemes

$$(\Gamma(N))_{\text{Tate}(q)/\mathbb{Z}[\zeta_N]((q))} \simeq \coprod_{\text{Hom Surj}((\mathbb{Z}/N\mathbb{Z})^2,\mathbb{Z}/N\mathbb{Z})} Spec(\mathbb{Z}[\zeta_N]((q^{1/N})))$$

and

Theorem 10.9.1

(1) $$\text{Cusps}(X(N))$$ is the disjoint union of $$|\text{Hom Surj}((\mathbb{Z}/N\mathbb{Z})^2,\mathbb{Z}/N\mathbb{Z})|$$ sections of $$X(N)$$ over $$\mathbb{Z}[\zeta_N]$$.

(2) There exists an open neighborhood $$V$$ of the cusps, $$\text{Cusps}((\Gamma(N))) \subset V \subset X(N)$$, which is smooth over $$\mathbb{Z}[\zeta_N]$$.

(3) The formal completion of $$X(N)$$ along its cusps is the $$\mathbb{Z}[\zeta_N]$$-formal scheme

$$\coprod_{\text{Hom Surj}((\mathbb{Z}/N\mathbb{Z})^2,\mathbb{Z}/N\mathbb{Z})/\pm 1} Spf\left( \mathbb{Z}[\zeta_N]((q^{1/N})) \right)$$.

## data structures – How do we pick and switch hash functions from a universal family?

When we have a universal family of hash functions, it gives us a few useful mathematical guarantees. But if we pick a specific function from the family and use it all the time, it’s effectively as if the family doesn’t exist and we only have this one function.

Therefore, the only logical thing seems to be switching a hash function every once in a while (picking a new function from our family). If indeed this is what we should do, how often should we switch functions? How should this time period be selected and is there an optimal time period for different tasks?

Also, suppose we have two hash functions $$h_1, h_2$$, and we store values in an array according to $$h_1$$. After a while, we switch our hash function to $$h_2$$. Do we have to re-hash all of our values according to $$h_2$$? This seems rather wasteful.
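For concreteness, here is a minimal sketch (Python; the specific family, table size and keys are illustrative assumptions, not from the question) of picking a member of the Carter–Wegman family $$h_{a,b}(x) = ((ax+b) \bmod p) \bmod m$$ and of the full re-hash that switching functions forces:

```python
import random

# Carter–Wegman universal family: h_{a,b}(x) = ((a*x + b) mod p) mod m,
# with p a prime larger than any key we will hash.
P = 2_147_483_647  # the Mersenne prime 2^31 - 1

def random_hash(m):
    """Pick a fresh member of the family: this is the 'switch' step."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

def rehash(table, h_new, m):
    """Switching functions forces a full re-hash: every stored key must
    be reinserted at its bucket under the new function."""
    new_table = [[] for _ in range(m)]
    for bucket in table:
        for key in bucket:
            new_table[h_new(key)].append(key)
    return new_table

m = 16
h1 = random_hash(m)
table = [[] for _ in range(m)]
for key in [3, 141, 592, 653]:
    table[h1(key)].append(key)

h2 = random_hash(m)           # the switch
table = rehash(table, h2, m)  # yes: all values get re-hashed under h2
```

The re-hash is linear in the number of stored keys, which is why switching is usually tied to an event (e.g. too many collisions observed, or a table resize that forces a full pass anyway) rather than to a fixed time period.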

## data structures – Proving Quicksort is \$O(n^2)\$

So I’m trying to figure out why the worst case of Quicksort is $$O(n^2)$$.

I know this is a very well known problem, but the funny thing is that wherever I look (even Wikipedia), I get the following explanation: “The worst case is the most unbalanced case, where the problem splits into a problem of size $$n-1$$ and a problem of size $$0$$ (i.e. when the array is already sorted).”

Then they use the master theorem and find it is $$O(n^2)$$.

Marvelous. So simple. But wait.

Do we know upfront that the worst case is $$O(n^2)$$? No, that’s what we need to prove.

Why does “the most unbalanced case” mean it is the worst case? Is there any theorem that states this?

So what is actually a coherent proof that Quicksort is $$O(n^2)$$?

Or in other words, what is the proof for the missing part?

We can derive that the running time can be described as $$T(n) = T(n_1) + T(n_2) + O(n)$$ where $$n_1 + n_2 + 1 = n$$. How do we prove that $$T(n)$$ is largest when $$n_1 = 0$$ and $$n_2 = n-1$$?

I already know this is the most unbalanced case. Why is it actually the worst case?
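The gap can be closed without the master theorem: an induction gives the upper bound for every split strategy, and a convexity observation shows the extreme split dominates. A sketch (my own write-up, so treat it as an outline and check the constants):

```latex
% Upper bound, for ANY split strategy, by strong induction.
% Hypothesis: T(m) \le c m^2 for all m < n (c to be chosen).
\begin{aligned}
T(n) &\le T(n_1) + T(n_2) + a n      && (n_1 + n_2 = n - 1)\\
     &\le c\,n_1^2 + c\,n_2^2 + a n  && \text{(induction hypothesis)}\\
     &\le c\,(n_1 + n_2)^2 + a n = c\,(n-1)^2 + a n \\
     &\le c\,n^2                     && \text{for } c \ge a,
\end{aligned}
% since c(n-1)^2 + an \le cn^2 \iff an \le c(2n-1), true when c \ge a.
% Hence T(n) = O(n^2) with no assumption about which split is worst.
%
% Why the extreme split is worst: for fixed s = n_1 + n_2, the map
% n_1 \mapsto n_1^2 + (s - n_1)^2 is convex, so its maximum on [0,s]
% is attained at an endpoint, i.e. n_1 \in \{0, s\}.
%
% Matching lower bound: an already-sorted input forces the split
% (0, n-1) at every level, so
% T(n) = T(n-1) + \Theta(n) = \Theta(1 + 2 + \cdots + n) = \Theta(n^2).
```

So "most unbalanced = worst" is not an axiom: the upper bound holds for every split, and the unbalanced split happens to achieve it, which together pin down $$\Theta(n^2)$$.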

## gr.group theory – How to classify rings by combinatorial structures?

There are many ways to encode information about algebraic structures such as groups, rings, etc. in combinatorial form. For example the Cayley graph of a group with a subset of generators, or the various graphs associated to rings, as can be found in, e.g., the answers to “Why do we associate a graph to a ring?”. So I was wondering about the converse questions, which for groups and rings take the form:

First question(s): Given a graph $$X$$, is there a way to discover, intrinsically, constructively and algorithmically, whether it is the Cayley graph of a group? How does one recover the group structure from the graph? Is there a unique group $$G$$ such that $$X = CG(G)$$, the Cayley graph of $$G$$? How does one find such a group $$G$$? Which graphs are the Cayley graphs of some group?

Second question: Is there a combinatorial structure (such as a system of graphs) associated to rings (or to an algebra, or to a module over an algebra) from which one can recover the full ring (or module), in a similar manner as in the first question? Preferably in an intrinsic, constructive and algorithmic way. Assuming one could find such a combinatorial category, how does one find out which objects in it are the objects associated to rings (or modules)?

I would also be interested in similar questions for other well-known algebraic structures: some kind of combinatorial encoding of these structures, in the sense that one can define combinatorial structures out of algebraic structures, intrinsically, constructively and algorithmically, and then recover the original algebraic structure from them in the same intrinsic, constructive and algorithmic way.

For groups, there is a positive answer given by Sabidussi’s theorem, as mentioned in https://en.wikipedia.org/wiki/Cayley_graph#Characterization, which characterizes the graphs that are Cayley graphs of groups. This theorem would suffice, in terms of the intrinsic, constructive and algorithmic profile of the proof, for question 1.

I would be satisfied with partial answers.

## data structures – Time complexity of algorithms

The first and the last statements are correct, while the second one is incorrect.

## Statement 1

Denote by $$T_{min}$$ the actual running time of the algorithm $$A$$ in the best case, and by $$T_{max}$$ in the worst case. By how we chose $$T_{min}$$ and $$T_{max}$$, it follows that $$T_{min}\le T_{max}$$.

From our assumptions, $$T_{min}=\Omega(g(n))\implies T_{min}\ge c_1g(n)$$. Also from our assumptions, $$T_{max}=O(f(n))\implies T_{max}\le c_2f(n)$$.

Combining them together we get:

$$c_1g(n)\le T_{min} \le T_{max} \le c_2f(n)$$, which means that $$g(n)\le \frac{c_2}{c_1}f(n)\implies g(n)=O(f(n))$$.

## Statement 2

Consider the following algorithm:

```
if lst[0] != 0:
    for x in lst:
        print(x)
```

And consider the inputs $$I_1:=(0,1,2,3,\ldots,n)$$ and $$I_2:=(1,2,3,\ldots,n+1)$$.
Clearly, the algorithm takes $$O(1)$$ time on input $$I_1$$, but $$\Omega(n)$$ time on input $$I_2$$. Obviously, $$n\neq O(1)$$, and thus the statement is incorrect.
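A runnable version of this counterexample (Python; I replaced the `print` with a step counter, which is my addition, so the exact counts are illustrative):

```python
def algorithm(lst):
    """The snippet above, instrumented with a step counter."""
    steps = 1                 # the guard comparison
    if lst[0] != 0:
        for x in lst:
            steps += 1        # one step per element visited
    return steps

n = 1000
I1 = list(range(0, n + 1))    # (0, 1, ..., n): guard fails immediately
I2 = list(range(1, n + 2))    # (1, 2, ..., n+1): full scan of n+1 items
```

On `I1` the counter stays constant no matter how large `n` is, while on `I2` it grows linearly in `n`, which is exactly the $$O(1)$$ vs $$\Omega(n)$$ gap claimed above.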

## Statement 3

Repeat the proof of statement 1. Note that also $$T_{avg}\le T_{max}$$, and thus the proof still holds.

## data structures – Given a binary min-heap, find the \$k\$-th element


## data structures – Reverse An Array: what’s wrong with this type of code?


## data structures – Is there a relationship between visitor pattern and DeMorgan’s Law?

Visitor Pattern enables mimicking sum types with product types. Where does the “sum”-iness come from?

For example, in OCaml one could define `type my_bool = True | False`

Or encode with visitor pattern:

```
type 'a bool_visitor = {
  case_true: unit -> 'a;
  case_false: unit -> 'a;
}

let t visitor = visitor.case_true ()
let f visitor = visitor.case_false ()

let visitor = {
  case_true = (fun () -> "true");
  case_false = (fun () -> "false");
}

let () = print_endline (t visitor) (* prints "true" *)
```

What’s the best way of explaining the sum-type-to-visitor-pattern transformation? Is it:

• Of course + and * are interdefinable, what did I expect?
• Or is it that the left side of `->` is the “negative” position and that this leads to a DeMorgan-law-like flip of sum and product?

I also wonder if this question is related to how one can use universally-quantified types to mimic existential types.
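The flip can also be seen concretely outside OCaml. Below is an illustrative Python rendering of the same encoding (all names are mine, not from the question): a value of the sum type becomes a function that takes a record (product) of handlers and invokes exactly one of them.

```python
# Visitor encoding of my_bool = True | False. The "sum"-iness lives in
# the choice of which handler fires: each value picks exactly one field
# of the product of continuations it is given.

def true_(visitor):        # plays the role of the constructor True
    return visitor["case_true"]()

def false_(visitor):       # plays the role of the constructor False
    return visitor["case_false"]()

# One concrete visitor: a product of handlers, one field per case.
show = {
    "case_true":  lambda: "true",
    "case_false": lambda: "false",
}
```

Each constructor consumes a product of continuations, so a sum on the "data" side becomes a product on the "consumer" side, which is the sum-to-product transformation the question is probing.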

## data structures – Should Binary Expressions make use of built in functions in interpreters?

I have been developing a statically typed language with support for autocasting. The language has a lot of built-in functions with good support for overloading. I’ve noticed that this built-in function system could be used to evaluate binary expressions by implementing operator built-in functions with different overloads, e.g.

```
int operator+(int, int)
float operator+(float, float)
int operator-(int, int)
float operator-(float, float)
```

It feels like I’m doubling up my efforts to handle operator evaluation, when realistically it could piggyback on the built-in method system, which has type checking and links the node (my interpreter uses the visitor pattern) to the actual function that retrieves the parameters and does some computation with them – effectively what the old operator system already does to evaluate an expression.

Does this approach sound sensible? In theory it sounds good to me, but I don’t know if I’m overlooking something. I know that all languages must have an intrinsic implementation for operators, but I don’t know how they link it to the actual AST.

Implementation

If this approach is indeed a good approach, at what level should I implement the built-in function support for a binary expression? For example, I could adjust the parser so that when it receives `2+2` it generates a methodCall node with parameters `(2, 2)` and the name `operator+`, which would effectively treat all operators as calls to methods from the get-go. Or I could stick with a binaryExpression node and link it to the correct function during semantic analysis.
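As a sanity check of the idea, here is a minimal sketch (Python; all names such as `BUILTINS` and `resolve` are hypothetical, not from the question) of binary expressions resolving through the same overload table as built-in calls, including one autocast step:

```python
# Overload table keyed by (name, operand types), exactly as the
# operator built-ins above suggest.
BUILTINS = {
    ("operator+", int, int): lambda a, b: a + b,
    ("operator+", float, float): lambda a, b: a + b,
    ("operator-", int, int): lambda a, b: a - b,
    ("operator-", float, float): lambda a, b: a - b,
}

def resolve(name, left, right):
    """Overload resolution with a single autocast step (int -> float),
    mirroring the autocasting described in the question."""
    key = (name, type(left), type(right))
    if key in BUILTINS:
        return BUILTINS[key](left, right)
    # autocast: promote both operands to float and retry
    key = (name, float, float)
    if key in BUILTINS:
        return BUILTINS[key](float(left), float(right))
    raise TypeError(f"no overload for {name}({type(left)}, {type(right)})")

# A binaryExpression node can now lower to a plain methodCall:
result = resolve("operator+", 2, 2)    # methodCall("operator+", 2, 2)
mixed = resolve("operator+", 2, 3.5)   # autocast kicks in here
```

Whether this lowering happens in the parser or during semantic analysis, the evaluation path is the same table lookup, which is what makes the piggyback attractive.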

## data structures – Sort a d-sorted array

Given an array $$A(1,\ldots,n)$$, the array will be called d-sorted if every key in the array is located at a distance not greater than a constant $$d$$ from its location in the sorted array $$A$$.

I need to write algorithms that get a d-sorted array of length $$n$$ and sort the array in a runtime of:

1. $$\Theta(n)$$ if $$d$$ is a constant
2. $$\Theta(n\log(\log(n)))$$ if $$d$$ is $$\Theta(\log(n))$$

My Attempts:
I wrote the following pseudo-code:

```
Sort_d_array(A, d)
  BUILD-MIN-HEAP(H, A[1..d])       // heap over the first d elements
  for i <- 1 to n
    if i + d <= n
      HEAP-INSERT(H, A[i + d])     // bring the next candidate in first
    A[i] <- EXTRACT-MIN(H)         // smallest remaining candidate
```

But in terms of runtime, all I get is $$O(n\log(d))$$.

MY METHOD: I initialized `i <- 1` and built a `min-heap` containing the first d elements. At each step, if `i+d<=n`, I first insert `A(i+d)` into the min-heap, and then use `EXTRACT-MIN` to place the minimum at index i of the array.
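For reference, the sliding-window idea above can be written as a short runnable sketch (Python with the standard `heapq`; this mirrors the pseudo-code rather than introducing a new algorithm):

```python
import heapq

def sort_d_sorted(A, d):
    """Sliding-window heap: since no key sits farther than d from its
    sorted position, the minimum of the candidates A[i..i+d] not yet
    output is always the next element of the sorted order."""
    n = len(A)
    heap = A[:min(d, n)]          # the first d elements
    heapq.heapify(heap)           # O(d)
    out = []
    for i in range(n):
        if i + d < n:             # 0-indexed analogue of i + d <= n
            heapq.heappush(heap, A[i + d])   # candidate enters first
        out.append(heapq.heappop(heap))      # O(log d) per step
    return out
```

Heapify costs $$O(d)$$ and each of the $$n$$ steps costs $$O(\log d)$$, so the total is $$O(n\log d)$$: that is $$\Theta(n)$$ for constant $$d$$ and $$\Theta(n\log\log n)$$ for $$d = \Theta(\log n)$$, matching the two required bounds.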

Any help?