gr.group theory – Upper-triangular matrices as union of centralizers of cyclic elements

Let $p$ be a prime number and $G=GL_n(\mathbb{Z}/p\mathbb{Z})$ with $n\leq p$. Consider the set $U$ of upper-triangular matrices of $G$ having all diagonal entries equal to $1$. The cardinality of $U$ is $p^{\frac{n(n-1)}{2}}$, and $U$ is a subgroup of $G$; in particular, $U$ is a Sylow $p$-subgroup of $G$. Recall that an element $M$ of $G$ is said to be a cyclic matrix if the characteristic polynomial of $M$ is equal to its minimal polynomial. Let $C$ be the set of cyclic matrices of $U$.

Question: Can $U$ be written as $U=\bigcup_{M\in C}C_{U}(M)$?
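For what it's worth, the smallest case beyond $n=2$ can be probed by brute force. The sketch below (plain Python; all helper names are mine, not from the question) enumerates $U$ for $n=p=3$, detects cyclic matrices via the cyclic-vector criterion (M is cyclic iff some $v$ makes $v, Mv, M^2v$ a basis), and tests whether the centralizers of the cyclic elements cover $U$. It checks one case only and settles nothing in general.

```python
# Brute-force probe of the question for the small case n = 3, p = 3 (so n <= p).
# All helper names are ad hoc; this checks one case only and proves nothing in general.
from itertools import product

p, n = 3, 3

def mat_mul(A, B):
    """Multiply two n x n matrices over Z/pZ (matrices are tuples of row tuples)."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def mat_vec(A, v):
    return tuple(sum(A[i][k] * v[k] for k in range(n)) % p for i in range(n))

def det3(c1, c2, c3):
    """Determinant mod p of the 3 x 3 matrix with columns c1, c2, c3."""
    (a, d, g), (b, e, h), (c, f, i) = c1, c2, c3
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

# U: upper-triangular matrices with 1s on the diagonal; |U| = p^(n(n-1)/2) = 27.
U = [((1, x, y), (0, 1, z), (0, 0, 1)) for x, y, z in product(range(p), repeat=3)]

def is_cyclic(M):
    # M is cyclic iff it admits a cyclic vector v, i.e. v, Mv, M^2 v form a basis.
    for v in product(range(p), repeat=n):
        if any(v):
            w = mat_vec(M, v)
            if det3(v, w, mat_vec(M, w)) != 0:
                return True
    return False

C = [M for M in U if is_cyclic(M)]
# Union of the centralizers C_U(M) over cyclic M.
covered = {A for A in U if any(mat_mul(A, M) == mat_mul(M, A) for M in C)}
print(len(U), len(C), len(covered), covered == set(U))
```

Running it reports how much of $U$ the union covers in this one case; larger $n, p$ need only a generic determinant in place of `det3`.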

Any help would be appreciated so much. Thank you all.

complexity theory – How to prove NP-completeness of the longest path between two vertices using the NP-hard Hamiltonian path problem

I have this question: given an undirected graph G(V, E) (where V = set of vertices, E = set of edges), consider the longest simple path between two vertices s and t:

LPATH = {⟨G,s,t,k⟩ | there is in G a simple path of length at least k from s to t}

A simple path is a path without any repeated vertex, i.e. every vertex can be visited only once. The length of the path is the number of edges it is composed of.

So how can I show that LPATH is NP-complete by a reduction from the NP-hard Hamiltonian path problem?
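The key observation is that a simple path in G can never have more than |V| − 1 edges, and it reaches that length exactly when it is Hamiltonian; together with LPATH ∈ NP (the path itself is a certificate), that gives NP-completeness. A hedged sketch of the reduction (the function names and the toy graph are mine), with an exponential brute-force decider used only to sanity-check it:

```python
# Sketch of the standard reduction: G has a Hamiltonian path from s to t
# iff G has a simple path of length at least |V| - 1 from s to t, since a
# simple path can never use more than |V| - 1 edges. Names here are ad hoc.
from itertools import permutations

def reduce_hampath_to_lpath(G, s, t):
    """Map a HAMPATH instance to an LPATH instance (polynomial time)."""
    return (G, s, t, len(G) - 1)

def lpath_brute_force(G, s, t, k):
    """Exponential-time LPATH decider, used only to sanity-check the reduction."""
    inner = [v for v in G if v not in (s, t)]
    for r in range(len(inner) + 1):
        for mid in permutations(inner, r):
            path = (s,) + mid + (t,)
            if len(path) - 1 >= k and all(b in G[a] for a, b in zip(path, path[1:])):
                return True
    return False

# 4-cycle a-b-c-d-a: it has a Hamiltonian path from a to b (a-d-c-b),
# but none from a to c (every a-to-c simple path skips a vertex).
G = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
yes = lpath_brute_force(*reduce_hampath_to_lpath(G, 'a', 'b'))
no = lpath_brute_force(*reduce_hampath_to_lpath(G, 'a', 'c'))
print(yes, no)
```

The reduction is trivially polynomial, so the real content of the proof is arguing both directions of the "iff" in the first comment.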

database theory – Association Tables

I can use an association (or ‘linking’) table to model a many-to-many relationship, e.g. Student -> Enrolment <- Course, where an enrolment describes an essential relationship between students and courses.

I can also have Course -> Department and Course -> Language, where language and department are both attributes of a course, but there’s no special relationship between them.

These two scenarios look similar, but I wouldn’t call Course an association table… is there another name for it? Is there a logical way to distinguish between the two cases?

gr.group theory – How to identify the two copies of $D_{24}$ in the homomorphisms of the 2 musical actions?

Let $S$ be the set of minor and major triads. Two sets of actions are defined on the set:
1) Musical transposition and inversion
2) The P, L, R actions:
$P(C\text{ major}) = c\text{ minor},$
$L(C\text{ major}) = e\text{ minor},$
$R(C\text{ major}) = a\text{ minor}.$

I already know that each action can be described as a homomorphism from our group into $\mathrm{Sym}(S)$ (the symmetric group $S_{24}$, since $|S| = 24$). I just don’t really know how to identify these ‘distinguished copies’.

Apparently, each of these homomorphisms (of actions 1 and 2) is an embedding, so that we have two distinguished copies, $H_1$ and $H_2$, of the dihedral group of order 24 in $\mathrm{Sym}(S)$.
This is the duality in music described by David Lewin.

“The two group actions are dual in the sense that each of these subgroups $H_1$ and $H_2$ of $\mathrm{Sym}(S)$ is the centralizer of the other!”

These notions are defined in this paper: https://www.maa.org/sites/default/files/pdf/upload_library/22/Hasse/Crans2011.pdf
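One concrete way to see the two copies is to write both actions down as explicit permutations of the 24 triads and generate the subgroups they span. The sketch below encodes a triad as (quality, root) with 0 = major and 1 = minor; the formulas for $T_n$, $I_n$, P, L, R are my own transcription of the conventions in the linked paper, so treat them as an assumption.

```python
# Numerical sanity check of the duality: generate the T/I group and the PLR
# group inside Sym(S) for S = the 24 triads, and verify that they commute.
from itertools import product

TRIADS = [(q, r) for q in (0, 1) for r in range(12)]  # 0 = major, 1 = minor

def T(n):  # transposition by n semitones
    return {(q, r): (q, (r + n) % 12) for q, r in TRIADS}

def I(n):  # inversion pitch x -> n - x, written on (quality, root) pairs
    return {(q, r): (1 - q, (n - r - 7) % 12) for q, r in TRIADS}

# P, L, R as permutations of TRIADS, e.g. R(C major) = a minor.
P = {(q, r): (1 - q, r) for q, r in TRIADS}
L = {(q, r): (1 - q, (r + 4) % 12) if q == 0 else (1 - q, (r - 4) % 12)
     for q, r in TRIADS}
R = {(q, r): (1 - q, (r + 9) % 12) if q == 0 else (1 - q, (r - 9) % 12)
     for q, r in TRIADS}

def compose(f, g):  # f after g
    return {x: f[g[x]] for x in TRIADS}

def generate(gens):
    """Closure of a generating set of permutations (dicts) inside Sym(TRIADS)."""
    identity = {x: x for x in TRIADS}
    seen = {tuple(sorted(identity.items()))}
    elements, frontier = [identity], [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(h, g)
            key = tuple(sorted(gh.items()))
            if key not in seen:
                seen.add(key)
                elements.append(gh)
                frontier.append(gh)
    return elements

H1 = generate([T(1), I(0)])   # the T/I group
H2 = generate([P, L, R])      # the PLR group
commute = all(compose(a, b) == compose(b, a) for a in H1 for b in H2)
print(len(H1), len(H2), commute)
```

Both subgroups come out with 24 elements and commute elementwise; since each acts simply transitively on the 24 triads, elementwise commuting already forces each to be the full centralizer of the other.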

nt.number theory – Given a lattice in $\mathbb{Z}^n$, what can be said about its ‘transpose’ lattice?

I apologize if this notion is well-known, but I couldn’t find anything useful and I am not sure what key words to look for.

Suppose we have a lattice $\Lambda \subset \mathbb{Z}^n$, given in the form

$$\displaystyle \Lambda = \left\{M \mathbf{u} : \mathbf{u} \in \mathbb{Z}^n \right\}$$

for some matrix $M$ with integer entries and non-zero determinant. By ‘transpose’ lattice I mean the corresponding lattice given by

$$\displaystyle \Lambda^T = \left\{M^T \mathbf{u} : \mathbf{u} \in \mathbb{Z}^n \right\}.$$

Is there a name for $\Lambda^T$? What properties can be deduced about $\Lambda^T$ given $\Lambda$?

For example, it is clear that $\det \Lambda = \det M = \det M^T = \det \Lambda^T$.
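A tiny numerical experiment (the matrix is an arbitrary example of mine) illustrates that the determinants agree while the lattices themselves generally differ:

```python
# Example: Lambda = M Z^2 and Lambda^T = M^T Z^2 share |det| = 3,
# yet contain different points. The matrix M is an arbitrary choice.
from itertools import product

M = ((1, 2),
     (0, 3))
MT = tuple(zip(*M))  # transpose

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def points(A, box=6):
    """Lattice points A*u for u ranging over a small box (enough for the check)."""
    return {(A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y)
            for x, y in product(range(-box, box + 1), repeat=2)}

lam, lam_t = points(M), points(MT)
same_det = abs(det2(M)) == abs(det2(MT))
# (2, 3) = M * (0, 1) lies in Lambda, but M^T u = (2, 3) would force u = (2, -1/3).
print(same_det, (2, 3) in lam, (2, 3) in lam_t)
```

So equality of determinants (covolumes) is about all one gets for free; the point sets can differ already in dimension 2.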

ct.category theory – Additivization of functors in an abelian monoidal category (crosspost from MSE)

I posted a question a week ago on math.stackexchange. As is sometimes the case, I got no answers. Considering that the question is about a research article, I hope that it might be relevant for MathOverflow.

Here is the original question:


I’m having trouble with the proof of Lemma 2.9 in “Cohomology of Monoids in Monoidal Categories” by Baues, Jibladze, and Tonks, and I was wondering if someone could clarify a detail. I’ll try to summarize the context of the lemma.

Context

Let $(\Bbb A,\circ,I)$ be a monoidal category where $\Bbb A$ is abelian: in particular, $\circ$ is not necessarily additive in both arguments. Suppose that $\circ$ is left distributive, i.e. the natural transformation
$$(X_1\circ Y)\oplus(X_2\circ Y)\rightarrow (X_1\oplus X_2)\circ Y$$
is an isomorphism. For example, $\Bbb A$ could be the category of linear operads (this is a motivating example of the article). Given an endofunctor $F$ of $\Bbb A$, we define its cross-effect
$$F(A|B):=\ker(F(A\oplus B)\rightarrow F(A)\oplus F(B)).$$
The additivization of $F$ is then the functor $F^\text{add}$ defined by
$$F^\text{add}(A):=\text{coker}\left(F(A|A)\rightarrow F(A\oplus A)\xrightarrow{F(+)}F(A)\right).$$
The idea is that $F^\text{add}$ is the additive part of $F$.

Let $(M,\mu,\eta)$ be an internal monoid in $\Bbb A$, and let $L_0$ be the endofunctor of $\Bbb A$ defined by $L_0(A)=M\circ(M\oplus A)$. Let $L:=L_0^\text{add}$ be the additivization of $L_0$. (In the case of operads, represented as planar trees, I see $L(A)$ as the space of trees whose nodes are all labeled by elements of $M$ except for one leaf, which is labeled by an element of $A$.)

Suppose now that $\Bbb A$ is right compatible with cokernels, i.e. that

for each $A\in\Bbb A$, the additive functor $A\circ-:\Bbb A\rightarrow\Bbb A$ given by $B\mapsto A\circ B$ preserves cokernels.

Then, in the proof of Lemma 2.9, the authors claim the following:

By the assumption that $\Bbb A$ is right compatible with cokernels it follows that $L(L(X))$ is the additivisation of $L_0(L_0(X))$ in $X$ (…).

Remarks

If anyone could provide an explanation of the last claim, I would be very grateful. However, my inability to understand how to show this might be related to two other issues I have:

1) Elsewhere in the literature, cross-effects are only defined when $F$ is reduced, i.e. $F(0)=0$ (e.g. here, section 2). But we can always reduce a functor by taking the cokernel of $F(0)\rightarrow F(X)$, so I don’t think it’s much of a problem.

2) In the first quote, the authors state that $A\circ -$ is additive, which seems to be quite the opposite of the initial hypothesis that $\circ$ be left distributive, and not necessarily right distributive. How does one resolve this apparent conflict?

nt.number theory – On the $\mathsf{LCM}$ of a set of integers defined by moduli of powers

For integers $a,b,t$ define $$\mathcal R_t(a,b)=\{q\in\mathbb Z\cap(1,\min(a^t,b^t)): a^t\equiv b^t\bmod q\}$$ and $\mathsf{LCM}(\mathcal R_t(a,b))$ to be the $\mathsf{LCM}$ of all entries in $\mathcal R_t(a,b)$.

Similar reasoning to On $\mathsf{LCM}$ of a set of integers gives $$\mathsf{LCM}(\mathcal R_t(a,b))\leq\mathsf{LCM}(T_t(a,b))$$ where $T_t(a,b)$ is defined as $$T_t(a,b)=\Big\{q\in\mathbb Z\cap(1,\infty):q\,\Big|\,\Big((a-b)\sum_{i=0}^{t-1}a^{t-1-i}b^i\Big)\Big\}.$$

So $\mathsf{LCM}(\mathcal R_t(a,b))=\mathsf{LCM}(T_t(a,b))$ holds.

If $a,b\in\big(\frac r2,r\big)$ are coprime, then what is the probability that $\mathsf{LCM}(\mathcal R_t(a,b))<\beta r^{t-\alpha}$ for some $\alpha\in(0,t)$ and $\beta>0$?
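Since $(a-b)\sum_{i=0}^{t-1} a^{t-1-i}b^i$ telescopes to $a^t-b^t$, the set $T_t(a,b)$ is just the set of divisors $q>1$ of $a^t-b^t$, which makes both sides easy to compute for small coprime $a,b\in(\frac r2,r)$. A brute-force sketch (helper names are mine):

```python
# Brute-force computation of LCM(R_t(a,b)) and LCM(T_t(a,b)) for small cases,
# checking the containment R_t ⊆ T_t and the LCM inequality quoted above.
from math import gcd

def lcm_all(xs):
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

def R(a, b, t):
    bound = min(a**t, b**t)
    return [q for q in range(2, bound) if (a**t - b**t) % q == 0]

def T(a, b, t):
    # (a - b) * sum_i a^(t-1-i) b^i telescopes to a^t - b^t.
    D = abs((a - b) * sum(a**(t - 1 - i) * b**i for i in range(t)))
    return [q for q in range(2, D + 1) if D % q == 0]

results = []
for a, b, t in [(7, 5, 2), (9, 7, 3), (11, 8, 2)]:
    r, s = R(a, b, t), T(a, b, t)
    assert set(r) <= set(s)            # R_t is a subset of T_t
    assert lcm_all(r) <= lcm_all(s)    # hence the LCM inequality
    results.append((a, b, t, lcm_all(r), lcm_all(s)))
print(results)
```

For these coprime pairs in $(\frac r2, r)$ the two LCMs agree; the brute force is only feasible for small $a^t - b^t$, of course.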

nt.number theory – Discriminants of Gleason’s period-$n$ polynomials for the Mandelbrot set

Gleason’s polynomials are the sequence of monic integer polynomials defined recursively by
$$
\prod_{d \mid n} G_d(c) = (((c^2+c)^2+c)^2+\cdots+c)^2+c \quad \quad \quad (\textrm{$n$ iterates}),
$$

for $n=1,2, \ldots$. Thus they start out like:
$$
G_1 = c, \quad G_2 = c+1, \quad G_3 = c^3 + 2c^2 + c + 1, \quad G_4 = c^6 + 3c^5 + 3c^4 + 3c^3 + 2c^2 + 1
$$

They give the period-$n$ centers for the hyperbolic components of the Mandelbrot set in complex dynamics. In many ways they resemble the cyclotomic polynomials, which would result if we had $c^n-1$ on the right-hand side of the recursive definition; or the dynatomic polynomials in the dynamical plane (this is their version in the parameter plane parametrizing the quadratic iterations $z^2+c$).

For example, $\mathrm{Res}(G_n,G_m) = \pm 1$ for any pair $n \neq m$, just as for the cyclotomic polynomials. This is proved for instance in Corollary 4.8 of this paper by Hutz and Towsley. (Courtesy of Matt Baker for this reference. An aside: What are the exact signs?)

My question is about the discriminants of these polynomials:

What are the lower and upper growth rates of $\delta_n := \log{|\mathrm{Disc}(G_n)|}$? Does $\frac{1}{n}\sum_{d \mid n} \delta_d \asymp \log{n}$?

(The latter is $\sim \log{n}$ for the cyclotomics, with $\delta_n \sim \phi(n)\log{n}$, and the upper and lower growth rates for the individual cyclotomic discriminants are $n\log{n}$ and $e^{-\gamma}n\frac{\log{n}}{\log{\log{n}}}$, by Mertens’s theorem.)

I was wondering in what ways the Gleason discriminants would behave similarly to the cyclotomic discriminants (size-wise?), and in what ways they are markedly different. One marked difference is that the prime factors of the $\mathrm{Disc}(G_n)$ are quite unpredictable; a casual look at the first few prime factorizations of the discriminants of $G_3, G_4, G_5$ and $G_6$ reveals
$$
23 \times 2551, \quad 13 \times 24554691821639909, \quad 13^2 \times 949818439 \times 6488190752068386528993226361, \quad 8291 \times 9137 \times 420221 \times 189946395389 \times 4813162343551332730513 \times 2837919018511214750008829 \times 1858730157152877176856713108209153714699601
$$

The one thing that is easy and very useful to see is that these discriminants are odd: this is how Gleason established that the complex roots of the polynomials are all distinct (no multiplicities), which is non-obvious from the definition. We certainly have the trivial lower $\delta_n \gg n$ (from Minkowski) and upper $\delta_n = o(n^2)$ (from noting that the Mandelbrot set has logarithmic capacity $1$) estimates, but both of these are on purely general grounds, and they leave out a large margin.
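For small $n$ all of this is easy to experiment with. The sketch below (using sympy; structure mirrors the recursive definition above) computes $G_1,\dots,G_5$ by exact division of the iterates and checks the two quoted facts: the discriminants are odd and the pairwise resultants are units.

```python
# Compute Gleason polynomials from prod_{d|n} G_d = Q_n, where Q_n is the
# n-th iterate of z -> z^2 + c starting from 0, then verify:
# discriminants odd, Res(G_n, G_m) = ±1.
from sympy import symbols, expand, quo, discriminant, resultant, divisors

c = symbols('c')

def gleason(N):
    """Return {n: G_n} for n = 1..N by exact division of the iterates."""
    G, Q = {}, 0
    for n in range(1, N + 1):
        Q = expand(Q**2 + c)            # Q_n = Q_{n-1}^2 + c
        prod = 1
        for d in divisors(n)[:-1]:      # proper divisors of n
            prod = expand(prod * G[d])
        G[n] = quo(Q, prod, c)
    return G

G = gleason(5)
discs = {n: discriminant(G[n], c) for n in G}
all_odd = all(d % 2 != 0 for d in discs.values())
res_unit = all(abs(resultant(G[n], G[m], c)) == 1
               for n in G for m in G if n < m)
print(all_odd, res_unit)
```

Pushing `gleason` to larger $n$ gives the discriminant data for the growth-rate question, though the polynomial degree ($2^{n-1}$ minus lower-order terms) makes this expensive quickly.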

set theory – Why are quotient sets (types) called quotients — are they the inverse of some product?

There seems to be a beautiful relation between natural numbers and sets (and types): the size of a disjoint union, Cartesian product, or function type is described by the sum, product, or exponential of the sizes of the components (as I learned from type theory). This also makes it easy to see why the symbols $+$ and $\times$ are used for the disjoint union and Cartesian product (sum type and product type).

$$
\forall A, B, C : \text{sets} \\
A + B = C \implies |A| + |B| = |C| \\
A \times B = C \implies |A| \times |B| = |C| \\
A \to B = C \implies |B|^{|A|} = |C|
$$

However, why are quotient sets (and quotient types) called quotients, and why do they use the symbol $/$?

That does not seem to make sense to me. At the very least, to deserve the name quotient, I would expect them to somehow be the inverse of some product. I first thought they should be the inverse of the Cartesian product; I tried to google this, but I cannot find anything. Is there some relation between quotient sets and (Cartesian) products that I am missing?
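One way to make sense of the name, at least at the level of cardinalities: if every equivalence class has the same size $k$, then $|A/{\sim}| = |A|/k$, and in particular quotienting $B \times F$ by "equal first component" recovers $B$, inverting the Cartesian product size-wise. A tiny sketch (the sets are arbitrary examples of mine):

```python
# |(B x F) / ~| = |B x F| / |F| when ~ identifies pairs with equal first
# component: the quotient undoes the Cartesian product, size-wise.
from itertools import product

B = {'x', 'y', 'z'}
F = {0, 1, 2, 3}
A = set(product(B, F))                  # |A| = |B| * |F| = 12

# Equivalence classes of "(b1, f1) ~ (b2, f2) iff b1 == b2".
classes = {frozenset(p for p in A if p[0] == b) for b, _ in A}

print(len(A), len(classes), len(A) // len(F))
```

For a general equivalence relation the classes need not all have the same size, so $|A/{\sim}|$ is not literally a quotient of cardinalities; the division picture is exact only in this uniform-fiber case.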