ct.category theory – Can the effective topos be considered symmetric monoidal?

In the question "Example (e) for a symmetric monoidal closed category with NNO without infinite biproducts":

User Zhen Lin indicates that the effective topos is locally Cartesian closed. On the nLab we have that locally Cartesian closed with a terminal object implies Cartesian closed, and Hyland (in his original paper on the effective topos) indicates that there is such a terminal object, which he calls $ 1 $ as usual. Cartesian closed thus implies Cartesian monoidal, which implies symmetric monoidal. Is this argument okay? Am I missing something?

Also, since we can represent morphisms in a symmetric monoidal category as string diagrams (following Joyal and Street), does that mean we can do this for the effective topos? I want to draw this!

If so, could someone help me there? My knowledge of all this is pretty thin, and I have only just made this connection.

Number Theory – Does Multiplication Increase Entropy?

Does multiplication increase entropy?

The Shannon entropy of a number $ k $ in binary digits is defined as
$$ H = -\log\left(\frac{a}{l}\right) \cdot \frac{a}{l} - \log\left(1-\frac{a}{l}\right) \cdot \left(1-\frac{a}{l}\right) $$
where $ l = \lfloor \log(k) / \log(2) \rfloor $ is the number of binary digits of $ k $ and $ a $ is the number of $ 1 $s in the binary expansion of $ k $.
So let's take a look at the number $ k $ as a "random variable".
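For concreteness, this per-number bit entropy can be computed directly in plain Python (a sketch; `bit_entropy` is my own name, and I take $ l $ to be the number of binary digits, as in the text):

```python
from math import log2

def bit_entropy(k):
    # l = number of binary digits of k, a = number of 1-bits in its expansion
    bits = bin(k)[2:]
    l = len(bits)
    a = bits.count('1')
    p = a / l
    if p == 0.0 or p == 1.0:
        return 0.0  # degenerate case: all bits equal, entropy is zero
    return -p * log2(p) - (1 - p) * log2(1 - p)
```

For example, `bit_entropy(0b1100)` gives 1.0, since exactly half the bits are ones.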

Suppose that $ n, m $ are selected uniformly at random in the interval $ 1 \le n, m \le 2^{500} $.

Hypothesis 1):

$ H_{m \cdot n} $ is then "significantly" larger than $ H_n $.

Hypothesis 2):

$ H_{m + n} $ is then not "significantly" larger than $ H_n $.

Here is an empirical statistical test (in Sage) that indicates that multiplication increases entropy, but addition does not:

from collections import Counter

def entropyOfCounter(c):
    S = 0
    for k in c.keys():
        S += c[k]
    prob = []
    for k in c.keys():
        prob.append(c[k]/S)
    H = -sum([p*log(p, 2) for p in prob]).n()
    return H

def HH(l):
    return entropyOfCounter(Counter(l))

N = 10^4
MX = []
MP = []
for k in range(N):
    n = randint(1, 2^500)
    m = randint(1, 2^500)
    Hn = HH(Integer(n).digits(2))
    Hm = HH(Integer(m).digits(2))
    M = max(Hn, Hm)
    MX.append(HH(Integer(n*m).digits(2)) - Hn)
    MP.append(HH(Integer(n+m).digits(2)) - Hn)

tX = (mean(MX) / (sqrt(variance(MX)) / sqrt(N))).n()
tP = (mean(MP) / (sqrt(variance(MP)) / sqrt(N))).n()
print(tX, tP)

Output:
31.1839027855549 0.266357305397406

The first case (multiplication) significantly increases the entropy; the second case (addition) does not.

Is there any way to give a heuristic explanation of why this is so in general (if it is), or is this empirical observation not correct in general?

Related:
https://physics.stackexchange.com/questions/487780/increase-in-entropy-and-integer-factorization-how-much-work-does-one-have-to-do

Complexity Theory – How can it be shown that the product of two binary numbers cannot be determined in $ AC^{0} $?

Input $ x = x_{0} \dots x_{n-1} $. To determine the XOR over the $ n $ bits $ x_i $, it
is sufficient to multiply the following two $ n^2 $-bit binary numbers:
$$ a = 0^{n-1} \hspace{2mm} x_{n-1} \hspace{2mm} 0^{n-1} \hspace{2mm} x_{n-2} \hspace{2mm} 0^{n-1} \hspace{2mm} \dots \hspace{2mm} 0^{n-1} \hspace{2mm} x_{1} \hspace{2mm} 0^{n-1} \hspace{2mm} x_{0} $$

$$ b = 0^{n-1} \hspace{2mm} 1 \hspace{2mm} 0^{n-1} \hspace{2mm} 1 \hspace{2mm} 0^{n-1} \hspace{2mm} \dots \hspace{2mm} 0^{n-1} \hspace{2mm} 1 \hspace{2mm} 0^{n-1} \hspace{2mm} 1 $$
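In case a small sanity check helps, here is a plain-Python sketch of this reduction (the function name is mine, and I write the numbers little-endian): each $ x_i $ is padded with $ n-1 $ zero bits, and the XOR can then be read off a single bit of the product.

```python
def xor_via_multiplication(xs):
    # Build a = ... x_1 0^{n-1} x_0 and b = ... 1 0^{n-1} 1 (little-endian):
    # each x_i sits at bit position i*n, as does each 1 of b.
    n = len(xs)
    a = sum(x << (i * n) for i, x in enumerate(xs))
    b = sum(1 << (i * n) for i in range(n))
    # In the product, the block at bit position (n-1)*n holds sum(xs)
    # (no carries leak between blocks, since sum(xs) <= n < 2^n),
    # so its lowest bit is the parity, i.e. the XOR, of the x_i.
    return ((a * b) >> ((n - 1) * n)) & 1
```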

Can the product of two binary numbers be determined in $ AC^{1} $?

Measure theory – If $ f: X \rightarrow \bar{\mathbb{R}} $ is a measurable function, then $ f^2 $ is a measurable function

Let $ f: X \rightarrow \bar{\mathbb{R}} $ be a measurable function, where
$ \mathbb{A} $ is a sigma-algebra of sets on $ X $. Show that $ f^2 $ is a measurable function.

My attempt

Note $ x \in (f^2)^{-1}(c, \infty) = \{x : f^2(x) > c\} = \{x : f(x) > \pm\sqrt{c}\} = \{x : f(x) > \sqrt{c}\} \cup \{x : f(x) < -\sqrt{c}\} $

Here I am stuck. Can someone help me?
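For what it is worth, one common way to finish this kind of argument (a sketch, using that the sets $ (c, \infty] $ generate the Borel $\sigma$-algebra on $ \bar{\mathbb{R}} $) is the case split

$$ (f^2)^{-1}\big((c, \infty]\big) = \begin{cases} \{x : f(x) > \sqrt{c}\} \cup \{x : f(x) < -\sqrt{c}\}, & c \geq 0, \\ X, & c < 0, \end{cases} $$

and both cases lie in $ \mathbb{A} $ because $ f $ is measurable.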

Number Theory – Are the logarithms of the integer polynomials in $ L ^ 1 $ of the unit circle discrete?

Tautologically, the integer polynomials form a discrete subset of $ L^1 $ of the unit circle. On the other hand, taking logarithms of a nicely ordered set generally produces a set denser than the original one.

Is the set
$$
\big\{ \log{|P|} \,:\, P \in \mathbb{Z}[X] \setminus \{0\} \big\} \subset L^1(\mathbb{T})
$$

of functions on the complex unit circle $ \mathbb{T} = \{z \mid |z| = 1\} $ discrete in $ L^1 $, or does it have an accumulation point?

I would be equally satisfied with the $ L^2 $ norm, if it makes a difference.
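If numerical experiments help, the $ L^1 $ norm in question can be approximated by a Riemann sum (a sketch; `l1_norm` and the $ \frac{1}{2\pi} $ normalization are my own choices, and coefficients are listed from low to high degree):

```python
import cmath
import math

def log_abs_on_circle(coeffs, theta):
    # log|P(e^{i*theta})| for P given by integer coefficients (low to high)
    z = cmath.exp(1j * theta)
    return math.log(abs(sum(c * z**k for k, c in enumerate(coeffs))))

def l1_norm(coeffs, steps=20000):
    # Riemann-sum approximation of (1/(2*pi)) * integral of |log|P|| over T
    return sum(abs(log_abs_on_circle(coeffs, 2 * math.pi * i / steps))
               for i in range(steps)) / steps
```

For instance, for $ P = X - 2 $ one has $ \log|P| > 0 $ on the whole circle, so by Jensen's formula the norm is $ \log 2 \approx 0.693 $, which the sum reproduces.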

Complexity Theory – Proof of NP Completeness of an Extension in List Coloring Problem

The List Coloring Problem (LCP): given an undirected graph $ G(V, E) $, every vertex $ v \in V $ is given a list of
permissible colors, and we try to assign colors to vertices so that each vertex is assigned a color from its own list, with adjacent vertices having different colors.

In my problem: given a graph $ G(V, E) $ and an integer $ k $. For simplicity we take the colors to be the natural numbers from 1 to $ k $. Each vertex must be colored with a number. The goal is to minimize the sum of the color numbers over all $ v \in V $, subject to no two adjacent vertices of $ G(V, E) $ being colored with the same number.

I want to prove that the above problem (called P1) is $ \mathcal{NP} $-complete.

My approach: reduce the List Coloring Problem to P1.

I now define a decision-version problem (called P2): for an undirected graph $ G(V, E) $ and an integer $ k $, the List Coloring Problem asks whether a $ k $-coloring of $ G(V, E) $ can be found.

In my opinion, to solve P1 we first have to solve P2 and then determine the minimum sum over the set of feasible solutions. LCP is a generalization of the Graph Coloring Problem (GCP), i.e. the latter is a special case of the former, and it is known that the $ k $-coloring problem for GCP is $ \mathcal{NP} $-complete. So we can say that the $ k $-coloring problem for LCP is also $ \mathcal{NP} $-complete, i.e. P2 is $ \mathcal{NP} $-complete. It follows that P1 is $ \mathcal{NP} $-complete too. Is this reduction correct?

I would be very happy if someone can make useful suggestions!
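As a side note on P2, the decision problem can at least be stated as a brute-force check (an exponential sketch for intuition only, with names of my own; no polynomial algorithm is expected if P2 is indeed $ \mathcal{NP} $-complete):

```python
from itertools import product

def list_colorable(edges, lists):
    # P2 as a brute-force search: try every assignment that gives each
    # vertex a color from its own list, and accept if adjacent vertices
    # always receive different colors.
    vertices = sorted(lists)
    for choice in product(*(lists[v] for v in vertices)):
        coloring = dict(zip(vertices, choice))
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False
```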

topological graph theory – book thickness of a covering graph

A book embedding of a graph G consists of placing the vertices of G on a spine and assigning the edges of the graph to pages so that edges on the same page do not cross. The page number is a measure of the quality of a book embedding: it is the minimum number of pages over all book embeddings of the graph G.

Suppose a graph $ G $ is a covering graph of a graph $ B $.
Is there a relationship between their page numbers?

I think the covering graph is more complicated than the base graph. So does $ pn(G) \geq pn(B) $ hold in general?

Graph theory – online algorithm for finding clique of size k

I'm trying to write an online algorithm that can detect cliques of size k. I start with a set of vertices. In each iteration, I add an edge. The algorithm should recognize the first time an edge creates a clique of size k. What is an efficient algorithm that can accomplish this task, and what is its time complexity?
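One natural baseline (a sketch under my own naming; not claimed to be optimal): when an edge $ (u, v) $ arrives, any new $ k $-clique must contain both $ u $ and $ v $, so it suffices to look for a $ (k-2) $-clique inside the common neighborhood of $ u $ and $ v $. Checked naively, this costs about $ O(\binom{d}{k-2} k^2) $ per insertion, where $ d $ is the size of that common neighborhood.

```python
from itertools import combinations

class OnlineCliqueDetector:
    # Maintains the graph under edge insertions and reports, per insertion,
    # whether the new edge completes a clique of size k.
    def __init__(self, k):
        self.k = k
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        common = self.adj[u] & self.adj[v]
        # (u, v) lies in a k-clique iff some (k-2)-subset of their
        # common neighborhood is itself a clique.
        for subset in combinations(common, self.k - 2):
            if all(b in self.adj[a] for a, b in combinations(subset, 2)):
                return True
        return False
```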

Complexity Theory – Construction of the QAP in the Pinocchio paper

In the article Pinocchio: Nearly Practical Verifiable Computation, on page 3, there is a description of how to construct the QAP, together with an example.

Question:

We pick an arbitrary root $ r_g \in F $ for each
multiplication gate $ g $ in $ C $ and define the target polynomial

What do they mean by root? Is it the output of a multiplication gate?

  • In the example, I cannot understand how they end up with the values $ w_3 $ and $ w_4 $.

In fact, I think that $ w_3(r_5) = 0 $ and $ w_3(r_6) = 1 $, because $ W $ encodes the correct input to the gate and $ c_3 $ is connected to gate 6 through gate 5, entering it from the right. The same applies to $ w_4 $.

  • Are the coefficients of the polynomial $ p(x) $ calculated by the prover from the input $ x $ each time? If not, how should we find them? Via interpolation?
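On the last point: if the $ w_k $ (and similarly $ v_k $, $ y_k $) are indeed obtained by interpolation through their prescribed values at the roots $ r_g $, the computation is standard Lagrange interpolation. A sketch over the rationals (my own illustration; the paper works over a large finite field $ F $):

```python
from fractions import Fraction

def lagrange_interpolate(points):
    # Coefficients (low to high) of the unique polynomial through the
    # given (x, y) pairs, e.g. the pairs (r_g, w_k(r_g)) of a QAP.
    xs = [Fraction(x) for x, _ in points]
    ys = [Fraction(y) for _, y in points]
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        # Basis polynomial L_i with L_i(xs[i]) = 1 and L_i(xs[j]) = 0.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j in range(n):
            if j == i:
                continue
            new_basis = [Fraction(0)] * (len(basis) + 1)
            for d, b in enumerate(basis):  # multiply basis by (x - xs[j])
                new_basis[d] -= xs[j] * b
                new_basis[d + 1] += b
            basis = new_basis
            denom *= xs[i] - xs[j]
        for d in range(n):
            coeffs[d] += ys[i] * basis[d] / denom
    return coeffs
```

For instance, the three points $(0, 1), (1, 2), (2, 5)$ yield the coefficients of $ x^2 + 1 $.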