Complexity Theory – How subexponential does a $\text{3SAT}$ algorithm have to be to imply $\text{NP} \neq \text{EXP}$? What else would imply $\text{NP} \neq \text{EXP}$?

The Exponential Time Hypothesis (see: https://en.wikipedia.org/wiki/Time_complexity#Exponential_time_hypothesis) asserts that $\text{3SAT}$ has no sub-exponential-time algorithm (i.e. none running in $\mathcal{O}(2^{o(n)})$), which would imply $\text{P} \neq \text{NP}$. However, I am interested in the case that such an algorithm does exist. From posts like this: https://cstheory.stackexchange.com/questions/9237/consequences-of-sub-exponential-proofs-algorithms-for-sat it seems that finding such an algorithm does not by itself prove $\text{NP} \neq \text{EXP}$, so I wonder how subexponential the algorithm has to be to ensure this.
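For context, the one sufficient condition I know of (my own summary of the standard padding-plus-hierarchy argument, so please correct me if it is off): a quasipolynomial-time algorithm for 3SAT would already give $\text{NP} \neq \text{EXP}$.

```latex
% Suppose 3SAT \in \mathrm{DTIME}\!\left(2^{(\log n)^{c}}\right) for some constant c.
% Every L \in \mathrm{NP} reduces to 3SAT with polynomial blow-up n \mapsto n^{k}, so
%   L \in \mathrm{DTIME}\!\left(2^{(\log n^{k})^{c}}\right)
%     = \mathrm{DTIME}\!\left(2^{k^{c} (\log n)^{c}}\right),
% hence \mathrm{NP} \subseteq \mathrm{QP} = \bigcup_{c'} \mathrm{DTIME}\!\left(2^{(\log n)^{c'}}\right).
% Every quasipolynomial bound is 2^{o(n)}, so the time hierarchy theorem gives a
% language in \mathrm{DTIME}(2^{n}) \setminus \mathrm{QP}; thus
% \mathrm{QP} \subsetneq \mathrm{EXP}, and therefore \mathrm{NP} \neq \mathrm{EXP}.
```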

More generally, is there some sort of list of conjectures / hypotheses that would imply $\text{NP} \neq \text{EXP}$?

Is $1 \neq a \in Z(2.E_7(q)) \cong Z_2$ a square element in $2.E_7(q)$?

When $q$ is a power of an odd prime, is $1 \neq a \in Z(2.E_7(q)) \cong Z_2$ a square element in $2.E_7(q)$?

Are there references on the subgroup structure of the finite simple group $E_7(q)$ and its double cover $2.E_7(q)$?

Let $G$ be an abelian group and $x, y \in G$ such that for all $k \in \Bbb Z$: $x \neq y^k$ and $y \neq x^k$. Is there an integer $t$ such that $(xy)^t = e$?

I'm looking for answers other than the trivial $t = 0$. I would be very happy to see examples, or a proof that no such $t$ need exist.
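For what it's worth, here is a quick computational sanity check of one candidate example (my own choice, not from the question): in $G = \Bbb Z^2$, written additively, take $x = (1,0)$ and $y = (0,1)$. Neither is an integer multiple of the other, yet $x + y = (1,1)$ has infinite order, so only $t = 0$ works.

```python
# Candidate counterexample in G = Z^2 (group written additively, so y^k means k*y
# and (xy)^t means t*(x + y)).
x, y = (1, 0), (0, 1)

def scale(k, v):
    """k-fold sum of v in Z^2, i.e. the additive analogue of v^k."""
    return (k * v[0], k * v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

K = range(-1000, 1001)  # finite probe of k; in Z^2 the full claim is clear by coordinates

# Hypotheses: x is not a multiple of y, and vice versa.
assert all(x != scale(k, y) for k in K)
assert all(y != scale(k, x) for k in K)

# Conclusion fails for t != 0: t*(x + y) = (t, t) is zero only for t = 0.
s = add(x, y)
assert all(scale(t, s) != (0, 0) for t in K if t != 0)
print("x + y has infinite order; only the trivial t = 0 works")
```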

Real Analysis – Easy way to determine if directional derivatives exist at $x \neq 0$

Suppose I have a function $f: \mathbb{R}^2 \to \mathbb{R}$ given by
$$f(x, y) = \frac{x^2 y}{x^4 + y^2}$$ for $(x, y) \neq (0,0)$, and we set $f(0,0) = 0$. It can be shown that this function has all directional derivatives at $(0,0)$ but is not even continuous there.

But what if I want to show that directional derivatives exist at some point $(x, y) \neq (0,0)$? One way would be to compute
$$\lim_{h \to 0} \frac{f(x + ah, y + bh) - f(x, y)}{h},$$
but this is an extremely tedious calculation. Is this how one generally shows that all directional derivatives exist: directly from the definition?

An alternative would be the following: I compute the partial derivatives at a nonzero point $(x, y)$ and show that they are continuous in a neighborhood of this point. My question: are "nice"-looking functions on $\mathbb{R}^2$ generally continuous? Given that the partial derivative of this function with respect to $x$ is
$$\frac{2xy}{x^4 + y^2} - \frac{x^2 y}{(x^4 + y^2)^2} \cdot 4x^3,$$
can I just say that this function is continuous at any $(x, y) \neq (0,0)$ without going into detail, simply because it looks "nice"?

In general: how does one normally show that a function has directional derivatives at points other than the origin? In cases like these, computing the limit directly is not easy.
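For concrete points, a computer algebra system takes the tedium out of the defining limit. A sketch in Python with sympy (the base point $(1,2)$ is an arbitrary choice of mine), which also compares the result with the gradient formula:

```python
import sympy as sp

x, y, a, b, h = sp.symbols('x y a b h', real=True)
f = x**2 * y / (x**4 + y**2)

# Directional derivative at (x0, y0) in direction (a, b), straight from the definition.
x0, y0 = 1, 2  # any point other than the origin
quotient = (f.subs({x: x0 + a*h, y: y0 + b*h}) - f.subs({x: x0, y: y0})) / h
D = sp.limit(quotient, h, 0)

# Away from the origin f is a rational function with nonvanishing denominator,
# so it is C^1 there and the gradient formula must agree:
fx, fy = sp.diff(f, x), sp.diff(f, y)
D_grad = fx.subs({x: x0, y: y0}) * a + fy.subs({x: x0, y: y0}) * b
assert sp.simplify(D - D_grad) == 0
print(sp.expand(D))  # a linear expression in a and b
```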

abstract algebra – Prove that a group of order $2pq$ is solvable, $p \neq q$, $p, q > 2$

I am trying to use the Sylow theorems to prove the following.

Let $G$ be a group of order $2pq$, where $p > q > 2$ are prime numbers. Show that $G$ is solvable.

Now I realize that if I can show that $n_p = 1$ or $n_q = 1$, then I can find a subgroup of order $pq$, which is normal (index 2).

Now I use Sylow to show that $n_q = 1$ or $n_q = 2p$.

So assume $n_q = 2p$. From the Sylow theorems we also know that $n_q = 2p \equiv 1 \pmod{q}$.

My idea is now to somehow show that $n_p = 1$, but I don't see how to do it.

I am aware that this question has been asked before, namely here: Suppose $|G| = 2pq$. Does $G$ have a subgroup of order $pq$?
But I cannot follow the last step there involving modular arithmetic.
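This is not an answer, but the counting constraints are easy to experiment with. A small script (my own illustration): for sample primes $p > q > 2$ it lists the divisors of $2pq$ satisfying $n_r \equiv 1 \pmod r$, i.e. the candidates Sylow's theorems leave for $n_p$ and $n_q$:

```python
def sylow_candidates(order, r):
    """Divisors d of `order` with d == 1 (mod r): the values Sylow allows for n_r.

    (Any such d is automatically coprime to r, hence divides order // r.)
    """
    return [d for d in range(1, order + 1) if order % d == 0 and d % r == 1]

# Sample primes p > q > 2 (arbitrary choices for illustration).
for p, q in [(5, 3), (7, 3), (7, 5), (11, 3), (13, 7)]:
    n = 2 * p * q
    print(f"|G| = 2*{p}*{q} = {n}: "
          f"n_{p} in {sylow_candidates(n, p)}, "
          f"n_{q} in {sylow_candidates(n, q)}")
```

For many pairs (e.g. $p = 7$, $q = 5$) the congruences alone already force $n_p = n_q = 1$; the hard cases are exactly those where $2p \equiv 1 \pmod q$ survives.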

linear algebra – $\prod_{j \neq i} (a_i - a_j)^n = 1$

$n > 1$ is fixed. Find all complex numbers $a_1, \dots, a_k$ such that for all $i \in \{1, \dots, k\}$:
$$\prod_{\substack{j = 1 \\ j \neq i}}^{k} (a_i - a_j)^n = 1.$$
What I have done so far: the cases $k = 2, 3$ were not difficult to solve directly. I believe the condition can be restated as: $P(x) \mid P'(x)^n - 1$ and $P(x)$ has no double roots, where $P(x) = \prod_{i=1}^{k}(x - a_i)$.
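As a sanity check on that reformulation (my own sketch): since $P'(a_i) = \prod_{j \neq i}(a_i - a_j)$, the condition says each $P'(a_i)$ is an $n$-th root of unity. A quick numerical check for $k = 2$, where $a_1 = 0$, $a_2 = 1$ works exactly when $n$ is even:

```python
import numpy as np

def satisfies(points, n, tol=1e-9):
    """Check prod_{j != i} (a_i - a_j)**n == 1 for every i."""
    a = np.asarray(points, dtype=complex)
    k = len(a)
    return all(
        abs(np.prod([a[i] - a[j] for j in range(k) if j != i]) ** n - 1) < tol
        for i in range(k)
    )

print(satisfies([0, 1], 2))  # True:  (0-1)^2 = (1-0)^2 = 1
print(satisfies([0, 1], 3))  # False: (0-1)^3 = -1

# The same condition via P'(a_i), with P(x) = prod_i (x - a_i):
P = np.poly([0, 1])          # coefficients of P(x) = x^2 - x
dP = np.polyder(P)           # P'(x) = 2x - 1
print([float(np.polyval(dP, r) ** 2) for r in (0, 1)])  # [1.0, 1.0]
```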

nt.number theory – Prove $\frac{\text{area}_1}{c_1^2} + \frac{\text{area}_2}{c_2^2} \neq \frac{\text{area}_3}{c_3^2}$ for all primitive Pythagorean triples

Some time ago I asked this question on MSE here. After placing a bounty it got some attention, but unfortunately it remains unresolved. After receiving some advice on MO Meta, I decided to post the question here (note that $a_n b_n$ is the same as the area, just multiplied by two, and $a_n, b_n > 0$).

Does

$$\frac{a_1 b_1}{c_1^2} + \frac{a_2 b_2}{c_2^2} = \frac{a_3 b_3}{c_3^2}$$

hold for three distinct primitive Pythagorean triples $(a_n, b_n, c_n)$?

My personal belief is that this never happens, and I am actively trying to prove that; I would equally welcome a counterexample.

Some results so far:

User @mathlove on MSE found the following necessary condition on the $c_i$: for every prime $p$,
$$\nu_p(c_1) \le \nu_p(c_2) + \nu_p(c_3),$$
$$\nu_p(c_2) \le \nu_p(c_3) + \nu_p(c_1),$$
$$\nu_p(c_3) \le \nu_p(c_1) + \nu_p(c_2),$$
where $\nu_p(c_i)$ is the exponent of $p$ in the prime factorization of $c_i$.

(You can find the proof here.)

To search for these triples, I created an exhaustive search algorithm (with help from here). I found that

for $c^2 < 10^{14}$,

$$\frac{a_1 b_1}{c_1^2} + \frac{a_2 b_2}{c_2^2} \neq \frac{a_3 b_3}{c_3^2}$$

and

$$\frac{1}{c_1^2} + \frac{1}{c_2^2} \neq \frac{1}{c_3^2}.$$
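For reference, the shape of such a search (a simplified sketch of the idea, not my actual code): generate primitive triples via Euclid's formula and compare sums using exact rational arithmetic, so no floating-point tolerance issues arise. Note this sketch does not enforce that the three triples be distinct.

```python
from fractions import Fraction
from math import gcd

def primitive_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with c <= limit (Euclid's formula)."""
    out = []
    m = 2
    while m * m + 1 <= limit:  # smallest hypotenuse for this m is m^2 + 1
        for k in range(1, m):
            if (m - k) % 2 == 1 and gcd(m, k) == 1:
                a, b, c = m*m - k*k, 2*m*k, m*m + k*k
                if c <= limit:
                    out.append((a, b, c))
        m += 1
    return out

def search(limit):
    """Triples of primitive triples with a1*b1/c1^2 + a2*b2/c2^2 == a3*b3/c3^2."""
    ts = primitive_triples(limit)
    by_ratio = {Fraction(a * b, c * c): (a, b, c) for a, b, c in ts}
    hits = []
    for i, (a1, b1, c1) in enumerate(ts):
        r1 = Fraction(a1 * b1, c1 * c1)
        for a2, b2, c2 in ts[i:]:
            s = r1 + Fraction(a2 * b2, c2 * c2)
            if s in by_ratio:
                hits.append(((a1, b1, c1), (a2, b2, c2), by_ratio[s]))
    return hits

print(len(search(1000)))
```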

Note that the difficulty of finding these triples appears to come from the division by the square of the hypotenuse, since there are many solutions to $a_1 b_1 + a_2 b_2 = a_3 b_3$. At first it seemed extremely unlikely that such solutions would ever occur (the ratios would have to match perfectly), which would explain why nothing was found, but it looks like there is a little more to it. Due to a bug in my original code, I accidentally searched for solutions to

$$\frac{a_1 b_1}{c_1} + \frac{a_2 b_2}{c_2} = \frac{a_3 b_3}{c_3},$$

which produced these very interesting values for $c < 10^7$:

$$\frac{3 \cdot 4}{5} + \frac{20 \cdot 21}{29} = \frac{17 \cdot 144}{145}$$
$$\frac{20 \cdot 21}{29} + \frac{119 \cdot 120}{169} = \frac{99 \cdot 4900}{4901}$$
$$\frac{119 \cdot 120}{169} + \frac{696 \cdot 697}{985} = \frac{577 \cdot 166464}{166465}$$
$$\frac{696 \cdot 697}{985} + \frac{4059 \cdot 4060}{5741} = \frac{3363 \cdot 5654884}{5654885}$$

This pattern has a clearly defined structure. Note the recursive nature, where one of the terms always comes from the previous sum. In addition, both LHS numerators are products of consecutive integers, and on the RHS $b_3$ and $c_3$ also differ by exactly one.
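These identities can be checked exactly with rational arithmetic; a short verification of all four rows:

```python
from fractions import Fraction as F

# The four identities listed above, as ((a1,b1,c1), (a2,b2,c2), (a3,b3,c3)).
rows = [
    ((3, 4, 5), (20, 21, 29), (17, 144, 145)),
    ((20, 21, 29), (119, 120, 169), (99, 4900, 4901)),
    ((119, 120, 169), (696, 697, 985), (577, 166464, 166465)),
    ((696, 697, 985), (4059, 4060, 5741), (3363, 5654884, 5654885)),
]
for (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) in rows:
    assert F(a1 * b1, c1) + F(a2 * b2, c2) == F(a3 * b3, c3)
print("all four identities hold exactly")
```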

I only found this recently and haven't had much time to study it, but I noticed that the values all have corresponding OEIS sequences. Could this help disprove the original statement?

Background and motivation
A resolution of the original question, one way or the other, could help settle a few (probably not so important) but annoying open problems in number theory. I am preparing a website, which I will link to at some point, giving the full background; it is too long for this post, so I omit it, following Meta MO's recommendation to keep posts as short as possible. Also, I am not a research-level mathematician, so please forgive any unintentional ignorance when I reply to comments.

Subspaces of $\ell_p$ ($1 < p < \infty$, $p \neq 2$) not isomorphic to $\ell_p$

Is it possible to show the existence of an infinite-dimensional closed subspace of $\ell_p$ ($1 < p < \infty$, $p \neq 2$) that is not isomorphic to $\ell_p$, in an elementary way?

For $1 < p < 2$ I think we can find such an example isomorphic to $\ell_p(\ell_q^n)$, but the proof I have in mind uses the fact that $\ell_q$ embeds into $L_p(0,1)$, which is not elementary.

Show the following: $\Theta(n \log n) \cup o(n \log n) \neq O(n \log n)$

Show that: $$\Theta(n \log n) \cup o(n \log n) \neq O(n \log n)$$

I've tried to do this in many ways, but I don't really know how. Intuitively, $\Theta \cup o = o$? That would mean I only have to show $o(n \log n) \neq O(n \log n)$, which I think would be easier. But I don't know how to do that formally.
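One standard route (a hint with a witness function of my own choosing, so double-check it against your course's definitions): exhibit a function in $O(n \log n)$ that lies in neither $\Theta(n \log n)$ nor $o(n \log n)$.

```latex
f(n) =
  \begin{cases}
    n \log n, & n \text{ even},\\
    n,        & n \text{ odd}.
  \end{cases}
% f(n) \le n \log n for all n \ge 2, so f \in O(n \log n).
% On even n:  f(n)/(n \log n) = 1,              so f \notin o(n \log n).
% On odd n:   f(n)/(n \log n) = 1/\log n \to 0, so f \notin \Theta(n \log n).
% Hence f \in O(n \log n) \setminus \bigl(\Theta(n \log n) \cup o(n \log n)\bigr).
```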

Complexity Theory – How to deal with witness size when collapsing PH under $\Sigma_k = \Pi_k$

I am reading Goldreich's notes on PH, which state that if $\Sigma_k = \Pi_k$ for some $k$, then PH collapses to the corresponding level (so $\mathcal{PH} = \Sigma_k$). At least one formulation of the proof seems to rely on merging two adjacent existential quantifiers. How can we do that when the witnesses have length $\mathrm{poly}(|x|)$ for input $x$?

It seems that as we collapse the hierarchy, we must increase the witness size at every level; over many levels, $\mathrm{poly}(|x|)$-size witnesses (each corresponding to an existential quantifier) could easily add up to something exponential in $|x|$, or more. However, this seems to violate the definition of $\Sigma_k$, which requires polynomial-size witnesses.
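For what it's worth, here is the back-of-the-envelope count I would make (my own sketch, so it may miss the subtlety being asked about): merging two adjacent existential quantifiers only concatenates the two witnesses, and for a fixed $k$ this stays polynomial.

```latex
% Merging two adjacent existential quantifiers:
%   \exists u\, \exists v\; \varphi(x, u, v), \qquad |u| \le p(|x|),\ |v| \le q(|x|)
% becomes
%   \exists w\; \varphi'(x, w), \qquad w = \langle u, v \rangle,\ |w| \le p(|x|) + q(|x|) + O(1),
% which is still polynomial in |x|. Since k is a constant, the collapse performs
% only O(k) such merges, and a sum of constantly many polynomials is a polynomial,
% so the final witness still has size \mathrm{poly}(|x|), not exponential.
```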