probability – “First principles” proof of the limit of the expected excess of the uniform renewal function

The closed form of the expected number of samples needed for $\sum_r X_r \geqslant t$, where $X_r \sim \mathrm{U}(0,1)$, is given by:

$$m(t) = \sum_{k=0}^{\lfloor t \rfloor} \frac{(k-t)^k}{k!}\,e^{t-k}$$

From this we can deduce the expected amount by which the sum exceeds $t$ (by Wald's identity, the expected value of the stopped sum is $m(t)\,\mathbb{E}[X_1] = m(t)/2$), namely:

$$\varepsilon(t) = \frac{m(t)}{2} - t$$

From knowing that $m(t) \to 2t+\dfrac{2}{3}$, we can easily see that $\varepsilon(t) \to \dfrac{1}{3}$.

Is there a simple (“low tech”) way of proving that $\varepsilon(t) \to \dfrac{1}{3}$ without first proving that $m(t) \to 2t+\dfrac{2}{3}$?
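
As a quick sanity check of the closed form (a minimal Python sketch; numerical only, not a proof), one can evaluate $m(t)$ directly and watch $\varepsilon(t)$ approach $1/3$:

import math

def m(t):
    # m(t) = sum_{k=0}^{floor(t)} (k - t)^k / k! * e^(t - k)
    return sum((k - t) ** k / math.factorial(k) * math.exp(t - k)
               for k in range(math.floor(t) + 1))

for t in (1.0, 2.0, 5.0, 10.0, 20.0):
    # expected excess epsilon(t) = m(t)/2 - t, which should approach 1/3
    print(f"t = {t:5.1f}   epsilon(t) = {m(t) / 2 - t:.6f}")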

dg.differential geometry – Uniform convergence of Eigenfunction decomposition on Riemannian sphere?

Let $\{u_k\}_{k=1}^\infty$ be a sequence of ($L^2$-normalized) mutually orthogonal eigenfunctions of the operator $-\Delta$ on the sphere $\mathbb{S}^n$ (here $\Delta$ is the Laplace–Beltrami operator). Let $u$ be a smooth (real-valued) function on the sphere. It is a well-known result that we can write $u=\sum_{k=1}^\infty c_k u_k$ for some (real) constants $c_k$. My question is: is the convergence of this sum uniform?

I am trying to prove that the optimal constant in the Poincaré inequality is $\lambda_1=n$. That is to say, I am trying to prove the inequality $\int_{\mathbb{S}^n} |\nabla u|^2 \geq n \int_{\mathbb{S}^n} |u|^2$ for $u$ with $\int_{\mathbb{S}^n} u = 0$ (so that the constant eigenfunction, with eigenvalue $0$, does not enter the expansion). Here is what I have done so far:

First, integrate by parts on the LHS, so that it suffices to prove $\int_{\mathbb{S}^n} -u\Delta u \geq n \int_{\mathbb{S}^n} |u|^2$. Then use $u=\sum_{k=1}^\infty c_k u_k$ and assume that the convergence is uniform. Then we can switch the order of the sum with the derivative and the integral (and use the fact that the $u_k$ are orthonormal and satisfy $-\Delta u_j = \lambda_j u_j$), so that
\begin{align*}
\int_{\mathbb{S}^n} -\left(\sum_{k=1}^\infty c_k u_k\right)\Delta \left(\sum_{j=1}^\infty c_j u_j\right)&=
\int_{\mathbb{S}^n} -\left(\sum_{k=1}^\infty c_k u_k\right)\left(\sum_{j=1}^\infty c_j \Delta u_j\right)=
\int_{\mathbb{S}^n} \left(\sum_{k=1}^\infty c_k u_k\right)\left(\sum_{j=1}^\infty \lambda_j c_j u_j\right)
\\&= \sum_{j,k}c_k c_j \lambda_j\int_{\mathbb{S}^n} u_k u_j= \sum_{j,k}c_k c_j \lambda_j \delta_{jk}=\sum_{j}c_j^2 \lambda_j\geq \lambda_1 \sum_j c_j^2.
\end{align*}

By the same logic, the last sum is equal to $\int |u|^2$.
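
Spelled out, with the same interchange of sum and integral (assuming it is justified):

$$\int_{\mathbb{S}^n} |u|^2 = \int_{\mathbb{S}^n} \left(\sum_{k=1}^\infty c_k u_k\right)\left(\sum_{j=1}^\infty c_j u_j\right) = \sum_{j,k} c_k c_j\, \delta_{jk} = \sum_{j} c_j^2 .$$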

Now, obviously, this proof requires some argument showing that the sum commutes with $\Delta$ and with the integral, but I have not been able to find a reference showing that the sum converges uniformly. My guess is that this would follow from some basic facts in harmonic analysis, though I am no expert in that field. Would anyone be able to provide a reference for this?

pr.probability – Comparison of the distribution of uniform r.v.s with $\mathcal{N}(0, 1)$

Given:

1. $X_1, X_2, X_3, \dots$ are independent random variables with $X_n \sim \mathrm{Uniform}(-n, 3n)$ for $n = 1, 2, \dots$
2. $S_N = \frac{1}{\sqrt{N}}\sum_{n=1}^{N} \frac{X_n}{n}$ for $N = 1, 2, \dots$
3. $F_N$ is the distribution function of $S_N$.
4. $\Phi$ is the distribution function of $\mathcal{N}(0, 1)$.

How can one prove the following?

a. $\lim_{N \to \infty} F_N(0) \leq \Phi(0)$
b. $\lim_{N \to \infty} F_N(1) \leq \Phi(1)$
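
For intuition only (a numerical sketch, not a proof): since $X_n/n \sim \mathrm{Uniform}(-1, 3)$ has mean $1$, the sum $S_N$ drifts to $+\infty$, so both $F_N(0)$ and $F_N(1)$ appear to tend to $0$, which is consistent with the inequalities above. A small Monte Carlo check in Python (names and sample sizes below are arbitrary):

import math
import numpy as np

rng = np.random.default_rng(0)

def estimate_F_N(N, t, trials=20_000):
    # X_n ~ Uniform(-n, 3n), hence X_n / n ~ Uniform(-1, 3)
    S_N = rng.uniform(-1.0, 3.0, size=(trials, N)).sum(axis=1) / math.sqrt(N)
    return np.mean(S_N <= t)

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

for N in (10, 100, 400):
    print(f"N = {N:4d}   F_N(0) ~ {estimate_F_N(N, 0.0):.4f}   F_N(1) ~ {estimate_F_N(N, 1.0):.4f}")
print(f"Phi(0) = {Phi(0.0):.4f}   Phi(1) = {Phi(1.0):.4f}")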

plotting – Uniform Color distribution with the command Show[]

I have the following definitions:

a = Sqrt[2 + 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]];

a3 = Sqrt[5 + 4 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]];

beta = 2 a1^2 + 2 a2^2 + 2 a3^2 + 2 a1^2 a2^2 + 2 a1^2 a3^2 + 2 a2^2 a3^2 - a1^4 - a2^4 - a3^4 -
   Sqrt[((a + a + a3)^2 - 1)*((a - a + a3)^2 - 1)*((a + a - a3)^2 - 1)*((a - a - a3)^2 - 1)] - 1;

Then I define the plots A and B as follows:

A = Plot3D[{1/2 Log[beta/(8 a^2)]}, {r, -1.0, 1.0}, {\[Theta], 0.01 \[Pi], 1.99 \[Pi]},
  RegionFunction ->
   Function[{r, \[Theta]},
    0 < Sqrt[2 + 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]] -
      1/Sqrt[2] Sqrt[((-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])^2 +
          2 (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]) +
          Abs[-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]]*
           Sqrt[(-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])^2 +
             8 (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])])/
         (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])]],
  PerformanceGoal -> "Quality", AxesLabel -> Automatic,
  PlotRange -> All, PlotPoints -> 30, Mesh -> 5, MaxRecursion -> 7,
  ColorFunction -> "TemperatureMap"];
(*-------------------------------------------*)

B = Plot3D[{1/2 Log[((a^2 - a3^2)/(a^2 - 1))^2]}, {r, -1.0, 1.0}, {\[Theta], 0.01 \[Pi], 1.99 \[Pi]},
  PerformanceGoal -> "Quality", AxesLabel -> Automatic,
  RegionFunction ->
   Function[{r, \[Theta]},
    0 >= Sqrt[2 + 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]] -
      1/Sqrt[2] Sqrt[((-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])^2 +
          2 (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]) +
          Abs[-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]]]*
           Sqrt[(-3 - 2 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])^2 +
             8 (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])])/
         (7 + 6 Sqrt[Cosh[2 r] - Cos[2 \[Theta]] Sinh[2 r]] Sqrt[Cosh[2 r] + Cos[2 \[Theta]] Sinh[2 r]])]],
  PlotRange -> All, PlotPoints -> 30, Mesh -> 5, MaxRecursion -> 8,
  ColorFunction -> "TemperatureMap"];

Then, by the command Show[], I join the two plots, obtaining a single combined surface (image omitted).

Both 3D plots match, as I expected; however, I have the following question:

(1) Is there a way to show a uniform color distribution for both plots when using the command Show[]?

That is, each plot appears with its own color distribution when I display both with Show[]. This is logical, since I define each function separately. On the other hand, note that the region function for plot A has the form RegionFunction -> Function[{r, \[Theta]}, 0 < f] and for B it is RegionFunction -> Function[{r, \[Theta]}, 0 >= f], where f is the function of $r$ and $\theta$ displayed in the code; this could help define a conditional to display a single plot without the need to use Show[].

banach spaces – Uniform smoothness and twice-differentiability of norms

To get to the simplest case, consider a norm $\|\cdot\|$ over $\mathbb{R}^n$ that is uniformly smooth of power type 2, that is, there is a constant $C$ such that $$\frac{\|x+y\| + \|x - y\|}{2} \le 1 + C \|y\|^2$$ for all $x$ with $\|x\| = 1$ and for all $y$.

Question: Does this guarantee that $\|\cdot\|$ has a second-order Taylor expansion on $\mathbb{R}^n \setminus \{0\}$, that is, for every $x \neq 0$ there are a vector $g$ and a symmetric matrix $A$ such that $$\|x + y\| = \|x\| + \langle g, y \rangle + \frac{1}{2} \langle Ay, y \rangle + o(\|y\|^2)?$$ (Apparently this is a weaker requirement than twice-differentiability of $\|\cdot\|$ on $\mathbb{R}^n \setminus \{0\}$.)

It is easy to see that $\|\cdot\|$ is differentiable on $\mathbb{R}^n \setminus \{0\}$, and a classical result of Alexandrov guarantees that the above second-order Taylor expansion holds for any convex function at almost every point $x$. It is also known that the norm of any separable Banach space can be approximated arbitrarily well by a power-type 2 norm that is twice differentiable on $\mathbb{R}^n \setminus \{0\}$ (see Lemma 2.6 here). But I wonder whether the original norm itself has a second-order Taylor expansion.

gn.general topology – Uniform spaces as condensed sets

$\DeclareMathOperator\Hom{Hom}\DeclareMathOperator\Unif{Unif}\DeclareMathOperator\CHaus{CHaus}\DeclareMathOperator\Set{Set}\DeclareMathOperator\op{op}\DeclareMathOperator\Ind{Ind}\DeclareMathOperator\Fin{Fin}$In Barwick-Haine Example 2.1.10, they showed that the functor $\Hom_{\Unif}(-,X)\colon\CHaus^{\op}\to\Set$ is a pyknotic set, i.e., a sheaf on the site $\CHaus$ of compact Hausdorff spaces equipped with the coherent topology.

Question: I wonder how much is known about the realization of uniform spaces as condensed sets.

I did not check whether the sheaf above is accessible (when restricted to $\Ind(\Fin^{\op})$), but I am slightly skeptical about this approach. It seems to me that the “correct” realization of a uniform space should be a condensed set which records its underlying topological space, along with an extra structure which records the uniform structure.

I suppose that this extra structure would be described by a certain kind of enriched groupoid. Indeed, the uniform structure on a topological space could be understood as a groupoid enriched in filters. See the nLab page for a description of this sort.


This is motivated by an attempt to eliminate the restriction that the adjective “solid” only applies to condensed abelian groups.

Let $M$ be a topological abelian group. I was trying to understand what it means for $M$ that the condensed abelian group $\underline M$ is solid. Following Lecture II, to any sequence $(m_n)_{n\in\mathbb N}\in M^{\mathbb N}$ convergent to $0$ we associate a (continuous) map from the profinite set $S:=\mathbb N\cup\{\infty\}$ to $M$ which maps $n$ to $m_n$ and $\infty$ to $0$, or equivalently, a map $S\to\underline M$ of condensed sets by, say, Yoneda's lemma.

Suppose that $\underline M$ is solid; then this map extends uniquely to a map $\mathbb Z[S]^\blacksquare\to\underline M$ of condensed abelian groups. If I am not mistaken, $\mathbb Z[S]^\blacksquare\to\underline M$ further factors through $\underline{\mathbb Z((t))}$, the condensed abelian group associated to the topological abelian group $\mathbb Z((t))$ with the $(t)$-adic topology, and by full faithfulness we get a factorization $S\to\mathbb Z((t))\to M$ where the second map is additive.

Consequently, for every sequence $(a_n)_{n\in\mathbb N}\in{\mathbb Z}^{\mathbb N}$ of integers, the series $\sum_n a_n t^n$ converges in $\mathbb Z((t))$, therefore the series $\sum_n a_n m_n$ converges in $M$, which should imply, if I am not mistaken, that the uniform structure on $M$ is non-archimedean and complete, at least when $M$ is first countable (by the way, I don't understand why it is claimed that this does not arise directly as any kind of limit of finite sums).

So the non-archimedean nature is rooted in the formalism. I suppose that a natural approach would be to generalize the uniform structure to condensed sets, and to generalize classical Cauchy completeness. I don't know whether there is a consensus that this does not work. The current presentation treats the non-archimedean case and the $\mathbb R$-case separately, and covers neither non-abelian groups nor the general completeness of topological abelian groups.

How can I generate a random number from 1e-9 to 1e9 with uniform probability in R?

I am trying to generate a random number from 1e-9 to 1e9. The naive idea is to generate a number from 1 to 1e18 and then divide by 1e9, as follows, but it does not seem to work.

set.seed(100)
rand <- sort(runif(10000, min = 1, max = 1e18)) / 1e9
result <- sample(rand, 1)
min(rand) 
max(rand)

# result 664426274
# min(rand) 199051.1
# max(rand) 999853646

The minimum seems much higher than I expected.
Please advise.

coding theory – Prove that a probabilistic adaptive algorithm that can explore only k bits of an n-bit input can't distinguish a k-independent distribution from uniform

Definition: A distribution $D$ on $\{0,1\}^n$ is called $k$-independent if for every random variable $X$ with distribution $D$ and for all distinct $i_1, \dots, i_k \in \{1,2,\dots,n\}$ the random variable $(X_{i_1},\dots, X_{i_k})$ has distribution $U_k$ (uniform).

Problem: Consider a probabilistic algorithm $A$ that has oracle access to an input of length $n$. This means that $A$ can adaptively request $k$ bits of the input (in more detail, $A$ can request one bit, then based on the oracle's answer request another bit, and repeat this at most $k$ times).
Prove that if $D$ is a $k$-independent distribution then $\Pr_{x \sim D}(A(x) = 1) = \Pr_{x \sim U_n}(A(x) = 1)$.

First of all, I don't understand how adaptivity could potentially help the algorithm distinguish the uniform distribution from a $k$-independent one. The second question is the problem itself.
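
For intuition, here is a toy example with $n = 3$ and $k = 2$ (a minimal Python sketch; the particular distribution and decision tree below are just illustrative): the distribution $D$ that is uniform on even-parity strings is $2$-independent but not uniform, yet an adaptive $2$-query algorithm accepts with the same probability under $D$ as under $U_3$.

import itertools

def adaptive_algorithm(x):
    # A toy adaptive decision tree making k = 2 queries:
    # which bit is queried second depends on the first answer.
    b0 = x[0]
    b1 = x[1] if b0 == 0 else x[2]
    return b0 ^ b1  # accept iff the two queried bits differ

def acceptance_probability(support):
    # exact acceptance probability for the uniform distribution over `support`
    return sum(adaptive_algorithm(x) for x in support) / len(support)

uniform = list(itertools.product((0, 1), repeat=3))
even_parity = [x for x in uniform if sum(x) % 2 == 0]  # a 2-independent distribution D

print("under U_3:", acceptance_probability(uniform))      # 0.5
print("under D:  ", acceptance_probability(even_parity))  # 0.5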

ag.algebraic geometry – Uniform Łojasiewicz constant in 2D

The Łojasiewicz inequality is a classical result in real algebraic geometry. In particular, for any given polynomial $f:\mathbb R^2\to \mathbb R$ there are some $C>0$ and some $\alpha>0$ such that for all $|x|<1$ we have
$$
\mathrm{dist}(x,Z(f))^\alpha\leq C|f(x)|,
$$

where $Z(f)$ denotes the zero set of $f$. There has been abundant research into the optimal exponent $\alpha$, and in the 2D case there is an elementary paper by Kuo (1974):

https://link.springer.com/content/pdf/10.1007/BF02566729.pdf

for an explicit computation of the exponent $alpha$.

However, I was wondering whether there is a uniform estimate (in terms of the degree of $f$) of the constant $C$, even in 2D. Of course, this question would make no sense if the coefficients of $f$ could be arbitrarily small; for this reason I require that at least one coefficient of $f$ have absolute value bounded below by 1.