real analysis – Generalization of Bernstein’s inequality

I’m using Muscalu and Schlag’s textbook to study harmonic analysis and I encountered the following claim:

Given a function $f \in \mathcal{S}(\mathbb{R}^{d})$, where $\mathcal{S}(\mathbb{R}^{d})$ denotes the Schwartz space, let $\hat{f}$ denote the Fourier transform of $f$. Assume that there exists some measurable set $E$ such that $\operatorname{supp}(\hat{f}) \subset E \subset \mathbb{R}^d$. Then for any $1 \leq p \leq q \leq \infty$, we have the following inequality, where $|E|$ denotes the Lebesgue measure of $E$:
$$\|f\|_{L^q} \leq |E|^{\frac{1}{p}-\frac{1}{q}}\|f\|_{L^p}$$
I have managed to show the special case $q=\infty$, $p=2$ by using Young’s inequality and the Plancherel identity. However, the hint says that we still need to use duality and interpolation to deduce the general conclusion. Any ideas on this?

Moreover, how might this estimate be related to the probabilistic version of Bernstein’s inequality? Thanks in advance!
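Not an answer, but a numerical sanity check of the claim may be useful while working on the proof. The sketch below is my own construction (the bump profile, grids and exponent pairs are arbitrary choices): $\hat f$ is a smooth bump supported in $E=[-1/2,1/2]$, so $|E|=1$ and the bound reduces to $\|f\|_q \le \|f\|_p$, and $f$ is recovered by direct quadrature of the inverse Fourier integral.

```python
import numpy as np

# Sanity check of ||f||_q <= |E|^{1/p-1/q} ||f||_p for a spectrum supported
# in E = [-1/2, 1/2]; here |E| = 1, so the bound reduces to ||f||_q <= ||f||_p.
xi = np.linspace(-0.5, 0.5, 801)          # frequency grid covering E
dxi = xi[1] - xi[0]
fhat = np.zeros_like(xi)
inside = np.abs(xi) < 0.5
fhat[inside] = np.exp(-1.0 / (0.25 - xi[inside] ** 2))   # C^infty bump

x = np.arange(-40.0, 40.0, 0.02)          # spatial grid (f is Schwartz, tails tiny)
dx = 0.02
# f(x) = int fhat(xi) e^{2 pi i x xi} dxi; f is real since fhat is real and even
f = (np.exp(2j * np.pi * np.outer(x, xi)) @ fhat).real * dxi

def lp_norm(p):
    if p == np.inf:
        return np.max(np.abs(f))
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

for p, q in [(1, 2), (2, 4), (1, np.inf), (2, np.inf)]:
    print(p, q, lp_norm(q) <= lp_norm(p))
```

On this example every pair $p \le q$ checks out with a comfortable margin; of course this only illustrates the inequality, it proves nothing.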

real analysis – Isoperimetric type inequality in $mathbb{R}^2$

Fix $L \in (0,\infty)$ and consider $\mathcal{C}_L$ defined as follows:
$$\mathcal{C}_L := \{ \gamma:(0,1) \rightarrow \mathbb{R}^2 \mid \gamma \text{ is smooth and } \operatorname{length}(\gamma)=L \}.$$

I am interested in the “blow-up” of $\gamma$, denoted $\gamma_{+r}$, defined as follows: for any set $S \subseteq \mathbb{R}^2$ and $r>0$,
$$S_{+r} := \bigcup_{z\in S}(z+r\mathbb{D}),$$

where $\mathbb{D}$ is the unit disc in $\mathbb{R}^2$ centred at the origin. So $\gamma_{+r}$ is a bounded open set in $\mathbb{R}^2$. My question is: for which $\gamma \in \mathcal{C}_L$ is $m(\gamma_{+r})$ maximised? Here $m(\cdot)$ is the Lebesgue measure on $\mathbb{R}^2$. I feel that it should be maximised by the line segment of length $L$.

If this is a version of some well known result, please do indicate it.

The reason for this title is that the isoperimetric inequality in $\mathbb{R}^2$ is sometimes stated as follows: for any Borel subset $A \subseteq \mathbb{R}^2$ with $m(A) < \infty$ and every $\epsilon >0$, we have $m(A_{+\epsilon}) \geq m(B_{+\epsilon})$, where $B$ is a Euclidean ball with $m(A) = m(B)$.
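A small numerical experiment supporting the segment guess (my own construction; the choices $L=2$, $r=1$ and the grid step are arbitrary). One relevant fact: for $r$ below the reach of the curve, the tube formula gives $m(\gamma_{+r}) = 2rL + \pi r^2$ for every embedded curve, so differences can only come from the neighbourhood overlapping itself — which the segment never does. Below, $r$ is chosen larger than the radius of an equal-length half-circle, so the half-circle’s inner offsets overlap and it loses area.

```python
import numpy as np

# Grid comparison of m(gamma_{+r}) for two curves of the same length L = 2:
# a straight segment and a half-circle of radius R = L/pi, with r > R so the
# half-circle's neighbourhood overlaps itself.
L, r = 2.0, 1.0
R = L / np.pi                      # half-circle of radius R has length pi*R = L

h = 0.005                          # grid step; area error is O(h * perimeter)
X, Y = np.meshgrid(np.arange(-2.05, 2.05, h), np.arange(-1.05, 1.7, h))

# exact distance to the segment from (-L/2, 0) to (L/2, 0)
d_seg = np.hypot(np.maximum(np.abs(X) - L / 2, 0.0), Y)

# exact distance to the upper half-circle of radius R centred at the origin:
# radial in the upper half-plane, nearest endpoint (+-R, 0) below the axis
d_arc = np.where(Y >= 0,
                 np.abs(np.hypot(X, Y) - R),
                 np.hypot(np.abs(X) - R, Y))

area_seg = h * h * np.count_nonzero(d_seg <= r)
area_arc = h * h * np.count_nonzero(d_arc <= r)
print(area_seg, area_arc)              # the segment's neighbourhood is larger
print(2 * r * L + np.pi * r ** 2)      # tube-formula value for the segment
```

The segment attains the tube-formula value $2rL+\pi r^2$ exactly (no self-overlap), while the curved competitor falls short once $r$ exceeds its reach.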


pr.probability – Concentration inequality for the sample covariance matrix

I’d like to know if there is a concentration inequality for the sample covariance matrix that doesn’t assume knowledge of the true mean.


Given a probability distribution $\mu$ on $\mathbb{R}^d$, the covariance matrix of $\mu$ is defined as follows:
$$\Sigma := \mathbb{E}\big((x - \bar\mu)(x - \bar\mu)^\top\big)$$
where $x \sim \mu$ and $\bar\mu = \mathbb{E}(x)$.

If $X = (x_1, \ldots, x_m)$ is an i.i.d. sample drawn from $\mu$, then we can define two estimators:
$$\hat\Sigma_1 := \frac1m \sum_{i=1}^m (x_i - \bar\mu)(x_i - \bar\mu)^\top, \quad\text{where } \bar\mu = \mathbb{E}_{x \sim \mu}(x),$$
$$\hat\Sigma_2 := \frac1{m-1} \sum_{i=1}^m (x_i - \bar x)(x_i - \bar x)^\top, \quad\text{where } \bar x = \frac1m (x_1 + \cdots + x_m).$$

They both satisfy $\mathbb{E}_X \hat\Sigma_1 = \mathbb{E}_X \hat\Sigma_2 = \Sigma$.

The second estimator $\hat\Sigma_2$ is of interest because $\bar\mu$ is often not known in practice.


I’m interested in the concentration of $\hat\Sigma_2$ around $\Sigma$ as $m \rightarrow \infty$. More precisely, given a number $t > 0$, I’d like to know whether there exist a constant $A>0$ and a rate $\alpha \in (0,1)$, both depending on $\mu$ and $t$, such that
$$\mathrm{Prob}\big(\| \Sigma - \hat\Sigma_2 \| \ge t\big) \le A \cdot \alpha^m$$
In the case of the difference $\|\Sigma - \hat\Sigma_1\|$, such a bound can be obtained using the matrix Bernstein inequality. However, I’m less sure about $\|\Sigma - \hat\Sigma_2\|$. I have an idea, which is to use the fact that:
$$\hat\Sigma_1 - \hat\Sigma_2 = \frac1{m(m-1)} \sum_{i\neq j} (x_i-\bar\mu)(x_j-\bar\mu)^\top$$
which follows from:
$$\begin{aligned}
\hat\Sigma_2 &= \frac1m \sum_i x_i x_i^\top - \frac1{m(m-1)} \sum_{i\neq j} x_i x_j^\top \\
&= \frac1m \sum_i (x_i-\bar\mu)(x_i-\bar\mu)^\top - \frac1{m(m-1)} \sum_{i\neq j} (x_i-\bar\mu)(x_j-\bar\mu)^\top \\
&= \hat\Sigma_1 - \frac1{m(m-1)} \sum_{i\neq j} (x_i-\bar\mu)(x_j-\bar\mu)^\top
\end{aligned}$$

But now I’m not sure how to control the sum of the terms $(x_i-\bar\mu)(x_j-\bar\mu)^\top$, which are not independent.
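As a quick confidence check on the algebra, the identity $\hat\Sigma_1 - \hat\Sigma_2 = \frac1{m(m-1)}\sum_{i\neq j}(x_i-\bar\mu)(x_j-\bar\mu)^\top$ can be verified numerically on synthetic data (the Gaussian model, seed and dimensions below are arbitrary choices of mine):

```python
import numpy as np

# Verify: Sigma1_hat - Sigma2_hat = (1/(m(m-1))) sum_{i != j} (x_i-mu)(x_j-mu)^T
rng = np.random.default_rng(0)
d, m = 3, 50
mu = np.array([1.0, -2.0, 0.5])            # known true mean of the synthetic model
X = rng.normal(size=(m, d)) + mu           # rows are the samples x_i

xbar = X.mean(axis=0)
C = X - mu                                 # centred at the *true* mean
Sigma1 = C.T @ C / m
Sigma2 = (X - xbar).T @ (X - xbar) / (m - 1)

# sum_{i != j} (x_i-mu)(x_j-mu)^T = (sum_i c_i)(sum_j c_j)^T - sum_i c_i c_i^T
s = C.sum(axis=0)
cross = np.outer(s, s) - C.T @ C
print(np.allclose(Sigma1 - Sigma2, cross / (m * (m - 1))))   # prints True
```

The rewriting $\sum_{i\neq j} c_i c_j^\top = \big(\sum_i c_i\big)\big(\sum_j c_j\big)^\top - \sum_i c_i c_i^\top$ used above is also the usual starting point for decoupling arguments for such off-diagonal sums.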

This should be a fairly standard question with a standard answer, but I couldn’t find one. The only answer to a similar question addressed the case of $\hat\Sigma_1$, not mine.

real analysis – Analogous form of Hardy-Littlewood maximal inequality (weak/strong type) on affine subspaces

I’m using some online notes (Professor Schlag, Yale University) to study harmonic analysis by myself. He introduced the following claim as an exercise:

For any function $f \in L^{1}(\mathbb{R}^{d})$ and fixed $1 \leq k \leq d$, consider the following “analogous” Hardy-Littlewood maximal function $\mathcal{M}_{k}f$:
$$\mathcal{M}_{k}f(x) = \sup_{r > 0}\frac{1}{r^k}\int_{B(x,r)}|f(y)|\,dy$$
Fix an arbitrary affine subspace $L \subseteq \mathbb{R}^{d}$ of dimension $k$, and let $m_{L}$ denote the Lebesgue measure on $L$. Then there exists some constant $C>0$ such that for any $\lambda >0$, the following “analogous” weak-type Hardy-Littlewood maximal inequality holds:
$$m_{L}\big(\{x \in L \mid \mathcal{M}_{k}f(x) > \lambda\}\big) \leq \frac{C}{\lambda}\|f\|_{L^1(\mathbb{R}^{d})}$$
I have attempted this exercise by intersecting the $k$-dimensional affine subspace $L$ with arbitrary $d$-dimensional balls to derive some bounds, but that doesn’t seem to help much. Any ideas on this problem?
Moreover, can we also develop a similar strong-type maximal inequality with respect to an arbitrary affine subspace?
Thanks in advance!
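For what it’s worth, the usual Vitali covering argument seems to adapt; here is a sketch (my own attempt, so treat with care). The point is that a ball $B(x,r)$ centred at $x \in L$ meets $L$ in a $k$-dimensional ball of measure $c_k r^k$, which matches the $r^{-k}$ normalization:

```latex
\text{Let } E_\lambda=\{x\in L:\ \mathcal{M}_k f(x)>\lambda\}.
\text{ For each } x\in E_\lambda \text{ pick } r_x>0 \text{ with }
\int_{B(x,r_x)}|f|>\lambda r_x^{\,k}.
\text{ The traces } B(x,r_x)\cap L \text{ are $k$-dimensional balls covering } E_\lambda;
\text{ by the Vitali covering lemma there is a disjoint subfamily } B(x_i,r_i)\cap L
\text{ with } E_\lambda\subset\bigcup_i B(x_i,5r_i)\cap L.
\text{ Since all centres lie on } L, \text{ disjointness of the traces forces
disjointness of the full balls } B(x_i,r_i), \text{ hence}
m_L(E_\lambda)\ \le\ \sum_i c_k(5r_i)^k
\ \le\ \frac{5^k c_k}{\lambda}\sum_i\int_{B(x_i,r_i)}|f|
\ \le\ \frac{5^k c_k}{\lambda}\,\|f\|_{L^1(\mathbb{R}^d)}.
```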

inequalities – show this inequality with $\frac{d^i}{dx^i}\left(\frac{x}{\ln(1-x)}\right)^{1/K}\Big|_{x=0}>0,\ \forall i\in \mathbb{N}^{+}$

Let $K$ be a fixed positive integer. Show that $$\frac{d^i}{dx^i}\left(\frac{x}{\ln(1-x)}\right)^{1/K} \Bigg|_{x=0}>0, \quad \forall i\in \mathbb{N}^{+}.$$

The problem arose when I was solving the following: if $$\left(\sum_{i=1}^{n}a_{i}x^i\right)^K=\frac{x}{\ln(1-x)},$$ show that $a_{i}>0$ for all $i\in \mathbb{N}^{+}$. Maybe $$f(x)=\frac{x}{\ln(1-x)}$$ is a special function? Is there a background to this conclusion? Thanks.
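Not a proof, but the claim can be tested in exact rational arithmetic. The sketch below is my own construction: it computes the Taylor coefficients of $h(x)=\big(x/\ln(1-x)\big)^{1/K}$ at $0$ for odd $K$ (so the real $K$-th root of $f(0)=-1$ makes sense, with $h(0)=-1$), using power-series inversion for $f=x/\ln(1-x)$ and the relation $Kfh'=f'h$ implied by $h^K=f$.

```python
from fractions import Fraction

N = 12   # number of series coefficients to compute

# c(x) = ln(1-x)/x = -(1 + x/2 + x^2/3 + ...)
c = [Fraction(-1, n + 1) for n in range(N)]

# f = 1/c by power-series inversion; note f[0] = -1, f[1] = 1/2, f[2] = 1/12, ...
f = [Fraction(0)] * N
f[0] = 1 / c[0]
for n in range(1, N):
    f[n] = -sum(c[k] * f[n - k] for k in range(1, n + 1)) / c[0]

def root_coeffs(K):
    # h with h^K = f and h(0) = -1, via the coefficient recurrence of K f h' = f' h
    h = [Fraction(0)] * N
    h[0] = Fraction(-1)
    for n in range(N - 1):
        rhs = sum((j + 1) * f[j + 1] * h[n - j] for j in range(n + 1))
        known = K * sum(f[j] * (n - j + 1) * h[n - j + 1] for j in range(1, n + 1))
        h[n + 1] = (rhs - known) / (K * f[0] * (n + 1))
    return h

for K in (1, 3, 5):
    h = root_coeffs(K)
    print(K, all(h[i] > 0 for i in range(1, N)))
```

For $K=1$ the recurrence reproduces $f$ itself, whose nonconstant coefficients are the absolute values of the Gregory coefficients of $z/\ln(1+z)$; that connection may be the “background” being asked about.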

fa.functional analysis – Strict Riesz’s rearrangement inequality

(Continued from the previous question, Riesz rearrangement inequality.) In Lieb and Loss’s book, they present the strict Riesz rearrangement inequality in Section 3, Theorem 3.9 (page 93). They say that when the functions $f,g,h$ are all nonnegative and $g$ is strictly symmetric decreasing, the Riesz rearrangement inequality holds, and equality holds iff $f$ and $h$ are translates of $f^*, h^*$. Namely, if $f,g,h$ are all nonnegative, then
$$\iint_{\mathbb{R}^n\times \mathbb{R}^n} f(x)\, g(x-y)\, h(y) \, dx\,dy
\le \iint_{\mathbb{R}^n\times \mathbb{R}^n} f^*(x)\, g^*(x-y)\, h^*(y) \, dx\,dy \tag{1}$$

and if $g$ is strictly symmetric decreasing, then equality holds only if $f=T(f^*)$ and $h=T(h^*)$ for some translation $T$. I want to ask whether, when the nonnegativity condition is removed, say $g(x)=-\ln|x|$, the equality still holds iff $f$ and $h$ are translates of $f^*, h^*$. For example, let $g(x)=-\ln|x|$, which is strictly symmetric decreasing. In this case, we know that (1) still holds. Does equality hold in (1) only if $f$ and $h$ are translates of $f^*, h^*$?

real analysis – Riesz rearrangement inequality

In Lieb and Loss’s book, they present the Riesz rearrangement inequality in Section 3, Theorem 3.9 (page 93). Note that the functions $f,g,h$ are all assumed nonnegative. I want to ask whether the nonnegativity condition can be removed, for example $g(x)=-\ln|x|$, because in some cases it fails: for instance, the fundamental solution of $-\Delta$ in $\mathbb{R}^2$ is $-\ln|x|$ (up to a constant). In this case, does the Riesz rearrangement inequality still hold?

functions – Proving Jensen’s Inequality by NOT using induction.

Let $f:I\rightarrow \mathbb{R}$ be a convex function. Show that for any integer $n$, any real numbers $x_1,x_2,\ldots, x_n\in I$ and any positive real numbers $\mu_1, \mu_2,\ldots, \mu_n$ such that $\sum_{i=1}^{n}\mu_i=1$:
$$\sum_{i=1}^{n}\mu_if(x_i)\geq f\bigg(\sum_{i=1}^{n}\mu_ix_i\bigg)$$

Now I specifically want to prove it WITHOUT using induction but I’m stuck at a point. Here’s what I’ve tried:

We know that: $$\sum_{i=1}^{n}\mu_if(x_i)\geq \mu_1f(x_1)+f(x_k)\sum_{i=2}^{n}\mu_i \quad \text{where } f(x_k)=\min_{1\leq i\leq n} f(x_i)$$

Since $\sum_{i=1}^{n}\mu_i=1$, we would have $\sum_{i=2}^{n}\mu_i=1-\mu_1$.

Now since $f$ is a convex function: $$\mu_1f(x_1)+f(x_k)\sum_{i=2}^{n}\mu_i=\mu_1f(x_1)+(1-\mu_1)f(x_k)\geq f(\mu_1x_1+(1-\mu_1)x_k)$$

Thus: $$\sum_{i=1}^{n}\mu_if(x_i)\geq f\Big(\mu_1x_1+x_k\sum_{i=2}^{n}\mu_i\Big)$$

So if I prove that $f\big(\sum_{i=1}^{n}\mu_ix_i\big)\leq f\big(\mu_1x_1+x_k\sum_{i=2}^{n}\mu_i\big)$ (if it is true), then I’m done.

But this is the step where I’m stuck. Would you please help me prove this?
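For what it’s worth, the standard induction-free route avoids comparing $\sum_i \mu_i x_i$ with $\mu_1 x_1 + x_k\sum_{i\ge2}\mu_i$ altogether: it uses a supporting line of $f$ at the single point $m=\sum_i \mu_i x_i$ (every convex function on an interval has at least one supporting line at each interior point):

```latex
\text{Let } m=\sum_{i=1}^{n}\mu_i x_i\in I \text{ and pick a slope } s \text{ with}
f(x)\ \ge\ f(m)+s\,(x-m)\qquad\text{for all } x\in I.
\text{Taking } x=x_i,\ \text{multiplying by } \mu_i>0 \text{ and summing over } i:
\sum_{i=1}^{n}\mu_i f(x_i)\ \ge\ f(m)+s\Big(\sum_{i=1}^{n}\mu_i x_i-m\Big)
= f\Big(\sum_{i=1}^{n}\mu_i x_i\Big).
```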


inequalities – “Reversed” Bernstein Inequality

I’m studying harmonic analysis by myself, and I read some online notes that introduce the Bernstein inequality. One of them mentions a reversed form of the Bernstein inequality, which is stated below:

Let $\mathbb{T} = \mathbb{R}/\mathbb{Z} = (0,1)$ be the one-dimensional torus. Assume that a function $f \in L^{1}(\mathbb{T})$ satisfies $\hat{f}(j) = 0$ for all $|j| < n$ (vanishing Fourier coefficients). Then for all $1 \leq p \leq \infty$, there exists some constant $C$ independent of $n$, $p$ and $f$, such that
$$\|f'\|_{p} \geq Cn\|f\|_{p}$$

It seems that an easier problem can be obtained by replacing $f'$ with $f''$ in the above inequality. The easier problem is addressed in the MO post below:

Does there exist some $C$ independent of $n$ and $f$ such that $\|f''\|_p \geq Cn^2 \|f\|_p$, where $1 \leq p\leq \infty$?

However, it seems that the trick of convex Fourier coefficients used in the post above no longer applies to the harder problem (lower bounding the norm of the first derivative). Any suggestions/ideas?

The meaning of a closed differential inequality

I’m currently reading Villani’s notes on hypocoercivity, and on p. 87 it states that the differential inequality (14.10):
$$\frac{d}{dt}\big(\mathcal{E}(f) - \mathcal{E}(f_\infty)\big) = -\mathcal{D}(f) \leq -K_\epsilon\big(\mathcal{E}(f) - \mathcal{E}(f_\infty)\big)^{1+\epsilon}$$
cannot in general be ‘closed’. What does it mean for a differential inequality to be closed?