pr.probability – Poincaré inequality for martingales


This is a vague question but here we go: Is there a form of the Poincaré inequality that is better suited for martingales?

For example, the Poincaré inequality for the boolean cube states that for any function $f:\{-1,1\}^n \rightarrow \mathbb{R}$ we have $$\mathrm{Var}(f(\epsilon)) \le \mathbb{E}\,\|Df(\epsilon)\|_2^2,$$ where we use the uniform measure on the cube and define the partial derivative vector $Df(\epsilon)$ via $$D_i f(\epsilon) = \frac{f(\epsilon_1, \ldots, \epsilon_i, \ldots, \epsilon_n) - f(\epsilon_1, \ldots, -\epsilon_i, \ldots, \epsilon_n)}{2},$$ i.e., flipping just the $i$th bit.
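For concreteness, here is a small brute-force check of this inequality (my own sketch, not part of the standard statement; the random function `f` and the dimension `n = 4` are arbitrary choices):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
cube = list(itertools.product([-1, 1], repeat=n))   # all 2^n points of {-1,1}^n
f = {x: rng.normal() for x in cube}                 # an arbitrary f : {-1,1}^n -> R

def flip(x, i):
    """Flip the i-th bit of x."""
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

values = np.array([f[x] for x in cube])
var_f = values.var()                                # Var(f) under the uniform measure

# E ||Df||_2^2 with D_i f(x) = (f(x) - f(x with bit i flipped)) / 2
grad_sq = np.mean([sum(((f[x] - f[flip(x, i)]) / 2) ** 2 for i in range(n))
                   for x in cube])

print(var_f, grad_sq)   # Poincaré: var_f <= grad_sq
```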

Now suppose that $(M_i)_i$ is a martingale adapted to the filtration generated by $\epsilon_1, \ldots, \epsilon_n$ (uniform on the cube) and we are interested in the variance of $g(M_n)$. While this is a function of $\epsilon$ and we can apply the Poincaré inequality, the discrete derivative above does not seem to be the appropriate quantity to bound the variance.

Concretely, the Poincaré inequality gives the right bound for linear functions: $$\mathrm{Var}\Big(\sum_i a_i \epsilon_i\Big) \le \|a\|_2^2.$$ But consider a Paley-Walsh martingale $$M_n = \sum_{i \le n} V_i \epsilon_i,$$ where $V_i$ is a function of $\epsilon_1, \ldots, \epsilon_{i-1}$. While orthogonality gives a similar expression to the one above, $$\mathrm{Var}(M_n) = \sum_i \mathbb{E}(V_i^2),$$ this does not seem to be captured by applying the above Poincaré inequality with $f = M_n$ (notice $D_i M_n$ is not $V_i \epsilon_i$, because flipping $\epsilon_i$ may change $V_{i+1}, \ldots, V_n$).
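To make the gap concrete, here is a brute-force computation on a toy example (my own sketch; the particular choice $V_i = \epsilon_1 \cdots \epsilon_{i-1}$ with $n = 3$ is just an illustration): the variance equals $\sum_i \mathbb{E}(V_i^2) = 3$, while the Poincaré bound $\mathbb{E}\|DM_n\|_2^2 = 6$ overshoots, precisely because $D_i M_n$ also picks up the change in the later $V_j$'s.

```python
import itertools
import numpy as np

n = 3

def M(eps):
    # Paley-Walsh martingale: V_i is a function of eps_1, ..., eps_{i-1};
    # here (toy choice) V_i = eps_1 * ... * eps_{i-1}, with V_1 = 1.
    V = [np.prod(eps[:i]) for i in range(n)]        # empty product = 1
    return sum(V[i] * eps[i] for i in range(n))

def flip(x, i):
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

cube = list(itertools.product([-1, 1], repeat=n))
values = np.array([M(x) for x in cube])

var_M = values.var()                                # = sum_i E[V_i^2] = 3
poincare_bound = np.mean([sum(((M(x) - M(flip(x, i))) / 2) ** 2 for i in range(n))
                          for x in cube])           # = 6: D_i M_n "sees" the future V_j's

print(var_M, poincare_bound)
```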

The problem seems to be that in the martingale case changing one bit changes the whole future of the martingale difference sequence; this makes the discrete gradient $D M_n$ too big compared to the actual size of the increments of the martingale.

Any pointers to the right way of looking at this are appreciated!