## Plotting a small Gaussian | Small values and dealing with machine precision

I’ve defined the following:

``````
k := 1.38*10^-16
kev := 6.242*10^8
q := 4.8*10^-10
g := 1.66*10^-24
h := 6.63*10^-27
``````

and

``````
b = (2^(3/2) Pi^2*1*6*q^2*((1*g*12*g)/(1*g + 12*g))^(1/2))/h

T6 := 20
T := T6*10^6
e0 := ((b k T6*10^6)/2)^(2/3)

\[CapitalDelta] := 4/Sqrt[3] (e0 k T6*10^6)^(1/2)

\[CapitalDelta]kev = \[CapitalDelta]*kev
e0kev = e0*kev
bkev = b*kev^(1/2)
``````

Then, I want to plot these functions:

``````
fexp1[x_] = E^(-bkev*(x*kev)^(-1/2))
fexp2[x_] = E^(-x/(k*T))
fexp3[x_] = fexp1[x]*fexp2[x]
``````

and check that this Taylor expansion works:

``````
fgauss[x_] =
 Exp[-3 (bkev^2/(4 k T*kev))^(1/3)]*
  Exp[-(x*kev - e0kev)^2/(\[CapitalDelta]kev/2)^2]
``````

which should match the expected curve: the reference plot comes from the “Stellar Astrophysics notes” of Edward Brown (it is also a well-known approximation).

I used this command to plot:

``````
Plot[{fexp1[x], fexp2[x], fexp3[x], fgauss[x]}, {x, 0, 50},
 PlotStyle -> {{Blue, Dashed}, {Dashed, Green}, {Thick, Red}, {Thick, Black, Dashed}},
 PlotRange -> Automatic, PlotTheme -> "Detailed",
 GridLines -> {{{-1, Blue}, 0, 1}, {-1, 0, 1}},
 AxesLabel -> {Automatic}, Frame -> True,
 FrameLabel -> {Style["Energía E", FontSize -> 25, Black],
   Style["f(E)", FontSize -> 25, Black]}, ImageSize -> Large,
 PlotLegends ->
  Placed[LineLegend[{"", "", "", ""},
    Background -> Directive[White, Opacity[.9]],
    LabelStyle -> {15}, LegendLayout -> {"Column", 1}], {0.35, 0.75}]]
``````

but it seems that Mathematica doesn’t like huge negative exponentials: at machine precision the exponentials underflow to zero. I know I can compute this in Python, but it’s surprising to think that Mathematica can’t deal with the problem somehow. Could you help me?
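As an aside not in the original question: the underlying issue is generic, and the standard cure in any language is to work with the logarithm of the product and exponentiate only once at the end. A minimal Python sketch; the values `b_kev` and `kT_kev` are made-up placeholders, not the constants computed above:

```python
import math

# Placeholder values for illustration only -- not the constants defined above.
b_kev = 300.0   # assumed barrier coefficient, in keV^(1/2)
kT_kev = 1.7    # assumed thermal energy k*T, in keV

def f_log(e_kev):
    """Natural log of exp(-b/sqrt(E)) * exp(-E/kT); never underflows."""
    return -b_kev / math.sqrt(e_kev) - e_kev / kT_kev

def f(e_kev):
    """The product itself, exponentiating once at the end."""
    lg = f_log(e_kev)
    # double-precision exp flushes to 0.0 below roughly lg = -745
    return math.exp(lg) if lg > -745.0 else 0.0
```

Evaluating `exp(a) * exp(b)` directly can underflow even when only the product of the two factors is representable; summing the exponents first keeps full relative accuracy in the log.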


## pr.probability – Lower-bound for $\underset{p \le \gamma_d(A) \le q}{\inf} \gamma_d(A^\epsilon)$, where $\gamma_d$ is the standard Gaussian distribution on $\mathbb{R}^d$

Let $$\gamma_d = \gamma_1 \otimes \ldots \otimes \gamma_1$$ be the standard Gaussian distribution on $$\mathbb{R}^d$$, where $$d$$ is a large positive integer. Given $$\epsilon \ge 0$$ and a measurable $$A \subseteq \mathbb{R}^d$$, let $$A^\epsilon := \{x \in \mathbb{R}^d \mid \mathrm{dist}(x,A) \le \epsilon\}$$ be its $$\epsilon$$-neighborhood, where $$\mathrm{dist}(x,A) := \inf_{a \in A} \|x-a\|$$ is the distance between $$x$$ and the closest point of $$A$$. Finally, let $$0 < p \le q \le 1$$.

Question. What is a good lower-bound for $$r_d(\epsilon,p,q) := \underset{p \le \gamma_d(A) \le q}{\inf} \gamma_d(A^\epsilon)$$ ?

Note. Understanding the special case $$p \to 0^+$$ would already be interesting. And of course, I’m perfectly fine with a bound which looks at different regimes of $$\epsilon$$, $$p$$, and $$q$$.
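A remark not in the original post: one classical dimension-free benchmark follows from the Gaussian isoperimetric inequality (Sudakov–Tsirelson; Borell): if $\gamma_d(A) \ge p$, then $\gamma_d(A^\epsilon) \ge \Phi(\Phi^{-1}(p) + \epsilon)$, where $\Phi$ is the standard normal CDF, hence $r_d(\epsilon,p,q) \ge \Phi(\Phi^{-1}(p)+\epsilon)$. A minimal sketch evaluating this bound with only the standard library:

```python
import math

def Phi(x):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-40.0, hi=40.0):
    """Inverse CDF by bisection -- crude but sufficient for a numeric sketch."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def isoperimetric_lower_bound(p, eps):
    """Dimension-free lower bound Phi(Phi^{-1}(p) + eps) on gamma_d(A^eps)."""
    return Phi(Phi_inv(p) + eps)
```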


## Bounding expectations of Gaussian integrals

I have quite a few questions regarding this paper, in particular Appendix D, which contains the proof of Proposition C.2:

With the following definitions in mind (denoting by $$\hat P \equiv \sum_{i=1}^n \delta_{X_i}/n$$ the empirical measure), I believe there is a typo and that $$\mu$$ should be a $$P$$. In any case, I am stuck on the first step of the proof. Here are the links to Maurer and Pontil (2010) and Giné and Nickl (2016), although truly (for me at least) only God knows where to find the arguments that correspond to this paper. How do you argue that $$\mathbb{E} \sup_{c,S} \big|\mathbb{E}_P f_{c,S} - \mathbb{E}_{\hat P} f_{c,S}\big| \le \frac{\sqrt{2\pi}}{n} \, \mathbb{E} \sup_{c,S} \Big|\sum_{i=1}^n g_i f_{c,S}(X_i)\Big| \quad \textbf{?}$$
I am genuinely and absolutely lost.

I understand the rest of the proof except for the inequalities above and one further display (shown as an image in the original post); in particular, its first and third inequalities make no sense to me. If at all possible, any help would be massively appreciated. I will gladly set a very large bounty if someone can help me understand what’s going on.


## pr.probability – The integral of a Gaussian process on a unit sphere

Suppose there exists a zero-mean Gaussian process $$\mathbb{G} f_u$$ indexed by $$u \in \mathcal{S}^{p-1}$$, with known covariance $$\mathrm{E}\big[\mathbb{G} f_u \, \mathbb{G} f_v\big]$$ for any given $$u$$ and $$v$$, where $$\mathcal{S}^{p-1}$$ is the unit sphere in $$\mathbb{R}^p$$. Now I want to know what exactly the integral
$$\int_{\mathcal{S}^{p-1}} \mathbb{G} f_u \, du$$
is. This is the integral of a Gaussian process over the unit sphere. I have tried my best to find articles about it, but I cannot find any useful information.

Can anyone help me with how to handle this integral, or point me to some literature about it? Thanks so much!
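A remark not in the original post: under mild regularity, the integral of a mean-zero Gaussian process is itself a mean-zero Gaussian random variable whose variance is the double integral of the covariance, $\iint \mathrm{E}[\mathbb{G}f_u \, \mathbb{G}f_v]\, du\, dv$. A discretized sketch on the circle $\mathcal{S}^1$, with an assumed squared-exponential covariance (the kernel is an illustration, not from the question):

```python
import math

def cov(theta_u, theta_v):
    """Assumed covariance kernel on S^1 (angles in radians), illustration only."""
    d = 2.0 * math.sin(abs(theta_u - theta_v) / 2.0)  # chordal distance
    return math.exp(-d * d)

def integral_variance(n=400):
    """Riemann-sum approximation of Var(∫_{S^1} G f_u du) = ∬ cov(u,v) du dv."""
    w = 2.0 * math.pi / n                # uniform quadrature weight
    thetas = [w * i for i in range(n)]
    return sum(cov(s, t) for s in thetas for t in thetas) * w * w
```

Since no sampling is involved, the variance of the discretized integral is just a double sum over the kernel; refining the grid shows rapid convergence for a smooth kernel.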


## matrix – Matlab code to perform Gaussian elimination

I am creating a MATLAB function to perform Gaussian elimination. My script is shown below; however, the code is neither efficient nor correct.

``````
% Function to reduce A x = b to upper-triangular form and back-substitute.
function [x,U] = triangleform(A,b)
% FORWARD ELIMINATION
n = length(b);
m = zeros(n,1);
x = zeros(n,1);
for k = 1:n-1
    % compute the multipliers for column k
    m(k+1:n) = A(k+1:n,k)/A(k,k);
    % update the trailing submatrix and the right-hand side:
    % An = Mn*An-1;  bn = Mn*bn-1;
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - m(k+1:n)*A(k,k+1:n);
    b(k+1:n) = b(k+1:n) - b(k)*m(k+1:n);
end
U = triu(A);
% BACK SUBSTITUTION
x(n) = b(n)/U(n,n);
for k = n-1:-1:1
    b(1:k) = b(1:k) - x(k+1)*U(1:k,k+1);
    x(k) = b(k)/U(k,k);
end
end
``````
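For comparison, and not part of the original MATLAB code: a pure-Python sketch of the same algorithm with partial pivoting added, which guards against the zero or tiny pivots that the version above would divide by:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of row-lists, b a list; both are copied, not modified.
    """
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring up the row with the largest |A[i][k]|
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x
```

The pivoting step is what makes elimination robust in practice; without it, a zero in position `A(k,k)` crashes the MATLAB version even for well-conditioned systems.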


## probability or statistics – Plotting Gaussian using a formula

I am able to plot a Gaussian distribution with mean 10 and standard deviation 2 using the code below.

``````
ListLinePlot[Table[PDF[NormalDistribution[10, 2], x], {x, 0, 20}],
 PlotMarkers -> {Automatic, 10}, PlotStyle -> Blue, Frame -> True,
 FrameStyle -> Directive[Black, 15]]
``````

But when I use a formula instead, the following code produces no plot:

``````
\[Lambda] = .125
a = 10
\[Rho][x_] := A*Exp[-\[Lambda]*(x - a)^2]
Replace[\[Rho][x], A -> Sqrt[\[Lambda]]/Sqrt[Pi], All]
Plot[\[Rho][x], {x, 0, 20}]
``````
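As an aside not in the original post: independent of the Mathematica syntax issue, the intended normalization $A=\sqrt{\lambda/\pi}$ can be sanity-checked numerically. A small Python sketch:

```python
import math

lam, a = 0.125, 10.0
A = math.sqrt(lam / math.pi)          # normalizing constant of the Gaussian

def rho(x):
    return A * math.exp(-lam * (x - a) ** 2)

# Trapezoidal integral over a wide window; should be very close to 1.
n, lo, hi = 4000, -40.0, 60.0
h = (hi - lo) / n
total = sum(rho(lo + i * h) for i in range(n + 1)) - 0.5 * (rho(lo) + rho(hi))
total *= h
```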


## A Gaussian expectation

For real numbers $$a, b, c, \sigma$$, is the following expectation known to be exactly computable?

$$\mathbb{E}_{x \sim \mathcal{N}(0,\sigma)}\big(\max\{0,\, ax^2 + bx + c\}\big)$$

If you know of multivariate extensions of this, kindly let me know!
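An aside not from the question: since $\max\{0, q(x)\} = q(x)\,\mathbf{1}\{q(x) > 0\}$, the expectation reduces to Gaussian moments truncated at the real roots of the quadratic, so it should be expressible through the normal CDF and PDF. For cross-checking any closed form, here is a Monte Carlo sketch, assuming $\mathcal{N}(0,\sigma)$ denotes standard deviation $\sigma$:

```python
import math
import random

def mc_expectation(a, b, c, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of E[max(0, a x^2 + b x + c)], x ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        total += max(0.0, a * x * x + b * x + c)
    return total / n
```

For example, with `a=1, b=0, c=0` the exact value is $\sigma^2$, a quick check on the estimator.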

## linear algebra – Limit spectrum of composition of projections onto random Gaussian vectors

Let $$n > p$$, and let $$X \in \mathbb{R}^{n \times p}$$ be a matrix whose columns $$X_1, \ldots, X_p$$ are zero-mean Gaussian with covariance $$(\rho^{|i-j|})_{i,j \in [p]}$$, for some $$\rho \in (0,1)$$.

Are there (asymptotics or not) known results on the eigenvalue distribution of:

$$\left(\mathrm{Id} - \tfrac{1}{\|X_1\|^2} X_1 X_1^\top\right) \cdots \left(\mathrm{Id} - \tfrac{1}{\|X_p\|^2} X_p X_p^\top\right) \enspace ?$$

From a geometric point of view, this is the matrix of the map that projects sequentially onto the orthogonal complement of the span of $$X_p$$, then onto that of $$X_{p-1}$$, and so on; hence all eigenvalues lie in the unit disk, 0 is an eigenvalue, and 1 is also an eigenvalue since $$n > p$$.
I would expect the spectrum to “move towards” 1 as $$\rho$$ increases, but are there any quantitative results on that?


## gaussian – Model of Dithered lattice quantization error/noise

I have a non-temporal vector of values that I need to quantize for data compression. I’m using K-level lattice quantization with subtractive dither, which in theory makes the quantization noise uniform and independent of the input distribution. Now I need to model the quantization noise, and I’m having trouble figuring that out. I suspect I cannot just assume that the noise is “uniform Gaussian”? This paper by Zamir and Feder calculates the “divergence from Gaussianity” of the quantization noise under subtractive dither, but I can’t turn it into a concrete formula for the noise at a finite value of K.
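A sketch not from the post: for an unbounded (infinite-level) scalar lattice, subtractive dither makes the reconstruction error exactly uniform on $(-\Delta/2, \Delta/2)$ and independent of the input. The finite-$K$ difficulty is the overload region at the lattice edges, which this toy model deliberately ignores:

```python
import random

def subtractive_dither_quantize(x, delta, rng):
    """Scalar lattice (step delta) quantization with subtractive dither.

    Dither d ~ Uniform(-delta/2, delta/2) is added before rounding and
    subtracted at the decoder; for an unbounded lattice the reconstruction
    error is then uniform on (-delta/2, delta/2), independent of x.
    """
    d = rng.uniform(-delta / 2, delta / 2)
    q = delta * round((x + d) / delta)   # nearest lattice point
    return q - d                          # subtract the dither at the decoder

rng = random.Random(42)
delta = 0.5
errors = [subtractive_dither_quantize(x, delta, rng) - x
          for x in (rng.gauss(0, 3) for _ in range(50_000))]
```

The empirical error mean and variance match the uniform model ($0$ and $\Delta^2/12$) even though the input here is Gaussian; with a finite K-level lattice, inputs falling outside the support break this exactness.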


## reference request – (random fields / Gaussian processes): On rewriting a certain expectation as a kernel function

Let $$v = (v_1,\ldots,v_n)$$ and $$w = (w_{1,1},\ldots,w_{1,m},\ldots,w_{n,m})$$ be random vectors with iid coordinates, with $$v$$ independent of $$w$$, where $$w_{i,j} \sim N(0,1/m)$$ and $$v_i \sim N(0,1/n)$$, for $$i=1,\ldots,n$$, $$j=1,\ldots,m$$. Define a random function $$f_{w,v}:\mathbb{R}^m \to \mathbb{R}$$ by $$f_{w,v}(x) := \sum_{i=1}^n v_i \phi\big(\sum_{j=1}^m w_{i,j} x_j\big)$$, where $$\phi(t) := \max(t,0)$$. Fix points $$a, b \in \mathbb{R}^m$$.

Question. How does one go about computing
$$p_{m,n}(a,b) := \mathbb{P}\big(f_{w,v}(a)\, f_{w,v}(b) > 0\big),$$
or even just $$\lim_{n \to \infty}\lim_{m \to \infty} p_{m,n}(a,b)$$?

Note that $$p_{m,n}(a,b)$$ is simply the probability that the random field $$x \mapsto f_{w,v}(x)$$ has the same sign at $$x=a$$ and $$x=b$$.

## Observations

• Conditioned on $$w$$, we can compute (see this math.SE post)
$$\mathbb{P}\big(f_{w,v}(a)\, f_{w,v}(b) > 0 \,\big|\, w\big) = \kappa_0(\phi_w(a),\phi_w(b)),$$
where $$\phi_w(x) := \big(\phi(\sum_{j=1}^m w_{i,j} x_j)\big)_{1 \le i \le n} \in \mathbb{R}^n$$, and $$\kappa_0(z,z') := 1-\frac{1}{\pi}\arccos\big(z^\top z'/\|z\|\|z'\|\big) \in (0,1)$$ is the arc-cosine kernel of order $$0$$.
• Thus, we have the identity

$$p_{m,n}(a,b) = \mathbb{E}_w\big[\kappa_0(\phi_w(a),\phi_w(b))\big].\tag{2}$$

Question. Can formula (2) be simplified further? Is it linked to some other kernels?
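Not from the original post: identity (2) is easy to check numerically, since the inner expectation over $v$ is already carried out by $\kappa_0$. A pure-Python Monte Carlo sketch for small $m$ and $n$:

```python
import math
import random

def kappa0(z1, z2):
    """Order-0 arc-cosine kernel: 1 - arccos(cosine similarity)/pi."""
    n1 = math.sqrt(sum(x * x for x in z1))
    n2 = math.sqrt(sum(x * x for x in z2))
    if n1 == 0.0 or n2 == 0.0:
        return 0.0  # degenerate: f vanishes at that point, so the event has prob. 0
    dot = sum(x * y for x, y in zip(z1, z2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return 1.0 - math.acos(cos) / math.pi

def estimate_p(a, b, n=8, trials=2000, seed=1):
    """Monte Carlo estimate of E_w[kappa0(phi_w(a), phi_w(b))], i.e. p_{m,n}(a,b)."""
    m = len(a)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        phi_a, phi_b = [], []
        for _ in range(n):
            # w_{i,j} ~ N(0, 1/m); v is integrated out analytically by kappa0
            w = [rng.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(m)]
            phi_a.append(max(0.0, sum(wi * ai for wi, ai in zip(w, a))))
            phi_b.append(max(0.0, sum(wi * bi for wi, bi in zip(w, b))))
        acc += kappa0(phi_a, phi_b)
    return acc / trials
```

For instance, with $a = b$ the kernel equals 1 whenever $\phi_w(a) \ne 0$, so the estimate should be close to 1.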
