## Reference Request – Distribution of $\frac{\sum_{i=1}^n X_i^4}{(\sum_{i=1}^n X_i^2)^2}$ if $X_i$ iid $N(0,1)$

Is there an existing result about the density (with respect to the Lebesgue measure on $\mathbb{R}$) of the random variable
$$\frac{\sum_{i=1}^n X_i^4}{\left(\sum_{i=1}^n X_i^2\right)^2}$$
when the $X_i$ are iid $N(0,1)$? Is it possible to get an analytic expression, or at least a simple expression, for the density function?
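No closed-form density is offered here, but the statistic is easy to simulate; a minimal Monte Carlo sketch (the exact mean $3/(n+2)$ follows from the fact that $(X_1^2,\dots,X_n^2)/\sum_j X_j^2$ is Dirichlet$(1/2,\dots,1/2)$):

```python
import random
import statistics

def ratio_stat(n, rng):
    # W = (sum of X_i^4) / (sum of X_i^2)^2 for X_i iid N(0, 1)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s2 = sum(x * x for x in xs)
    s4 = sum(x ** 4 for x in xs)
    return s4 / (s2 * s2)

rng = random.Random(0)
n = 50
samples = [ratio_stat(n, rng) for _ in range(20_000)]

# W always lies in [1/n, 1], and E[W] = 3/(n + 2) exactly
print(statistics.mean(samples), 3 / (n + 2))
```

Plotting a histogram of `samples` gives at least an empirical picture of the density being asked about.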

## Probability – limiting distribution of the "scatter matrix" $\frac{1}{n} XX^T := \frac{1}{n}\sum_{i=1}^n x_i x_i^T$ for iid $x_1, \ldots, x_n \in \mathbb{R}^p$

Let $x_1, \ldots, x_n$ be drawn iid from some "nice" (but possibly quite general!) distribution on $\mathbb{R}^p$, and let $X$ be the $n$-by-$p$ matrix formed by vertically stacking the $x_i$'s.

Question. What is the limiting distribution of the "scatter matrix" $\frac{1}{n} XX^T := \frac{1}{n}\sum_{i=1}^n x_i x_i^T$ as $n \rightarrow \infty$?

• If the $x_i$ come from a centered multivariate Gaussian, then $XX^T$ has a Wishart distribution.
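Independently of any Gaussian assumption, the law of large numbers already pins down the limit: $\frac{1}{n}\sum_i x_i x_i^T \to \mathbb{E}[x x^T]$ almost surely, with fluctuations of order $1/\sqrt{n}$. A quick numerical check, assuming (for illustration only) a centered Gaussian with a made-up covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 200_000
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])          # illustrative population covariance

xs = rng.multivariate_normal(np.zeros(p), cov, size=n)  # rows are the x_i
scatter = xs.T @ xs / n               # (1/n) * sum of x_i x_i^T

# LLN: the scatter matrix converges entrywise to E[x x^T] = cov
print(scatter)
```

So the "limiting distribution" at scale $1/n$ is a point mass at $\mathbb{E}[xx^T]$; the interesting distributional statement lives at the $\sqrt{n}$ scale.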

## Probability – Need help correcting/clarifying my reasoning about iid RVs after a first year of statistics


## Linux – Configure an IPv6 address on an interface with a static IID

I'm looking for a tool similar to rdisc6 that configures the v6 address(es) on an interface using a static IID upon receiving an RA. This is a server that must sit at a known address within a ULA. (No, I cannot use mDNS and SLAAC, because name-bound certificates exist and mDNS may not work until this interface is configured.)
If I have to, I'll extend rdisc6, but I'd rather not replicate something someone has already done.
This is on Linux (armv7) in an LXC container.
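The RA-listening part aside, the address-construction step is simple to sketch. A hypothetical helper: the prefix would come from whatever parses the RA (e.g. rdisc6 output), and the resulting address could then be assigned with `ip -6 addr add`:

```python
import ipaddress

def addr_from_prefix_and_iid(prefix, iid):
    # combine a /64 prefix (e.g. taken from a received RA) with a fixed 64-bit IID
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen != 64:
        raise ValueError("SLAAC-style IIDs assume a /64 prefix")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# hypothetical ULA prefix and a hand-picked static IID
addr = addr_from_prefix_and_iid("fd00:abcd:1234:1::/64", 0x42)
print(addr)  # fd00:abcd:1234:1::42
```

This does not replace rdisc6; it only shows the prefix-plus-IID arithmetic that such a tool would perform.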

## Probability – How can these expectation values (inner products) between functions of iid variables be obtained?

Consider the following two polynomials:
$$f_1 = x_1^2 - 1 \quad \text{and} \quad f_2 = x_1^2 x_2^2 + x_1^2 - x_2^2 + 1,$$

Both are built from Hermite polynomials, and $x_1$ and $x_2$ are independent normal variables with mean 0 and variance 1.

How can we obtain these expectations (inner products over the Hilbert space), knowing that $\langle H_i, H_j \rangle = \delta_{ij}$:

$$= 0$$

$$= 2$$

$$= 4$$

Is there a way to expand the expectation formula and compute these expectations without having to carry out the inner-product integration?
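Whatever the exact pairings on the left-hand sides above were, such expectations can be computed termwise from independence and the standard normal moments $\mathbb{E}[Z^{2k}] = (2k-1)!!$, with no explicit integration. A sketch (for what it is worth, the values $0$, $2$, $4$ in the question are consistent with $\mathbb{E}[f_1]$, $\mathbb{E}[f_1^2]$, and $\mathbb{E}[f_1 f_2]$):

```python
def normal_moment(k):
    # E[Z^k] for Z ~ N(0, 1): 0 if k is odd, (k - 1)!! if k is even
    if k % 2 == 1:
        return 0
    m = 1
    for j in range(k - 1, 0, -2):
        m *= j
    return m

def expect(poly):
    # poly: dict mapping (i, j) -> coefficient of x1^i * x2^j;
    # independence lets the expectation factor over x1 and x2
    return sum(c * normal_moment(i) * normal_moment(j) for (i, j), c in poly.items())

def multiply(p, q):
    # polynomial product in the same dict representation
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

f1 = {(2, 0): 1, (0, 0): -1}                        # x1^2 - 1
f2 = {(2, 2): 1, (2, 0): 1, (0, 2): -1, (0, 0): 1}  # x1^2 x2^2 + x1^2 - x2^2 + 1

print(expect(f1), expect(multiply(f1, f1)), expect(multiply(f1, f2)))
```

Equivalently, expanding each polynomial in the Hermite basis and using $\langle H_i, H_j \rangle = \delta_{ij}$ gives the same numbers without touching raw moments.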

## Probability – Joint distribution of two weighted sums of IID random variables

Let $X_1, X_2, \dots$ be independent random variables taking values in $\{-1, +1\}$, and let $a_1, b_1, a_2, b_2, \ldots \in \mathbb{R}$ be fixed, bounded, and nonzero. Let $Y_n = a_1 X_1 + \cdots + a_n X_n$ and $Z_n = b_1 X_1 + \cdots + b_n X_n$.

I am interested in understanding the joint distribution of $Y_n$ and $Z_n$ as $n \to \infty$, and more precisely in an upper bound for the probability
$$\mathbb{P}[|Y_n| \leq x \land |Z_n| \leq y]$$
that is uniform in $x, y \in \mathbb{R}$ and $n \in \mathbb{N}$. A quick calculation suggests that a bound of the form
$$\mathbb{P}[|Y_n| \leq x \land |Z_n| \leq y] = O\!\left(\frac{(1 + x)(1 + y)}{n}\right)$$
should be achievable if the sequences $a_1, a_2, \dots$ and $b_1, b_2, \dots$ are sufficiently "independent" of each other.

I could not find anything in the literature, but I suspect these kinds of problems have been thoroughly investigated, so I'm looking for pointers. Many thanks.
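The conjectured $O\big((1+x)(1+y)/n\big)$ scale is easy to probe numerically. A Monte Carlo sketch with made-up coefficient sequences, assuming uniform signs for the $X_i$:

```python
import random

def joint_small_ball(a, b, x, y, trials, rng):
    # estimate P(|Y_n| <= x and |Z_n| <= y) with X_i uniform on {-1, +1}
    hits = 0
    for _ in range(trials):
        signs = [rng.choice((-1, 1)) for _ in range(len(a))]
        yn = sum(ai * s for ai, s in zip(a, signs))
        zn = sum(bi * s for bi, s in zip(b, signs))
        hits += abs(yn) <= x and abs(zn) <= y
    return hits / trials

rng = random.Random(1)
n = 400
# hypothetical coefficient sequences: fixed, bounded, nonzero
a = [1.0 + 0.5 * (i % 3) for i in range(n)]
b = [1.0 + 0.5 * (i % 4) for i in range(n)]

p = joint_small_ball(a, b, x=2.0, y=2.0, trials=10_000, rng=rng)
print(p, (1 + 2.0) * (1 + 2.0) / n)  # estimate vs. the conjectured scale
```

With these particular sequences $Y_n$ and $Z_n$ are strongly correlated, so the estimate lands well below the conjectured bound; genuinely "independent" sequences would be the stress test.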

## Probability – Determine the CDF of a sum of continuous i.i.d. random variables

Let $X_1$ and $X_2$ be independent, identically distributed (i.i.d.) random variables with

$$P(X_i \le x) = 1 - x^{-1/3}, \qquad x \ge 1,\ i = 1, 2.$$

Find $P(X_1 + X_2 \le x)$.

I tried to find the convolution of $f_{X_1}$ and $f_{X_2}$ (the density functions of $X_1$ and $X_2$) and integrate it to get the CDF of $X_1 + X_2$; that is how I interpreted the question. When I did the integration it got very messy and I could not finish it. Am I doing something wrong?
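The convolution route is the right one, and it can be sanity-checked numerically before fighting the algebra: the density is $f(x) = \frac{1}{3}x^{-4/3}$ for $x \ge 1$, and $P(X_1 + X_2 \le x) = \int_1^{x-1} f(t)\,F(x-t)\,dt$ for $x \ge 2$. A sketch comparing that integral (trapezoidal rule) against inverse-CDF Monte Carlo, using $X = U^{-3}$ for $U$ uniform on $(0, 1]$:

```python
import random

def F(x):
    # marginal CDF: P(X_i <= x) = 1 - x^(-1/3) for x >= 1
    return 1.0 - x ** (-1.0 / 3.0) if x >= 1.0 else 0.0

def f(x):
    # marginal density: F'(x) = (1/3) * x^(-4/3) for x >= 1
    return x ** (-4.0 / 3.0) / 3.0 if x >= 1.0 else 0.0

def cdf_sum(x, steps=20_000):
    # P(X1 + X2 <= x) = integral over t in [1, x - 1] of f(t) * F(x - t) dt
    if x <= 2.0:
        return 0.0
    a, b = 1.0, x - 1.0
    h = (b - a) / steps
    total = 0.5 * (f(a) * F(x - a) + f(b) * F(x - b))
    for k in range(1, steps):
        t = a + k * h
        total += f(t) * F(x - t)
    return total * h

rng = random.Random(2)
x = 5.0
draws = 200_000
mc = sum((1 - rng.random()) ** -3 + (1 - rng.random()) ** -3 <= x
         for _ in range(draws)) / draws
print(cdf_sum(x), mc)  # the two estimates should agree closely
```

Agreement between the two numbers confirms the setup is right, so any remaining mess is purely in carrying out the integral symbolically.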

## Probability – When the interarrival times of customers are i.i.d. exponential, is the number of customers necessarily a Poisson process?

For example, suppose the customer interarrival times $U_i$ are i.i.d. $\mathrm{Exp}(\lambda)$, so that

$$P(U_i \le t) = 1 - e^{-\lambda t}.$$

The arrival time of customer $i$ is

$$T_i = \sum^i_{j=1} U_j,$$

and the number of customers who have arrived by time $t$ is

$$N(t) = \sum^\infty_{i=1} \mathbf{1}_{\{T_i \le t\}}.$$

$N(t)$ is a counting process.

$N(t)$ is called a (homogeneous) Poisson counting process if it has the four defining properties.

This definition seems to indicate that i.i.d. exponentially distributed interarrival times alone do not guarantee a Poisson process unless the four properties are verified.

1) Does this mean that the probability distribution of $N(t)$ is not determined without assumptions like the four properties?
2) If 1) is true, is there a possible process $N(t)$ other than Poisson?
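In fact, i.i.d. $\mathrm{Exp}(\lambda)$ interarrival times do characterize the homogeneous Poisson process: the four properties can be derived from (not merely assumed alongside) this construction. A simulation sketch comparing $N(t)$ against the $\text{Poisson}(\lambda t)$ prediction:

```python
import math
import random

def n_at_t(t, lam, rng):
    # count arrivals by time t when interarrival times are iid Exp(lam)
    count, time = 0, 0.0
    while True:
        time += rng.expovariate(lam)
        if time > t:
            return count
        count += 1

rng = random.Random(3)
lam, t, trials = 2.0, 3.0, 50_000
counts = [n_at_t(t, lam, rng) for _ in range(trials)]

emp_mean = sum(counts) / trials
emp_p4 = sum(c == 4 for c in counts) / trials
poi_p4 = math.exp(-lam * t) * (lam * t) ** 4 / math.factorial(4)
print(emp_mean, lam * t)   # empirical mean vs. Poisson mean lam * t
print(emp_p4, poi_p4)      # empirical P(N(t) = 4) vs. Poisson pmf
```

The empirical distribution of $N(t)$ matches $\text{Poisson}(\lambda t)$, consistent with the equivalence of the two definitions.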

## st.statistics – concentration of $X^T \eta \eta^T X \in \mathbb{R}^{d \times d}$ for i.i.d. $(x_i, \eta_i)$ and sub-Gaussian $\eta_i$

Suppose $(x_1, \eta_1), \ldots, (x_n, \eta_n)$ are $n$ i.i.d. samples in $\mathbb{R}^{d+1}$ such that $\eta_1, \ldots, \eta_n$ are $\sigma$-sub-Gaussian. Let $X \in \mathbb{R}^{n \times d}$ be the vertical stacking of the $x_i$'s and $\eta \in \mathbb{R}^n$ the vertical stacking of the $\eta_i$'s.

Are there concentration inequalities that can be used to bound the matrix $X^T \eta \eta^T X \in \mathbb{R}^{d \times d}$?

Naively, I would guess that $X^T \eta \eta^T X \preceq \sigma^2 X^T X + \text{"small term"}$ with high probability.
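One observation that helps: $X^T \eta \eta^T X = vv^T$ with $v = X^T \eta$ is rank one, and $vv^T \preceq c\,\sigma^2 X^T X$ holds exactly when $v^T (\sigma^2 X^T X)^{-1} v \le c$. When $\eta$ is Gaussian and independent of $X$, that scalar is $\eta^T P \eta / \sigma^2 \sim \chi^2_d$ (with $P$ the projection onto the column space of $X$), so it concentrates around $d$; for general sub-Gaussian $\eta$ one would expect the same order via Hanson–Wright. A numerical check under an assumed Gaussian design:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, sigma, trials = 2000, 10, 1.0, 400

# X^T eta eta^T X = v v^T is rank one, so v v^T <= c * sigma^2 * X^T X
# (in the PSD order) holds iff v^T (sigma^2 X^T X)^{-1} v <= c.
crits = []
for _ in range(trials):
    X = rng.normal(size=(n, d))
    eta = rng.normal(scale=sigma, size=n)  # Gaussian noise is sigma-sub-Gaussian
    v = X.T @ eta
    # eta^T P eta / sigma^2, distributed as chi^2_d for Gaussian eta
    crits.append(v @ np.linalg.solve(X.T @ X, v) / sigma ** 2)

print(np.mean(crits))  # concentrates around d
```

So the naive guess is on the right track, but the "small term" must absorb a factor of order $d$ (plus sub-Gaussian tail fluctuations), not just a constant.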

## pr.probability – maximum of sums of iid $X_i$'s, where $X_i$ is the difference of two exponential r.v.'s

Given $X_i = A_i - B_i$ where $A_i \sim \text{Exp}(\alpha)$ and $B_i \sim \text{Exp}(\lambda)$, define $S_k = \sum_{i=1}^k X_i$ with $S_0 = 0$, and
$$M_n = \max_{1 \leq k \leq n} S_k.$$
Can one compute the quantity $\mathbb{P}(M_n \leq x)$ explicitly? I tried, and my progress is below, but the answer is not clean…

In Feller's Introduction to Probability Theory and Its Applications, he remarks repeatedly that this type of two-sided exponential difference distribution is a rare but important case in which almost all relevant calculations can be made explicit. (V1.8 example (b) page 193; XII.2 example (b) page 395; XII.3 example (b) page 401.) Unfortunately, I could not find detailed calculations in the book.

A second reference that I have looked at is the paper "On the distribution of the maximum of sums of mutually independent and identically distributed random variables" by Lajos Takács (Adv. Appl. Prob. 1970). Takács mentions that in some special cases we can compute $\mathbb{P}(M_n \leq x)$ easily. Following his example on page 346 (where he assumed only that $X_i = A_i - B_i$ with $B_i$ exponential and $A_i$ nonnegative), I could compute that with $A_i \sim \text{Exp}(\alpha)$, $B_i \sim \text{Exp}(\lambda)$, we have
$$U(s, p) = \sum_{n=0}^\infty \mathbb{E}\left[e^{-sM_n}\right] p^n = \frac{\lambda - \frac{s\lambda}{\gamma(p)}}{\lambda - s - \frac{\lambda\alpha p}{\alpha + s}}$$
where
$$\gamma(p) = \frac{\lambda - \alpha + \sqrt{(\alpha + \lambda)^2 - 4\alpha\lambda p}}{2},$$
a zero of the denominator above. Is there a way to simplify this to get an explicit formula for $\mathbb{P}(M_n \leq x)$?
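While waiting for an explicit inversion of $U(s, p)$, $\mathbb{P}(M_n \le x)$ can at least be estimated by straightforward simulation, which is handy for checking any candidate formula. A sketch (parameter values are arbitrary, chosen with $\alpha > \lambda$ so the walk drifts downward and $M_n$ stabilizes):

```python
import random

def max_partial_sum(n, alpha, lam, rng):
    # M_n = max over 1 <= k <= n of S_k, where S_k = sum of X_i = A_i - B_i
    s, m = 0.0, float("-inf")
    for _ in range(n):
        s += rng.expovariate(alpha) - rng.expovariate(lam)
        m = max(m, s)
    return m

rng = random.Random(5)
alpha, lam, n, trials = 2.0, 1.0, 30, 20_000
samples = [max_partial_sum(n, alpha, lam, rng) for _ in range(trials)]

x = 1.0
p_est = sum(m <= x for m in samples) / trials
print(p_est)  # Monte Carlo estimate of P(M_n <= x)
```

A candidate closed form extracted from $U(s, p)$ could be compared against `p_est` over a grid of $x$ and $n$ before trusting the algebra.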