# Tag: prprobability

## pr.probability – Does $Z = 3[(X-\mu)/\sigma]^2$ follow a chi-square distribution with 3 degrees of freedom?


## pr.probability – Space of functions and the Coordinate process

I have the following question:

Let a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with a stochastic process $X$ be given and define $(\mathcal{F}_t) = \sigma(X_t;\, t \geq 0).$

Further, we have the map $\phi: \Omega \rightarrow \mathbb{R}^{(0,\infty)}$ given by $\phi(\omega) = (t \mapsto X_t(\omega)).$ Let $Y$ denote the coordinate process on $\mathbb{R}^{(0,\infty)}$. Is it possible to show that $\phi^{-1}(\sigma(Y_t;\, t \geq 0)) = (\mathcal{F}_t)$, i.e. that for every $F:\Omega \rightarrow \mathbb{R}$ which is $(\mathcal{F}_t)$-measurable there exists an $f:\mathbb{R}^{(0,\infty)} \rightarrow \mathbb{R}$ with $F(\omega) = f(\phi(\omega))$?

## pr.probability – Relating sequence with or without replacement

I derived a relationship between sequences drawn with and without replacement for an application in genetics. The proof is easy enough, but I would rather cite a source than provide a derivation of a well-known result. However, I can’t seem to find one.

Simplified problem setup:

You have a stack of 52 cards from which you draw uniformly at random *with* replacement until an arbitrary condition is met (say, you picked the ace of spades). This generates sequences with replacement.

The order in which cards are *first* picked defines a random sequence without replacement.

An alternative way of generating the same distribution of our original sequences *with* replacement is as follows:

First draw this “order of first sampling” by generating a random permutation of the cards (i.e., shuffle the deck).

Then you pick cards from the top of the deck and keep a pile of previously drawn cards. Suppose that by the time you have completed the $n$th draw, the “previously picked” pile has $c(n)$ cards. We then pick uniformly at random from the “previously picked” pile with probability $c(n)/52$, and from the top of the original deck with probability $1-c(n)/52$.

Again, I don’t want a proof that the alternative way is equivalent to the original, I am just wondering whether people know a name or reference for this.
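Not a reference, but the equivalence is easy to check numerically. A minimal simulation sketch (card `0` standing in for the ace of spades) comparing, under both schemes, the number of distinct cards seen when the target card first appears:

```python
import random

# Simulation sketch (not the requested reference): card 0 stands in for the
# ace of spades; we compare the number of distinct cards seen when it first
# appears under the two sampling schemes.
N = 52

def draw_direct():
    """Draw uniformly with replacement until card 0 appears; return the
    number of distinct cards seen (including card 0)."""
    seen = set()
    while True:
        c = random.randrange(N)
        seen.add(c)
        if c == 0:
            return len(seen)

def draw_via_permutation():
    """Alternative scheme: fix the order of first appearances by shuffling,
    then resample from the 'previously picked' pile with probability c(n)/N."""
    deck = list(range(N))
    random.shuffle(deck)
    seen = []
    while True:
        if seen and random.random() < len(seen) / N:
            c = random.choice(seen)      # repeat a previously seen card
        else:
            c = deck[len(seen)]          # first pick: next card of the shuffle
            seen.append(c)
        if c == 0:
            return len(seen)

trials = 20_000
m1 = sum(draw_direct() for _ in range(trials)) / trials
m2 = sum(draw_via_permutation() for _ in range(trials)) / trials
print(m1, m2)  # the two means agree up to Monte Carlo noise
```

In the second scheme the number of distinct cards is just the position of card 0 in the shuffle, so both means should land near $53/2 = 26.5$.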

## pr.probability – Distribution of points transformed by a family of polynomials

Consider a family of polynomials $\mathcal{F}$. Let $p$ be a single complex point or a finite set of complex points inside the unit disk.

I am interested in what we can say about the distribution of

$$\{ f(p) : f \in \mathcal{F} \},$$

where $\mathcal{F}$ is a family of polynomials all of whose roots lie in the unit disk.

Let me make precise what I mean by the distribution. Let $M$ be a large number, divide $(-1, 1)$ into $2M$ equal segments, and call the resulting set of grid points $\mathcal{M}$. Let

$$\mathcal{R}= \{r_1+i r_2 \in \mathcal{M} + i\mathcal{M} : r^2_{1}+ r^2_{2}<1 \},$$

which is the set of lattice points inside the unit disk generated by $\{1/M, i/M\}$. Let

$$\mathcal{F} = \{f \text{ is a polynomial of degree } < X \text{ such that if } f(r)=0 \text{ then } r \in \mathcal{R} \},$$

which is an “approximation” of the family of polynomials with roots inside the unit disk. The size of $\mathcal{F}$ is roughly $|\mathcal{R}|^X$.

Assume that $p$ is a fixed point inside the unit disk and define

$$\mathcal{T} = \{f(p) : f \in \mathcal{F} \}.$$

Now define the density function as

$$\text{pdf}(x)= \frac{\#\{t \in \mathcal{T} : |t|< x\}}{\#\{t \in \mathcal{T}\}}.$$

My questions are:

-What does $\text{pdf}$ look like?

-Does it depend on the initial point $p$?

-What happens if we change the family of polynomials?

I wrote code that suggests it may be normally distributed; however, computationally it is hard to go above degree $4$.
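For what it is worth, sampling from $\mathcal{F}$ rather than enumerating it pushes past the degree barrier. A sketch, under the assumptions (not fixed by the question) that the polynomials are monic and each of the $X$ roots is drawn independently and uniformly from $\mathcal{R}$:

```python
import random

# Sampling sketch: instead of enumerating F, draw random members of it.
# Assumptions (not from the question): polynomials are monic, and each of
# the X roots is drawn independently and uniformly from R.
M = 10          # grid resolution
X = 4           # degree (number of roots)
p = 0.3 + 0.2j  # a fixed point inside the unit disk

# lattice points of the grid (1/M)(Z + iZ) inside the unit disk
R = [complex(a / M, b / M)
     for a in range(-M, M + 1) for b in range(-M, M + 1)
     if (a / M) ** 2 + (b / M) ** 2 < 1]

def sample_f_at_p():
    """Pick X roots uniformly from R and evaluate the monic polynomial at p."""
    val = 1 + 0j
    for _ in range(X):
        val *= p - random.choice(R)
    return val

mags = sorted(abs(sample_f_at_p()) for _ in range(100_000))

def pdf(x):
    """Empirical version of the question's pdf(x) = #{t : |t| < x} / #T."""
    return sum(1 for m in mags if m < x) / len(mags)

print(pdf(0.5), pdf(1.0), pdf(2.0))
```

Two remarks, for what they are worth: as written, $\text{pdf}$ is a cumulative distribution function rather than a density; and under this sampling $\log|f(p)| = \sum_i \log|p - r_i|$ is a sum of i.i.d. terms, so the central limit theorem would suggest $|f(p)|$ is approximately *log*-normal as $X$ grows.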

## pr.probability – Random walk always stays below a level $a$

Suppose we have a random walk $S_n$ with i.i.d. steps $X_i$ and

$$\mathbb{E}(X_i) = -\mu,$$

where $\mu$ is close to zero. (We can also assume $X_i$ has exponential tails.)

Fix a constant $a\geq 1$. Are there any estimates in the literature for the probability that the random walk always stays below $a$, i.e.

$$\mathbb{P}\big\{\max_{n\geq 0} S_n \leq a\big\}?$$

(I believe the upper bound should be $Ca\mu$.)

For the special case where we replace $a$ by $0$, we have

$$\mathbb{P}\big\{\max_{n\geq 0} S_n \leq 0\big\} \leq C\mu,$$

which essentially follows from the Sparre Andersen theorem.
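Such estimates are easy to probe numerically. A Monte Carlo sketch, assuming Gaussian steps $X_i \sim N(-\mu,1)$ (which have exponential tails) and truncating the infinite horizon, late crossings of $a$ being unlikely under the negative drift:

```python
import random

def stays_below(a, mu, n_steps=1500):
    """One sampled path: does S_n stay <= a over the (truncated) horizon?"""
    s = 0.0
    for _ in range(n_steps):
        s += random.gauss(-mu, 1.0)  # assumed step law: Normal(-mu, 1)
        if s > a:
            return False             # level crossed; stop early
    return True

def estimate(a, mu, trials=2000):
    return sum(stays_below(a, mu) for _ in range(trials)) / trials

# the probability should grow roughly linearly in mu for small mu
for mu in (0.05, 0.2):
    print(mu, estimate(a=1.0, mu=mu))
```

For comparison, for Brownian motion with drift $-\mu$ and unit variance the probability is exactly $1 - e^{-2\mu a} \approx 2\mu a$, consistent with the conjectured $Ca\mu$ bound; the discrete walk picks up an overshoot correction on top of this.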

## pr.probability – Is this problem really from the Jane Street?

Someone sent me this problem by email and claimed that it comes from Jane Street, but I can’t find it anywhere on the Jane Street website.

Let $X_i$ be chosen uniformly and independently from $(0,1)$ for $i = 0, 1, 2, 3, \ldots$ For each such $i$, define

$$Y_{i}=X_{0} \cdot X_{1}^{-1} \cdot X_{2} \cdot X_{3}^{-1} \cdots X_{i}^{(-1)^{i}}.$$

Find the probability that there exists $N$ with $Y_N < 1/2$ and $Y_i < 1$ for all $i < N$.

I think this problem is solvable from a linear algebra perspective. So, I’m looking for an iterative algorithm that can find the solution.

But, I’m not so sure how to work it out in detail.
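As a sanity check (not the iterative algorithm asked for), a Monte Carlo sketch gives a numerical value to test any approach against; the sequence is truncated once the event is decided, with a step cap guarding against paths lingering in $[1/2,1)$:

```python
import random

def one_trial(max_steps=10_000):
    """Simulate the Y_i sequence until the event is decided."""
    y = random.random()              # Y_0 = X_0, always < 1
    i = 0
    while i < max_steps:
        if y < 0.5:
            return True              # success: Y_N < 1/2 reached first
        if y >= 1.0:
            return False             # failure: some earlier Y_i >= 1
        i += 1
        x = random.random()
        # Y_i = Y_{i-1} * X_i^{(-1)^i}: divide for odd i, multiply for even i
        y = y / x if i % 2 == 1 else y * x
    return False                     # undecided within the cap; count as failure

trials = 100_000
p_hat = sum(one_trial() for _ in range(trials)) / trials
print(p_hat)
```

Conditioning on $Y_0$ alone already confines the answer to $(1/2, 5/8]$: success is immediate when $Y_0 < 1/2$, while from $Y_0 = y \in (1/2,1)$ the sequence must first survive $Y_1 = y/X_1 < 1$, which happens with probability $1-y$.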

Greg Martin suggested that this problem is a gambler’s ruin type problem, but I’m unsure.

So my question comes down to

I) Can anyone verify this problem is actually from Jane Street?

II) Any suggestions for making progress on the problem?

Thank you guys in advance.

## pr.probability – Example(s) where replacing a multivariate, discrete RV with a single, univariate RV fails

Let $X_1,\ldots,X_n,Y,Z$ be $n+2$ binary random variables and define $X=(X_1,\ldots,X_n)$. In most problems, instead of treating $X$ as $n$ distinct binary random variables, there is no loss of generality in treating $X$ as a *single* variable $U$ that takes on $2^n$ states with the same probabilities (see below for a more rigorous interpretation). For example, $Y\perp Z|X \iff Y\perp Z|U$, and quantities such as entropy and mutual information remain unchanged.

**My question:** Are there any examples where this replacement “fails”? That is, some property that holds for $(X,Y,Z)$ but doesn’t hold *mutatis mutandis* for $(U,Y,Z)$?

**What I mean by “treating $X$ as a single variable $U$”:**

More formally, let $\sigma:\{1,\ldots, 2^n\}\to \{0,1\}^n$ be a bijection and enumerate the $2^n$ possible states of $X$ by $\sigma$. We can define $U$ to be a random variable on $2^n$ states such that

$$P(U=k) = P(X=\sigma(k)).$$
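As a quick illustration of the invariances that do hold, a sketch (with an assumed $n=3$ and a randomly generated distribution, both purely for illustration) checking that the relabeling preserves entropy:

```python
import itertools, math, random

# Sketch (assumed n = 3 and a random joint distribution): relabeling
# X = (X_1, ..., X_n) as a single variable U on 2^n states leaves
# information quantities such as entropy unchanged.
n = 3
states = list(itertools.product([0, 1], repeat=n))

random.seed(0)
weights = [random.random() for _ in states]
total = sum(weights)
p_x = {s: w / total for s, w in zip(states, weights)}  # distribution of X

# a bijection between states of X and labels {0, ..., 2^n - 1}
label = {s: k for k, s in enumerate(states)}
p_u = {label[s]: q for s, q in p_x.items()}            # distribution of U

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# entropy depends only on the multiset of probabilities, not the labels
print(entropy(p_x), entropy(p_u))
```

Any counterexample to the replacement would therefore have to exploit structure that a bijective relabeling destroys, such as the component-wise (product) structure of $\{0,1\}^n$ itself.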

## pr.probability – If $(\mu_k^{\ast k})$ is tight, can we show that $(\mu_k)$ is tight as well?

Let $\mu_k$, $k\in\mathbb N$, be a sequence of measures on a Banach space $E$ such that $(\mu_k^{\ast k})_{k\in\mathbb N}$, where $\mu_k^{\ast k}$ denotes the $k$-fold convolution, is tight, i.e. for all $\varepsilon>0$ there is a compact $K\subseteq E$ such that $$\sup_{k\in\mathbb N}\mu_k^{\ast k}(K^c)<\varepsilon.\tag1$$

Are we able to show that $(\mu_k)_{k\in\mathbb N}$ is tight as well?

Maybe we can do something like this: Given $\varepsilon$ and $K$ as above, we have $$\mu_k(K^c)^k=\mu_k^{\otimes k}\left(\times_{i=1}^kK^c\right)\tag2,$$ where $\mu_k^{\otimes k}$ denotes the product measure, and $$\mu_k^{\ast k}(K^c)=\theta_k\left(\mu_k^{\otimes k}\right)(K^c)\tag3,$$ where $$\theta_k:E^k\to E\;,\;\;\;x\mapsto\sum_{i=1}^kx_i$$ and $\theta_k\left(\mu_k^{\otimes k}\right)$ denotes the pushforward measure.

Maybe we can show that $(2)$ is at most $(3)$?

## pr.probability – Divergence-free Gaussian vector field with given mean magnitude and correlation function

My general question is how one might construct an isotropic random vector field $\vec f: \mathbb{R}^3 \to \mathbb{R}^3$ which has a given mean magnitude $\mathbb{E}(\|\vec f(\vec x)\|)=\mu$ such that vector magnitudes and directions are correlated up to some length scale $l$ (beyond which the correlation goes to zero). Furthermore, we wish that $\nabla \cdot \vec f=0$, although a construction which does not satisfy this condition is already very interesting.

More precisely, we are given a vector $\vec{\mu}\in\mathbb{R}^3$ and a matrix-valued correlation function $C(\vec x_1,\vec x_2): \mathbb{R}^3 \times \mathbb{R}^3 \to M_3(\mathbb{R})$ which is isotropic, i.e. $C(\vec x_1,\vec x_2)=C(\|\vec x_1-\vec x_2\|)$. One may define a Gaussian process $\vec f(\vec x) \sim GP(\vec{\mu},C(\vec x_1,\vec x_2))$ such that $\mathbb{E}(\vec f(\vec x))=\vec\mu$ and $\mathrm{Cov}(\vec f(\vec x_1),\vec f(\vec x_2))=C(\vec x_1,\vec x_2)$. I believe it is known how to generate such a random field $\vec f$. The equivalent problem for a scalar field $f$ seems well-studied: one can, for example, draw the field first in Fourier space by scaling a white-noise field with the appropriate power spectrum $P(k)$, and then transform back to real space. However, it would be useful if someone could describe here a simple procedure for the case of $\mathbb{R}^3$, which I think is known but I haven’t found a clear description of anywhere.

**Question**: how can one generate such a random field $\vec f$ if one imposes a zero mean vector $\vec{\mu}=\vec 0$ and, in addition, a mean *magnitude* $\mathbb{E}(\|\vec f(\vec x)\|)=\mu$? And what if we also impose $\nabla \cdot \vec f=0$?
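For the divergence-free case, the standard recipe extends the scalar Fourier-space construction by projecting each mode transverse to $\vec k$ (replacing $\hat f(\vec k)$ by $\hat f - \vec k(\vec k\cdot\hat f)/|\vec k|^2$). A minimal sketch on a periodic grid; the grid size, Gaussian spectrum, and correlation length are illustrative assumptions, not from the question:

```python
import numpy as np

# Minimal sketch of the Fourier-space construction with a transverse
# (divergence-free) projection. Grid size N, the Gaussian spectrum, and
# the correlation length l are illustrative choices, not from the question.
N, l = 32, 0.2
rng = np.random.default_rng(0)

k1d = 2 * np.pi * np.fft.fftfreq(N) * N      # angular wavenumbers on the grid
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                            # avoid dividing by zero at k = 0

# three independent white-noise fields, shaped by sqrt(P(k)) in Fourier space
amp = np.exp(-0.25 * k2 * l**2)              # sqrt of a Gaussian power spectrum
F = np.stack([np.fft.fftn(rng.standard_normal((N, N, N))) * amp
              for _ in range(3)])

# transverse projection F -> F - k (k . F) / |k|^2, mode by mode
kvec = np.stack([kx, ky, kz])
F = F - kvec * (kvec * F).sum(axis=0) / k2
F[:, 0, 0, 0] = 0.0                          # kill the k = 0 mode (zero mean)

# back to real space; taking the real part keeps the spectrum transverse
f = np.real(np.stack([np.fft.ifftn(Fc) for Fc in F]))

# check: the spectral divergence i k . f_hat vanishes up to roundoff
div_spec = (kvec * np.stack([np.fft.fftn(fc) for fc in f])).sum(axis=0)
print(np.abs(div_spec).max())
```

Since $\|\vec f\|$ scales linearly under $\vec f \mapsto c\vec f$, the resulting field can afterwards be multiplied by a constant to match the prescribed mean magnitude $\mu$; the projection preserves isotropy (it commutes with rotations) and leaves the mean at zero.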