Probability – Which random walk produces a gamma distribution in the limit?

The symmetric random walk has a probability distribution given by binomial coefficients, which in the continuum limit becomes the Gaussian distribution:

$\displaystyle e^{-x^2}$
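
For reference, the standard local limit statement behind this, for the simple symmetric $\pm 1$ walk $S_n$ (de Moivre–Laplace):

$$\mathbb{P}(S_n = k) = \binom{n}{\tfrac{n+k}{2}}\, 2^{-n} \;\sim\; \sqrt{\frac{2}{\pi n}}\; e^{-k^2/(2n)}, \qquad n + k \text{ even}, \quad k = O(\sqrt{n}).$$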

For what kind of random walk is the probability distribution, in the limit, the gamma distribution:

$\displaystyle x e^{-x}$ for $x \geqslant 0$?

or, even simpler, an exponential distribution:

$\displaystyle e^{-x}$ for $x \geqslant 0$?

We are looking for something as simple as a random walk where, at a fixed time and in the continuum limit, the exponential factor $e^{-x}$ shows up. The rule for each step of the random walk should be as simple as possible. If possible, we want the steps of the random walk to be iid random variables (independent and identically distributed). Is that possible?

Thanks.

at.algebraic topology – Is the inclusion $\Delta^{op} \to \Gamma^{op} = Fin_\ast$ homotopy cofinal?

There is a canonical functor $i: \Delta^{op} \to Fin_\ast$. For example, one uses the pullback $i^\ast$ to turn a $\Gamma$-space into a simplicial space and then takes a geometric realization to obtain a delooping.

Question: Is the functor $i$ homotopy cofinal?

That is, can I compute the homotopy colimit of a $\Gamma$-space, viewed as a functor $Fin_\ast \to Top$, by precomposing with $i$ and taking the geometric realization of the resulting simplicial space? Equivalently (by the $\infty$-categorical version of Quillen's Theorem A), are the coslice categories $\langle n \rangle \downarrow i$ weakly contractible for every $\langle n \rangle \in Fin_\ast$?

I particularly like a description of $i$ that I learned from a paper by Ayala, Francis and Tanaka: think of $\Delta$ as a (non-full) subcategory of the category of one-dimensionally stratified spaces and stratified maps. Then $i$ is the functor that sends $[n]$ to its set $\langle n \rangle$ of one-dimensional strata, plus a disjoint basepoint. A morphism $f$ is sent to the map $f^\ast$ taking a one-dimensional stratum to the one-dimensional stratum (if any) into which it is mapped under $f$, and otherwise to the basepoint.
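
For comparison, the usual purely combinatorial description of $i$ (often called the cut functor), under the standard conventions: on objects $i([n]) = \langle n \rangle$, and a morphism $\alpha: [m] \to [n]$ of $\Delta$ is sent to the pointed map $\alpha^\ast: \langle n \rangle \to \langle m \rangle$ with

$$\alpha^\ast(k) = \begin{cases} j & \text{if } \alpha(j-1) < k \leq \alpha(j) \text{ for some } j \in \{1, \dots, m\}, \\ \ast & \text{otherwise.} \end{cases}$$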

Sequences and Series – Is there a deep philosophy or intuition behind the similarity between $\pi/4$ and $e^{-\gamma}$?

Here are a few examples of the similarity, taken from Wikipedia, where the expressions differ only in signs.
I have also come across other analogies.

$$\begin{align} \gamma &= \int_{0}^{1}\int_{0}^{1} \frac{x-1}{(1-xy)\ln xy}\,dx\,dy \\ &= \sum_{n=1}^{\infty} \left(\frac{1}{n} - \ln\frac{n+1}{n}\right). \end{align}$$

$$\begin{align} \ln\frac{4}{\pi} &= \int_{0}^{1}\int_{0}^{1} \frac{x-1}{(1+xy)\ln xy}\,dx\,dy \\ &= \sum_{n=1}^{\infty} \left((-1)^{n-1}\left(\frac{1}{n} - \ln\frac{n+1}{n}\right)\right). \end{align}$$

$$\begin{align} \gamma &= \sum_{n=1}^{\infty} \frac{N_{1}(n) + N_{0}(n)}{2n(2n+1)} \\ \ln\frac{4}{\pi} &= \sum_{n=1}^{\infty} \frac{N_{1}(n) - N_{0}(n)}{2n(2n+1)}, \end{align}$$

where (in the Wikipedia list these are taken from) $N_1(n)$ and $N_0(n)$ presumably denote the number of 1s and 0s in the binary expansion of $n$.
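
As a quick numerical sanity check of the first pair of series above, here is a minimal C sketch; the cutoff of $10^6$ terms and the printed reference values are just for illustration:

#include <stdio.h>
#include <math.h>

/* Partial sums of the two series quoted above:
   gamma    = sum_{n>=1} (1/n - ln((n+1)/n))
   ln(4/pi) = sum_{n>=1} (-1)^(n-1) (1/n - ln((n+1)/n)) */
int main(void) {
    const double pi = 4.0 * atan(1.0);
    double s_gamma = 0.0, s_ln4pi = 0.0;
    for (long n = 1; n <= 1000000; n++) {
        double term = 1.0 / n - log((n + 1.0) / n);
        s_gamma += term;
        s_ln4pi += (n % 2 == 1) ? term : -term;
    }
    printf("series 1: %.6f   (gamma    = 0.577216)\n", s_gamma);
    printf("series 2: %.6f   (ln(4/pi) = %.6f)\n", s_ln4pi, log(4.0 / pi));
    return 0;
}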

I wonder whether there is an algebraic system in which $4e^{-\gamma}$ would play a role similar to the one $\pi$ plays, for example in the complex numbers, or a geometric system in which $4e^{-\gamma}$ would play a special role, like $\pi$ in Euclidean and Riemannian geometries.

Solving a polynomial equation with gamma function

I would recommend using the new M12 feature AsymptoticSolve for this. Your equation:

eqn = FD1[(d - 2)/2, ηs] + FD1[(d - 2)/2, ηs - vd] == 2 FD1[(d - 2)/2, η0];

First we need the zeroth-order approximation of ηs when vd is small:

Simplify[Solve[eqn /. vd -> 0], (η0 | vd) ∈ Reals]

{{ηs -> η0}}

Now use AsymptoticSolve:

AsymptoticSolve[eqn, {ηs, η0}, {vd, 0, 5}]

{{ηs -> vd/2 + ((-6 + d) (-2 + d) (-1 + d) vd^4)/(1536 η0^3) +
   ((2 - d) vd^2)/(16 η0) + η0}}

Addendum

If you have an earlier version of Mathematica and hence no access to AsymptoticSolve, you could try using the cloud instead. For example, define:

asymptoticSolve[args__] := CloudEvaluate[System`AsymptoticSolve[args]]

Then use asymptoticSolve instead of AsymptoticSolve.

Algorithms – Combinatorial optimization, how to choose the optimal gamma distribution?

Setup: Let $A = \{X_1, \dots, X_n\}$ be independent, but not necessarily identically distributed, Bernoulli random variables. Say you are given a set of $m+1$ weights $W = \{w_0, w_1, \dots, w_m\}$ with $m \leq n$ and $-1 \leq w_k \leq 1$ for all $0 \leq k \leq m$. The goal is to select a set $S \subset A$ of $m$ random variables, say $S = \{X_{s_1}, \dots, X_{s_m}\}$, that maximizes

$$\sum_{i=0}^{m} w_i\, \mathbb{P}\bigg(\sum_{j=1}^{m} X_{s_j} = i\bigg)$$
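
For concreteness, here is a minimal sketch of how the objective can be evaluated for one fixed choice of $S$, via the standard dynamic program for the distribution of a sum of independent Bernoulli variables; the function name and the sample numbers are made up for illustration:

#include <stdio.h>
#include <stdlib.h>

/* Evaluate sum_{i=0}^{m} w[i] * P(X_{s_1} + ... + X_{s_m} = i) for a fixed
   subset S, given its success probabilities p[0..m-1], by folding the
   variables in one at a time (Poisson-binomial dynamic program). */
double objective(const double *p, const double *w, int m) {
    double *dist = calloc(m + 1, sizeof *dist);
    dist[0] = 1.0;                       /* before adding any variable, the sum is 0 */
    for (int j = 0; j < m; j++)
        for (int i = j + 1; i >= 0; i--) /* update in place, highest count first */
            dist[i] = dist[i] * (1.0 - p[j]) + (i > 0 ? dist[i - 1] * p[j] : 0.0);
    double val = 0.0;
    for (int i = 0; i <= m; i++)
        val += w[i] * dist[i];
    free(dist);
    return val;
}

int main(void) {
    double p[] = {0.2, 0.5, 0.9};        /* P(X_{s_j} = 1) for the chosen variables */
    double w[] = {0.1, -0.3, 0.7, 1.0};  /* weights w_0, ..., w_m with m = 3 */
    printf("objective = %f\n", objective(p, w, 3));
    return 0;
}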

Question: Can the optimal solution, or even a constant-factor approximation, be computed in polynomial time?

Previous work: If we have either $w_0 \leq w_1 \leq \dots \leq w_m$ or $w_0 \geq w_1 \geq \dots \geq w_m$, then we can obtain the optimal solution by picking either the $X_i$'s with the highest probability of being 1 or those with the lowest.
But does anything work for a general $W$ bounded by $-1$ and $1$?

Graphics Programming – Additive Blending and Gamma Correction

Should additive blending (also known as light mapping) be done in linear space?

I tried doing it in linear space, and the result came out looking linear and boring and lost the cool HDR-like bloomy effect. Is there a standard method for additive blending in linear RGB, or is additive blending now an outdated gamma-space hack that should be forgotten?
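
For reference, a minimal sketch of what additive blending in linear space means for one 8-bit channel; the function name is made up, and it assumes a plain 2.2 power curve rather than the exact sRGB transfer function, with a simple clamp instead of real HDR tone mapping:

#include <math.h>
#include <stdint.h>

/* Add two gamma-encoded 8-bit channel values in linear light. */
uint8_t add_linear(uint8_t a, uint8_t b) {
    float la = powf(a / 255.0f, 2.2f);   /* decode to linear */
    float lb = powf(b / 255.0f, 2.2f);
    float sum = la + lb;
    if (sum > 1.0f) sum = 1.0f;          /* clamp; an HDR pipeline would tone-map instead */
    return (uint8_t)(powf(sum, 1.0f / 2.2f) * 255.0f + 0.5f);  /* re-encode */
}

/* The gamma-space version the question contrasts this with would simply be
   a saturating add of the encoded values: min(a + b, 255). */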

Graphics Programming – Alpha blending in software with gamma correction

How can you efficiently implement alpha blending without messing up gamma?

Alpha blending is basically the following expression:

result_color = (dst_color*src_alpha - dst_color*src_alpha*dst_alpha + src_color - src_color*src_alpha)/(1 - src_alpha*dst_alpha)
result_alpha = src_alpha*dst_alpha

For gamma-packed 8-bit RGB, I managed to implement this expression efficiently back in the early 90s:

#include <stdint.h>

/* Lookup tables, built elsewhere (presumably: ab_lut[a][x] is a precomputed
   product-table row and idiv_lut[x] a precomputed reciprocal, so that the
   division in the blend expression becomes a multiply and a shift). */
extern uint8_t  ab_lut[256][256];
extern uint32_t idiv_lut[256];

void ablend(int dr, int dg, int db, int da, int *sr, int *sg, int *sb, int *sa) {
  int ya = (da * *sa) >> 8;
  uint32_t d  = idiv_lut[255 - ya];
  uint8_t *st = ab_lut[255 - *sa];
  uint8_t *dt = ab_lut[*sa - ya];
  *sr = ((dt[dr] + st[*sr]) * d) >> 8;
  *sg = ((dt[dg] + st[*sg]) * d) >> 8;
  *sb = ((dt[db] + st[*sb]) * d) >> 8;
  *sa = ya;
}

Obviously, working with gamma-packed RGB without unpacking it first gives the wrong result (this is the most common graphics programming trap). The values must be unpacked (e.g., pow(x, 2.2)), and then ab_lut can no longer be used, as it would require 2^30 bytes, and replacing the division by a multiplication would also be impossible on a 32-bit system.
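
For concreteness, the straightforward unpacked-float fallback would look roughly like the sketch below. It implements the expression from the top of the question per channel, assumes a plain 2.2 power curve rather than the exact sRGB transfer function, and the function names are made up:

#include <math.h>
#include <stdint.h>

static float dec(uint8_t c)  { return powf(c / 255.0f, 2.2f); }  /* gamma -> linear */
static uint8_t enc(float c)  { return (uint8_t)(powf(c, 1.0f / 2.2f) * 255.0f + 0.5f); }

/* One channel of the blend expression above, carried out in linear light:
   result = (dst*sa*(1 - da) + src*(1 - sa)) / (1 - sa*da) */
static float blend1(float dst, float src, float sa, float da) {
    float denom = 1.0f - sa * da;
    if (denom <= 0.0f) return src;                 /* degenerate case: sa*da == 1 */
    return (dst * sa * (1.0f - da) + src * (1.0f - sa)) / denom;
}

void ablend_float(uint8_t dr, uint8_t dg, uint8_t db, float da,
                  uint8_t *sr, uint8_t *sg, uint8_t *sb, float *sa) {
    *sr = enc(blend1(dec(dr), dec(*sr), *sa, da));
    *sg = enc(blend1(dec(dg), dec(*sg), *sa, da));
    *sb = enc(blend1(dec(db), dec(*sb), *sa, da));
    *sa = *sa * da;                                /* result_alpha = src_alpha*dst_alpha */
}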

Given that, this question implies a few sub-questions:
1. Is using float the only option for processing unpacked RGB?
2. Is there a good tutorial on alpha blending with gamma correction?
3. Is it still worthwhile (for L1-cache reasons) to keep gamma-packed sRGB rather than a simple array of four 32-bit float components R, G, B, A, or would the memory accesses become a bottleneck on most CPUs anyway?

Or maybe I just have a 90s mindset, where keeping the data small meant the difference between 1 and 60 frames per second, and today we can safely spend as many extra bytes as we need?

Laplace Transform – Sum of the lower incomplete gamma functions: $\sum_{k=1}^{\infty} \frac{(b/a)^k}{k!\,k!}\,\gamma(k+1, at)$

I have to evaluate the inverse Laplace transform
$$Q(t) = \mathcal{L}^{-1}\Big\{\frac{e^{b/s}}{s(s-a)}\Big\}(t).$$
Using the identity $\mathcal{L}^{-1}\big\{\frac{f(s)}{s-a}\big\}(t) = e^{at}\int_0^t du\, e^{-au}\,\mathcal{L}^{-1}\{f(s)\}(u)$, together with the known inverse transform $\mathcal{L}^{-1}\big\{\frac{e^{b/s}}{s}\big\}(u) = I_0(2\sqrt{bu})$, the series representation of the modified Bessel function $I_0(z) = \sum_{k=0}^{\infty} \frac{1}{k!\,k!}\big(\frac{z}{2}\big)^{2k}$, and the definition of the lower incomplete gamma function $\gamma(k,x) = \int_0^x t^{k-1} e^{-t}\,dt$, one obtains $Q(t)$ in the form
$$Q(t) = \frac{e^{at}}{a}\sum_{k=1}^{\infty} \frac{(b/a)^k}{k!\,k!}\,\gamma(k+1, at).$$
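
For readability, the intermediate step implicit in the above, i.e. how the ingredients combine before the $u$-integral is recognized as an incomplete gamma function:

$$Q(t) = e^{at}\int_0^t e^{-au}\, I_0\big(2\sqrt{bu}\big)\, du, \qquad \int_0^t u^{k} e^{-au}\, du = \frac{\gamma(k+1, at)}{a^{k+1}}.$$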

Is that as good as it gets? Is there any approach I could use to evaluate this sum?