Statistics – approximation of the gamma function, decomposition

Is there an approximation of the gamma function of a sum so that the gamma function is broken down into functions of each element in the sum? Example:
$$ \Gamma(n_1 + n_2 + \dots + n_N) = f_1(n_1), f_2(n_2), \dots, f_N(n_N), $$ where the commas on the right-hand side can be replaced by some mathematical operation, the elements $n_1, \dots, n_N$ are positive integers, and only one element, e.g. $n_1$, must be a positive real.
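For two summands there is at least an exact identity of this kind via the Beta function, $\Gamma(n_1+n_2) = \Gamma(n_1)\,\Gamma(n_2)/B(n_1,n_2)$, although the factor $B(n_1,n_2)$ still couples the two elements. A minimal Python check of that identity (the values of $n_1, n_2$ below are arbitrary):

```python
from scipy.special import beta, gamma

# Beta-function identity: Gamma(a + b) = Gamma(a) * Gamma(b) / B(a, b).
# Arbitrary test values: n1 a positive real, n2 a positive integer.
n1, n2 = 2.7, 5

lhs = gamma(n1 + n2)
rhs = gamma(n1) * gamma(n2) / beta(n1, n2)
print(lhs, rhs)  # the two numbers agree up to floating-point error
```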

Graphics – Why does Schlick's approximation contain a $(1-\cos\theta)^5$ term?

The approximation writes the reflection coefficient as $$ R(\theta) = R_0 + (1-R_0)(1-\cos\theta)^5, \qquad R_0 = \left(\frac{n_1-n_2}{n_1+n_2}\right)^2. $$ Why is the exponent 5? Schlick (1994) introduces this exponent in Eq. (24) with the claim that it is the correct Fresnel approximation, but without explanation.
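One way to get a feel for the claim is to compare Schlick's formula with the exact unpolarized Fresnel reflectance. A Python sketch for an air-to-glass interface ($n_1 = 1.0$, $n_2 = 1.5$ are assumed values; total internal reflection does not occur since $n_1 < n_2$):

```python
import numpy as np

n1, n2 = 1.0, 1.5                              # assumed indices: air to glass
theta = np.linspace(0.0, np.pi / 2 - 1e-6, 7)  # incidence angles

# Schlick's approximation
R0 = ((n1 - n2) / (n1 + n2)) ** 2
schlick = R0 + (1 - R0) * (1 - np.cos(theta)) ** 5

# Exact unpolarized Fresnel reflectance (average of s- and p-polarization)
cos_i = np.cos(theta)
sin_t = n1 / n2 * np.sin(theta)   # Snell's law; no TIR since n1 < n2
cos_t = np.sqrt(1 - sin_t ** 2)
Rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
Rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
fresnel = 0.5 * (Rs + Rp)

for th, s, f in zip(np.degrees(theta), schlick, fresnel):
    print(f"{th:5.1f} deg   Schlick {s:.4f}   Fresnel {f:.4f}")
```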

Signed Dirichlet approximation theorem

Fix $\alpha \in \mathbf{R}$. The classical approximation theorem of Dirichlet says that there are infinitely many rationals $p/q$ such that
$$
\left| \alpha - \frac{p}{q} \right| < \frac{1}{q^2}.
$$

Question. Fix $\alpha \in \mathbf{R}$. Is it true that there are infinitely many rationals $p/q$ such that
$$
0 \le \alpha - \frac{p}{q} \ll \frac{1}{q^2}\,?
$$
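For numerical experiments, the continued-fraction convergents of $\alpha$ already satisfy the classical bound, and they alternate sides of $\alpha$, so every second convergent approximates from below. A small Python sketch (using $\alpha = \sqrt{2}$ as an arbitrary example; the helper below is ad hoc and limited by floating-point precision):

```python
from fractions import Fraction
from math import floor, sqrt

alpha = sqrt(2)  # arbitrary example value (irrational)

def convergents(x, n):
    """First n continued-fraction convergents p/q of x (floating-point based)."""
    h0, h1 = 1, floor(x)   # numerators  p_{-1}, p_0
    k0, k1 = 0, 1          # denominators q_{-1}, q_0
    out = [Fraction(h1, k1)]
    for _ in range(n - 1):
        x = 1.0 / (x - floor(x))   # assumes x never becomes an exact integer
        a = floor(x)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

for c in convergents(alpha, 8):
    err = alpha - c
    # err > 0 for the convergents lying below alpha; |err| * q^2 < 1 for all of them
    print(c, err, err * c.denominator ** 2)
```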

fa.functional analysis – approximation of the inductive tensor product $C(X) \bar{\otimes} C(Y)$

The following question is from Banach Algebra Techniques in Operator Theory by Ronald G. Douglas.

Suppose $X, Y$ are Banach spaces and $X \otimes Y$ is the algebraic tensor product. For $w \in X \otimes Y$, define $$ \|w\|_i = \sup\left\{ \Big| \sum_{k=1}^{n} \phi(x_k)\,\psi(y_k) \Big| : \phi \in X^*,\ \psi \in Y^*,\ \|\phi\| \le 1,\ \|\psi\| \le 1 \right\}, $$ where $w = \sum_{k=1}^{n} x_k \otimes y_k$ (one of the expressions of $w$ in $X \otimes Y$) with $x_k \in X$, $y_k \in Y$. One can check that this is a norm on $X \otimes Y$, and we let $(X \bar{\otimes} Y, \|\cdot\|_i)$ be the completion of $(X \otimes Y, \|\cdot\|_i)$.

Now assume $X, Y$ are both compact Hausdorff topological spaces, so that $(C(X), \|\cdot\|_\infty)$ and $(C(Y), \|\cdot\|_\infty)$ are Banach spaces. Show that $(C(X) \bar{\otimes} C(Y), \|\cdot\|_i)$ is isometrically isomorphic to $(C(X \times Y), \|\cdot\|_\infty)$. Here $X \times Y$ is equipped with the product topology.

Note that every norm $\|\cdot\|$ on $C(X) \times C(Y)$ is equivalent to $\|\cdot\|_1$, since both $C(X)$ and $C(Y)$ are equipped with $\|\cdot\|_\infty$ (here $\|(f_x, f_y)\|_1 = \|f_x\|_\infty + \|f_y\|_\infty$). Meanwhile one can find a homeomorphism between $C(X) \times C(Y)$ and $C(X \times Y)$, since $\|f\|_\infty \le \|f_x\|_\infty + \|f_y\|_\infty \le 2\|f\|_\infty$. So I started by looking for a direct relationship between $(C(X) \times C(Y), \|\cdot\|_1)$ and $(C(X) \bar{\otimes} C(Y), \|\cdot\|_i)$.

My questions:

Say $w \in C(X) \bar{\otimes} C(Y)$. Here I have trouble finding an upper bound for $\|w\|_i$ in terms of $\|\cdot\|_1$. I naively considered a partition of unity on $X$, say $\{P_i, i \le n\}$, so that $\sum_{i \le n} f P_i$ is a decomposition of $f$; this could then be one of the expressions of $f$ as part of $w$. I do not know whether $n$ is the maximal number of pieces into which $f$ can be decomposed.

By Krein–Milman, it is enough to consider extreme points of the unit balls of $X^*$ and $Y^*$. Before I use this, I think I need to collect enough information about $w$.
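As a finite-dimensional sanity check (not the exercise itself): if $X$ and $Y$ are finite sets, then $C(X) = \ell^\infty_m$, its dual is $\ell^1_m$, and the extreme points of the dual unit ball are $\pm$ point evaluations, so $\|w\|_i$ reduces to $\max_{x,y} |w(x,y)|$, i.e. the sup norm on $C(X \times Y)$. A short Python sketch compares this extreme-point value with a random search over the $\ell^1$ unit balls (sizes and the random matrix are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5
W = rng.normal(size=(m, n))   # w(x, y) as a matrix over the finite set X x Y

# Extreme points of the l1 unit ball are +/- standard basis vectors,
# so the sup over them is just the largest absolute entry of W.
extreme_value = np.abs(W).max()

# Random search over the l1 unit balls of phi and psi.
best = 0.0
for _ in range(20000):
    phi = rng.normal(size=m)
    psi = rng.normal(size=n)
    phi /= np.abs(phi).sum()   # normalize to the l1 unit sphere
    psi /= np.abs(psi).sum()
    best = max(best, abs(phi @ W @ psi))

print(extreme_value, best)  # the random search never exceeds the extreme-point value
```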

Reference for positive probability of convergence to a stable point of a stochastic approximation algorithm

Consider a stochastic approximation process
$$ x_{t+1} = x_t + \frac{1}{t}\bigl(g(x_t) + u_t\bigr), $$
where $(u_s)_s$ is a sequence of i.i.d. shocks.
Assume $g$ is Lipschitz, $u_t$ has finite variance, and that
$(x_s)_s$ is bounded with probability one.
Let's also assume that
$$
C = \{x \in \mathbb{R} \colon g(x) = 0\}
$$
is finite.
Define the stable points of $C$ as the points $x$ such that $g(y)(y - x) < 0$ for all $y \ne x$ close enough to $x$.
In this case it follows from results in stochastic approximation theory that $x_t$ converges a.s. to a stable point in $C$ (see, for example, Kushner and Yin, "Stochastic Approximation and Recursive Algorithms and Applications", Theorem 2.1, page 127).

Question: Do you know a reference that gives conditions under which $x_t$ converges to any given stable point in $C$ with strictly positive probability?
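Not a reference, but for intuition here is a small Python simulation of the recursion with the concrete choice $g(x) = x - x^3$ (zeros at $0, \pm 1$; under the stability condition above, $\pm 1$ are stable and $0$ is not). The noise level, horizon, and number of runs are arbitrary; the output suggests both stable points are reached with positive probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # zeros at -1, 0, 1; the points +/-1 are stable, 0 is unstable
    return x - x ** 3

def run(T=5000, x0=0.0, sigma=0.3):
    x = x0
    for t in range(1, T + 1):
        u = sigma * rng.normal()      # i.i.d. shock with finite variance
        x = x + (g(x) + u) / t        # x_{t+1} = x_t + (1/t)(g(x_t) + u_t)
    return x

finals = np.array([run() for _ in range(200)])
print("fraction of runs ending near +1:", np.mean(finals > 0.5))
print("fraction of runs ending near -1:", np.mean(finals < -0.5))
```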

Approximation of multipliers by multipliers of a smaller set

Let $X$ be a compact metric space and let $B$ be a convex balanced subset of $C(X)$ such that for every $x \in X$ there is $f \in B$ with $f(x) \ne 0$.

Let $M = \{u \in C(X) : uf \in B \text{ for all } f \in B\}$ and let $N = \{u \in C(X) : uf \in \overline{B} \text{ for all } f \in \overline{B}\}$.

Since multiplication $(f, g) \to fg$ is a continuous operation on $C(X)$, it follows that $N$ is closed in $C(X)$ and $M \subset N$. Consequently, $\overline{M} \subset N$.

Is it true that $N = \overline{M}$?

Polynomial approximation (in $L^1$ norm) at minimal cost

Define the cost of a polynomial $\sum_{i=0}^N a_i x^i$ to be $\sum_{i=0}^N |a_i|$. Let $g : (0,1) \to \mathbb{R}$ be a function to be approximated; say $g(x) = 1/x$ if $e^{-1} \le x \le 1$ and $g(x) = 0$ if $0 \le x < e^{-1}$. (This function occurs in practical contexts.) We are interested in polynomials $P_+$, $P_-$ such that $P_-(x) \le g(x) \le P_+(x)$. We define the tightness $\epsilon(P)$ of $P$ to be $\epsilon(P) = \epsilon = \int_0^1 |P(x) - g(x)|\, dx$.

For given $\epsilon > 0$ and $N$, what are the polynomials $P_+$, $P_-$ of tightness $\le \epsilon$ and minimal cost? What are these minimal costs $c_+(\epsilon, N)$, $c_-(\epsilon, N)$? What if we allow the degree $N$ to be arbitrary? (In other words, what are $c_-(\epsilon) = \inf_N c_-(\epsilon, N)$ and $c_+(\epsilon) = \inf_N c_+(\epsilon, N)$?)

(Is there an easy way to see the right order of magnitude of $c_+(\epsilon)$ and $c_-(\epsilon)$?)

(Bonus question: what happens if we allow $P_+(x)$, $P_-(x)$ to be linear combinations of fractional powers $x^r$, $r \ge 1$?)
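For fixed $N$, a discretized version of the $P_+$ problem is a linear program: minimize $\sum_i |a_i|$ subject to $P(x_j) \ge g(x_j)$ on a grid and $\int_0^1 P \le 1 + \epsilon$ (here $\int_0^1 g = 1$, and $P \ge g$ makes the tightness constraint linear in the $a_i$). A rough Python sketch with scipy.optimize.linprog; the grid, degree, and tightness budget below are arbitrary choices, and the LP may be infeasible if $\epsilon$ is too small for the chosen $N$:

```python
import numpy as np
from scipy.optimize import linprog

N, eps = 10, 0.5                    # arbitrary degree and tightness budget
xs = np.linspace(1e-3, 1.0, 400)    # grid on (0, 1]
g = np.where(xs >= np.exp(-1), 1.0 / xs, 0.0)

# Variables z = (a_0..a_N, t_0..t_N); minimize sum t_i with t_i >= |a_i|.
V = np.vander(xs, N + 1, increasing=True)    # V[j, i] = x_j ** i
moments = 1.0 / np.arange(1, N + 2)          # integral of x**i over [0, 1]

c = np.concatenate([np.zeros(N + 1), np.ones(N + 1)])
A_ub = np.vstack([
    np.hstack([-V, np.zeros_like(V)]),                   # P(x_j) >= g(x_j)
    np.hstack([moments, np.zeros(N + 1)])[None, :],      # integral of P <= 1 + eps
    np.hstack([np.eye(N + 1), -np.eye(N + 1)]),          #  a_i - t_i <= 0
    np.hstack([-np.eye(N + 1), -np.eye(N + 1)]),         # -a_i - t_i <= 0
])
b_ub = np.concatenate([-g, [1.0 + eps], np.zeros(2 * (N + 1))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
print(res.message)
print("approximate minimal cost c_+(eps, N):", res.fun)
```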

Approximation theory – are there good references on the decay rate of Legendre coefficients?

Let $P_n : (-1,1) \rightarrow \mathbb{R}$ be the $n$-th Legendre polynomial, and let
$$ a_n := \int_{-1}^1 f(t) P_n(t)\, dt $$
for some $f : (-1,1) \rightarrow \mathbb{R}$.

Are there good estimates of the decay rate of $\vert a_n \vert$? I am not familiar with this type of problem, but I think there must be many methods.

From the following similar mathoverflow question: reference for the exponential decay of Legendre coefficients, I found a paper. I also found the book by Atkinson, "Spherical Harmonics and Approximations on the Unit Sphere". After reading these references, it appears that using the smoothness of $f$ is one method. But the application I have in mind is the case $f(t) = \arccos^2(t)$, which is not smooth enough.

So I wonder whether there are other references that explain different methods of estimating the decay rate of $\vert a_n \vert$, especially techniques that can be used for less smooth $f$. I would really like to know.
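As a purely numerical illustration (not a reference), the coefficients for $f(t) = \arccos^2(t)$ can be computed with Gauss-Legendre quadrature to get a feel for the decay; the quadrature order below is an arbitrary choice and limits how far out the values can be trusted:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

# f(t) = arccos(t)**2, the function from the question
def f(t):
    return np.arccos(t) ** 2

# High-order Gauss-Legendre quadrature on (-1, 1)
nodes, weights = leggauss(2000)
fvals = f(nodes)

coeffs = {}
for n in (1, 2, 4, 8, 16, 32, 64):
    coeffs[n] = np.sum(weights * fvals * eval_legendre(n, nodes))
    print(f"n = {n:3d}   |a_n| ~ {abs(coeffs[n]):.3e}")

# Crude algebraic-decay exponent from the last two entries: |a_n| ~ n^(-s)
s = np.log(abs(coeffs[32] / coeffs[64])) / np.log(2.0)
print("rough decay exponent s:", s)
```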

Approximation – doubts about the integrality gap and LP relaxation

I have an exercise which tells me that, for a given problem P (whose description I am omitting here), there is no integrality gap between the LP and ILP formulations of this problem, and for each fractional feasible LP solution there is an integral feasible solution of the same cost. Then I have to design a poly-time algorithm that, given an optimal LP solution, computes such an optimal integral assignment.

What does the fact that there is no integrality gap mean in this case? I mean, when I design an approximation algorithm to compute an optimal integral assignment from a given optimal LP solution, I get a solution with an objective value of the form (some factor) × (optimal value of the LP problem). Isn't that contrary to the fact that there is no integrality gap? It should mean that the optimal values of the LP and the ILP are the same, no?

Please clarify my doubts.
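As a concrete illustration with a different, standard problem (since P is omitted here): the assignment problem has no integrality gap, so its LP relaxation and the ILP have the same optimal value, and a vertex optimal LP solution is already integral. A small Python sketch compares the LP optimum from scipy.optimize.linprog with a brute-force ILP optimum; the cost matrix is arbitrary:

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])   # arbitrary assignment costs
n = C.shape[0]

# ILP optimum by brute force over all assignments (permutations)
ilp_opt = min(sum(C[i, p[i]] for i in range(n)) for p in permutations(range(n)))

# LP relaxation: min sum c_ij x_ij, each row and column sums to 1, x >= 0
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    for j in range(n):
        A_eq[i, i * n + j] = 1.0         # row i sums to 1
        A_eq[n + j, i * n + j] = 1.0     # column j sums to 1
b_eq = np.ones(2 * n)
res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

print("LP optimum :", res.fun)
print("ILP optimum:", ilp_opt)   # equal values: no integrality gap here
```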

real analysis – best linear approximation

I am trying to solve the following problem. Take a smooth function $f$ defined on $\mathbb{R}^n$ with values in $\mathbb{R}$. Let $Q$ be the cube with corner points $(\pm 1, \ldots, \pm 1)$. Let $\psi$ be a piecewise smooth (or smooth, if you prefer) function on $\mathbb{R}^n$; we think of $\psi$ as defining a measure $\psi\, dx$. Let $p$ be a positive integer (you may assume it to be large, and even or odd if necessary).

What is the linear function that minimizes the $L^p$-distance from $f$ on the cube $Q$, with respect to the measure $\psi\, dx$? I am hoping for an answer that depends on differentiating $f$ in some way and integrating against $\psi\, dx$ over the cube.

Philosophy aside, I want to find a $v$ that minimizes

$$ T_{n, \psi}(v) = \left( \int_{Q} \bigl(f(x) - v^t x\bigr)^p \,\psi\, dx \right)^{1/p} $$
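For testing candidate closed-form answers numerically, the minimizer can be approximated directly. A rough Python sketch using Monte Carlo integration over the cube; the dimension, exponent, test function $f$, and weight $\psi$ below are all arbitrary choices, and $|f(x) - v^t x|^p$ is used so that odd $p$ also makes sense:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 3, 4                                      # arbitrary dimension and exponent

def f(x):
    return np.sin(x[:, 0]) + x[:, 1] * x[:, 2]   # arbitrary smooth test function

def psi(x):
    return 1.0 + 0.5 * x[:, 0] ** 2              # arbitrary nonnegative weight

# Monte Carlo sample of the cube Q = [-1, 1]^n
X = rng.uniform(-1.0, 1.0, size=(200_000, n))
fX, wX = f(X), psi(X)

def T(v):
    # discretized ( integral_Q |f(x) - v.x|^p psi dx )^(1/p)
    resid = np.abs(fX - X @ v) ** p
    return (np.mean(resid * wX) * 2 ** n) ** (1.0 / p)

res = minimize(T, x0=np.zeros(n), method="Nelder-Mead")
print("best linear v:", res.x, "   T(v):", res.fun)
```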