Reference Request – A Comprehensive List of Random Walk Inequalities?

I am interested in finding a comprehensive list of all known random walk inequalities,

i.e. for $S_n = \sum_{k \leq n} X_k$, symmetric, with i.i.d. $X_i$.

I can only seem to find books/papers listing the already well-known ones, such as Kolmogorov's maximal inequality.

Does anyone know of such a paper or book?

Optimization – Solving a system of inequalities with an orthogonality constraint in Python

I have to solve a system of inequalities for a vector of variables, where one of the conditions is an orthogonality constraint. In my particular problem, it has been proven that the system always has an exact solution. Because of the orthogonality constraint, I do not think I can use linear programming. Does anyone have recommendations for efficient methods in Python? Wolfram Alpha can do it (see example here). I know that SymPy cannot, because it only solves the univariate case. Please name the method in your answer!
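Absent the actual system, here is a minimal penalty-method sketch in pure Python. The toy constraints, the vector `v`, and the learning rate are all invented for illustration: the orthogonality condition $x \cdot v = 0$ and one inequality are folded into a squared-penalty objective and minimized by gradient descent.

```python
# A minimal penalty-method sketch, NOT the asker's actual system: the toy
# constraints, vector v, and learning rate below are invented for
# illustration. We fold the orthogonality condition x . v = 0 and one
# inequality x[0] + x[1] >= 1 into a squared penalty and run gradient descent.

def solve_toy(lr=0.1, steps=500):
    x = [0.0, 0.0]
    v = [1.0, -1.0]                          # direction x must be orthogonal to
    for _ in range(steps):
        dot = x[0] * v[0] + x[1] * v[1]      # orthogonality residual x . v
        slack = max(0.0, 1.0 - x[0] - x[1])  # inequality violation, 0 if satisfied
        # gradient of the penalty  dot^2 + slack^2
        g0 = 2 * dot * v[0] - 2 * slack
        g1 = 2 * dot * v[1] - 2 * slack
        x[0] -= lr * g0
        x[1] -= lr * g1
    return x

x = solve_toy()  # converges to [0.5, 0.5]: orthogonal to v and on the boundary
```

In practice, `scipy.optimize.minimize` with `method='SLSQP'` accepts both `'ineq'` and `'eq'` constraint dictionaries, so the orthogonality condition can be passed directly as an equality constraint instead of a penalty.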

Calculus – Prove that the following inequalities are equivalent.

Prove that

If $f: (0, +\infty) \to \mathbb{R}$ is a continuous function, then the following are equivalent (for all $x_1, x_2, x_3, x_4 \in (0, +\infty)$):

(1) $$\frac{f(x_4)-f(x_3)}{x_4-x_3} \leq \frac{f(x_2)-f(x_1)}{x_2-x_1}, \qquad x_4 > x_3 > x_2 > x_1;$$

and

(2) $$\frac{1}{2}\bigl(f(x_2)+f(x_1)\bigr) \leq \frac{1}{x_2-x_1}\int_{x_1}^{x_2} f(u)\,du \leq f\left(\frac{x_2+x_1}{2}\right).$$

I know that condition (1) means $f$ is a concave function, which implies $f$ is midpoint concave, so we have $\frac{1}{2}(f(x_2)+f(x_1)) \leq f\left(\frac{x_2+x_1}{2}\right)$. I also know that condition (2), together with continuity, implies that $f$ is concave, which in turn yields condition (1).
I also know that since $f$ is continuous, by the fundamental theorem of calculus the integral of $f$ is well defined. Beyond that, I have tried to prove the statement with the mean value theorem for integrals, but I could not reach the goal. Can someone help me?
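One standard step for (1) ⇒ (2), sketched here under the assumption that (1) has already given midpoint concavity, is to fold the integral symmetrically around the midpoint:

```latex
\frac{1}{x_2-x_1}\int_{x_1}^{x_2} f(u)\,du
  = \int_0^1 \frac{f\bigl(x_1+t(x_2-x_1)\bigr)+f\bigl(x_2-t(x_2-x_1)\bigr)}{2}\,dt
  \leq \int_0^1 f\!\left(\frac{x_1+x_2}{2}\right)dt
  = f\!\left(\frac{x_1+x_2}{2}\right),
```

where the first equality averages the forward and backward parametrizations of the same integral, and the inequality applies midpoint concavity to the two arguments, which average to $\frac{x_1+x_2}{2}$ for every $t$. The left inequality of (2) follows analogously by comparing $f$ with its chord.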


Convexity of the logarithm function leads to different results in the inequalities

I am deriving some bounds involving two functions, namely $f(x) = \log(x)$ and $f(x) = -x\log(x)$. I have found in several books that these two functions are concave, and I plotted them with Python to check quickly. For this reason, the following inequality should hold by Jensen's inequality:

$$f(x) + f(y) \leq f(x+y)$$

which is true in the case of $f(x) = \log(x)$ but not for $f(x) = -x\log(x)$. What am I missing?

Many Thanks.
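A quick numeric probe (an editorial check, not the asker's code) suggests what is being missed: concavity alone does not give $f(x)+f(y) \le f(x+y)$. For a concave $f$ with $f(0) \ge 0$ the reverse direction, subadditivity, is what holds, and even for $\log$ the claimed direction fails as soon as $xy > x+y$:

```python
import math

# Numeric probe of the claimed superadditivity f(x) + f(y) <= f(x + y).

def f_log(x):
    return math.log(x)

def f_xlx(x):
    return -x * math.log(x)   # -x*log(x), concave on (0, inf)

# log is NOT superadditive in general: the claim fails as soon as x*y > x + y.
assert f_log(3) + f_log(3) > f_log(3 + 3)          # log 9 > log 6

# -x*log(x), with f(0) = 0, is subadditive: the REVERSE inequality holds.
pts = [i / 10 for i in range(1, 20)]
assert all(f_xlx(a) + f_xlx(b) >= f_xlx(a + b) for a in pts for b in pts)
```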

How do I model a logical indicator for two integer programming inequalities?

I have an IP model where, for all $i \in I, j \in J$, my decision variables are $x_{i,j}$. I have two inequalities of interest (one for each $(i, j)$ pair): $$a_{i,j} x_{i,j} \geq 1 \quad \forall i \in I, j \in J$$ and $$\sum_{j \in J} b_{i,j} x_{i,j} \leq b_{i,j} - 1 \quad \forall i \in I, j \in J.$$

I would now like to introduce a logical indicator variable $\delta_{i,j}$ that is exactly 1 if both inequalities hold and 0 otherwise.
How could I implement this idea by introducing appropriate constraints into my integer programming model?
I thought about defining variables $\delta^1_{i,j}$ and $\delta^2_{i,j}$ that equal 1 if the first and second inequalities hold, respectively, and then combining these two auxiliary logical variables into a single variable $\delta_{i,j}$, but I could not figure out exactly how this works.
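For the combining step, one standard trick is the AND-linearization $\delta \le \delta^1$, $\delta \le \delta^2$, $\delta \ge \delta^1 + \delta^2 - 1$. A brute-force check in Python (the helper below is illustrative, not part of any solver API) confirms these three constraints force $\delta = \delta^1 \wedge \delta^2$ for binary variables:

```python
from itertools import product

def feasible_delta(d1, d2):
    # binary deltas satisfying: delta <= d1, delta <= d2, delta >= d1 + d2 - 1
    return [d for d in (0, 1) if d <= d1 and d <= d2 and d >= d1 + d2 - 1]

# For every combination, the unique feasible delta equals the logical AND.
for d1, d2 in product((0, 1), repeat=2):
    assert feasible_delta(d1, d2) == [d1 * d2]
```

Linking each $\delta^k_{i,j}$ to its inequality still requires the usual big-M pair, e.g. $a_{i,j} x_{i,j} \ge 1 - M(1 - \delta^1_{i,j})$ for one direction, with $M$ a valid bound derived from your data.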

R code for row-wise inequalities

I want to compute efficiently which coordinates (x, y) lie between (x1, y1) and (x2, y2) given in the same row. The inequalities (min and max) therefore apply row by row, not to the entire column of values (x1, y1) or (x2, y2). I would test (cond1) whether both x >= min(x1, x2) and x <= max(x1, x2), and (cond2) whether both y >= min(y1, y2) and y <= max(y1, y2).

x1 <- c(7, 8, 2, 2, 2, 7, 3)
y1 <- c(9, 3, 2, 5, 8, 7, 9)
x2 <- c(4, 2, 7, 9, 5, 7, 5)
y2 <- c(0, 6, 1, 8, 3, 5, 3)
x  <- c(0, 7, 3, 9, 3, 6, 6)
y  <- c(7, 0, 7, 4, 8, 9, 7)
K  <- data.frame(x1, y1, x2, y2, x, y)

The output would look like this:

  x1 y1 x2 y2 x y cond1 cond2
1  7  9  4  0 0 7     0     1
2  8  3  2  6 7 0     1     0
3  2  2  7  1 3 7     1     0
4  2  5  9  8 9 4     1     0
5  2  8  5  3 3 8     1     1
6  7  7  7  5 6 9     0     0
7  3  9  5  3 6 7     0     1

Can this be done with apply() and a custom function? If implemented as a loop, what is the most efficient way to reference the current row in the inequalities?
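The row-wise logic itself is easy to check outside R; here is a pure-Python transcription of the example data (editorial, for illustration only):

```python
# Row-wise betweenness check, transcribed from the R example data.
x1 = [7, 8, 2, 2, 2, 7, 3]
y1 = [9, 3, 2, 5, 8, 7, 9]
x2 = [4, 2, 7, 9, 5, 7, 5]
y2 = [0, 6, 1, 8, 3, 5, 3]
x  = [0, 7, 3, 9, 3, 6, 6]
y  = [7, 0, 7, 4, 8, 9, 7]

# cond1: x between min(x1, x2) and max(x1, x2), row by row; cond2 likewise for y.
cond1 = [int(min(a, b) <= v <= max(a, b)) for a, b, v in zip(x1, x2, x)]
cond2 = [int(min(a, b) <= v <= max(a, b)) for a, b, v in zip(y1, y2, y)]
```

In R itself, no loop or apply() is needed: the vectorized equivalent is `cond1 <- as.integer(x >= pmin(x1, x2) & x <= pmax(x1, x2))`, since `pmin`/`pmax` already work element-wise per row.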

Trigonometry – patterns in triangle inequalities involving angles

I read this page and wondered why inequalities for $\sin A$ (with arguments $A$) have a counterpart inequality for $\cos \frac{A}{2}$ (with arguments $\frac{A}{2}$) and vice versa, and similarly for $\tan$ and $\cot$.

Examples

$$\sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2} \le \frac{1}{8}$$
$$\cos A \cos B \cos C \le \frac{1}{8}$$

and

$$\cos(A) + \cos(B) + \cos(C) \le \frac{3}{2}$$
$$\sin\frac{A}{2} + \sin\frac{B}{2} + \sin\frac{C}{2} \le \frac{3}{2}$$

Is there deeper mathematics behind this, or is it just a coincidence?
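There does seem to be a structural reason, sketched here: the half-angle map sends triangles to triangles. If $A + B + C = \pi$, set

```latex
A' = \frac{\pi - A}{2}, \qquad B' = \frac{\pi - B}{2}, \qquad C' = \frac{\pi - C}{2},
\qquad A' + B' + C' = \frac{3\pi - (A + B + C)}{2} = \pi,
```

so $(A', B', C')$ are again the angles of a triangle, with $\cos A' = \sin\frac{A}{2}$, $\sin A' = \cos\frac{A}{2}$, and $\tan A' = \cot\frac{A}{2}$. Any inequality valid for all triangles therefore transfers between the two forms, which matches both example pairs above.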

Inequalities – Logarithmic and polynomial functions with two roots

This is a question I came across a few days ago. Although it is not strictly a research problem, it arose while I was studying the zero distribution of a class of elementary transcendental functions. I do not think the problem is easy to handle; I have thought about it for a few days and all my attempts have failed.

If $f(x) = x^2 - x - \ln x - \ln a$ and $f(x_1) = f(x_2) = 0$ with $0 < x_1 < x_2$, I conjecture that the roots $x_1, x_2$ satisfy
$$\frac{3}{2a+1} < x_1 x_2 < \frac{\ln a}{a-1}.$$

Here is my attempt.

Since
$$x_1^2 - x_1 - \ln x_1 = x_2^2 - x_2 - \ln x_2 = \ln a,$$
let $x_2 = t x_1$ with $t > 1$. Then we have
$$x_1^2 - x_1 - \ln x_1 = t^2 x_1^2 - t x_1 - \ln t - \ln x_1$$
$$x_1 = \frac{t - 1 + \sqrt{(t-1)^2 + 4\ln t \cdot (t^2-1)}}{2(t^2-1)} = f(t),$$
so
$$x_2 x_1 = t x_1^2 = t\left(\frac{t - 1 + \sqrt{(t-1)^2 + 4\ln t \cdot (t^2-1)}}{2(t^2-1)}\right)^2 = t\,(f(t))^2$$
and
$$a = e^{x_1^2 - x_1 - \ln x_1} = e^{f^2(t) - f(t) - \ln f(t)}.$$
It remains to prove
$$\frac{3}{2e^{f^2(t) - f(t) - \ln f(t)} + 1} \le t\,(f(t))^2 \le \frac{f^2(t) - f(t) - \ln f(t)}{e^{f^2(t) - f(t) - \ln f(t)} - 1},$$
which, as far as I can tell, is quite complicated.
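A one-point numerical sanity check of the conjecture (not a proof; the value $a = 2$ and the bracketing intervals are chosen by hand so that $f$ changes sign on each):

```python
import math

# One-point numerical check of the conjecture (NOT a proof): a = 2 and the
# bracketing intervals below are chosen by hand so that f changes sign.

def f(x, a):
    return x * x - x - math.log(x) - math.log(a)   # f(x) = x^2 - x - ln x - ln a

def bisect(lo, hi, a):
    # plain bisection; assumes f(lo, a) and f(hi, a) have opposite signs
    flo = f(lo, a)
    for _ in range(200):
        mid = (lo + hi) / 2
        if flo * f(mid, a) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid, a)
    return (lo + hi) / 2

a = 2.0
x1 = bisect(0.1, 1.0, a)      # smaller root (f(0.1) > 0 > f(1))
x2 = bisect(1.0, 3.0, a)      # larger root  (f(1) < 0 < f(3))
lower, upper = 3 / (2 * a + 1), math.log(a) / (a - 1)
assert lower < x1 * x2 < upper   # 0.6 < x1*x2 < ln 2, so the bounds hold here
```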

Inequalities – How do I simplify $a^n > -1$ to $a > -1$?

I'm not sure whether your first premise is valid.

a^n > -1;

Log[%[[1]]] > Log[%[[2]]] // PowerExpand
(* n Log[a] > I π *)

%[[1]]/n > %[[2]]/n;

Exp[%[[1]]] > Exp[%[[2]]]
(* a > E^((I π)/n) *)

a > (E^(I π))^(1/n)
(* a > (-1)^(1/n) *)

Table[{n, %}, {n, 1, 5}]
(* Greater::nord: Invalid comparison with I attempted. *)
(* {{1, a > -1}, {2, a > I}, {3, a > (-1)^(1/3)}, {4, a > (-1)^(1/4)}, {5, a > (-1)^(1/5)}} *)

Now, for specific n, we can use Reduce:

eq = a^n > -1

Table[Reduce[eq /. n -> b, a], {b, 1, 5}]
(* {a > -1, True, a > -1, True, a > -1} *)

It looks as if MMA says a > -1 for odd n, while for even n the inequality holds for every a; but the warning in the first part indicates that such inequalities may not be valid when comparing complex numbers.
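The Reduce output for real $a$ can be probed numerically in Python (an editorial sanity check only; it says nothing about the complex-comparison caveat):

```python
# Numeric sanity check (real a only) of what Reduce reports: for odd n,
# a^n > -1 iff a > -1; for even n, a^n > -1 holds for every real a.
samples = [i / 10 for i in range(-30, 31)]   # a in [-3, 3] in steps of 0.1
for n in (1, 3, 5):
    assert all((a ** n > -1) == (a > -1) for a in samples)
for n in (2, 4):
    assert all(a ** n > -1 for a in samples)
```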