## Rouché's Theorem in complex analysis on the relation between the number of zeros and poles of meromorphic functions in a region

This question is from my son, referenced in my earlier question, Need advice or assistance for son who is in prison. His interest is scattering theory. He asked me to post this question:

Hello, and thanks to everyone for the help finding papers thus far. I am currently looking for some further information on, and applications of, Rouché's Theorem in complex analysis concerning the relation between the number of zeros and poles of meromorphic functions in a region. I have the basic statement, but am looking for some more advanced or peripheral results, reformulations, extensions, etc. Any other theorems giving conditions relating the poles and zeros of two functions in a region would also be helpful.
To be very specific: if f = g + h, with all functions meromorphic in the plane, I’m looking for conditions on f, g, h so that f and g have the same number of poles and zeros in a region. The particular functions I’m dealing with are generally highly oscillatory, nonlinear Fourier transforms of smooth, compactly supported functions, where the nonlinearity can cause poles; but sometimes their real and imaginary parts can be controlled well, so conditions relating their arguments or real/imaginary parts might be useful. Thanks.
-Travis.
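Not part of the original post, but one standard result of exactly this shape is worth recording here: the symmetric form of Rouché's theorem (due to Glicksberg), which with $$f = g + h$$ turns a boundary estimate on $$h$$ into the desired zero/pole count.

```latex
\textbf{Symmetric Rouch\'e (Glicksberg).}
Let $f$ and $g$ be meromorphic on a region containing the closed contour $C$,
with no zeros or poles on $C$. If
\[
  |f(z) - g(z)| < |f(z)| + |g(z)| \qquad \text{for all } z \in C,
\]
then, counting multiplicity inside $C$,
\[
  Z_f - P_f \;=\; Z_g - P_g .
\]
% With f = g + h, the hypothesis reads |h(z)| < |f(z)| + |g(z)| on C,
% a condition on h alone once |f| + |g| is bounded below on the contour.
```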

## calculus and analysis – Why is there such a big difference between the results of Integrate and NIntegrate?

Studying an interesting article by Daniel Lichtblau, I consider a variation of an example from it, calculating the improper integral

``````Integrate[RealAbs[Sin[x - y]]^(-2/3), {x, 0, Pi}, {y, 0, Pi}]
``````

`-((12 Sqrt[Pi] Gamma[-(1/3)] HypergeometricPFQ[{1/6, 1/2, 2/3}, {7/6, 7/6}, 1])/Gamma[1/6])`

This is not a very useful analytic expression, so

``````N[%]
``````

`16.7126`

Then I compare that result with the numeric one

``````NIntegrate[RealAbs[Sin[x - y]]^(-2/3), {x, 0, Pi}, {y, 0, Pi},
 Exclusions -> {y == x}, AccuracyGoal -> 3, PrecisionGoal -> 3]
``````

`22.8915`

The latter result is produced without any warning. How can one explain such a big difference between the numbers? Can this difference be decreased? Which result is more reliable?
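As an independent cross-check (my own reduction, not from Lichtblau's article): substituting $$u = x - y$$ and using the symmetry of $$\sin$$ on $$(0, \pi)$$ collapses the double integral to a one-dimensional Beta integral, which can be evaluated in a few lines of Python:

```python
import math

# Integral over [0, Pi]^2 of |sin(x - y)|^(-2/3) dx dy
#   = Integral_{-Pi}^{Pi} (Pi - |u|) |sin u|^(-2/3) du      (u = x - y)
#   = Pi * Integral_0^Pi sin(u)^(-2/3) du                   (symmetry u -> Pi - u)
#   = Pi * B(1/6, 1/2) = Pi * Gamma(1/6) Gamma(1/2) / Gamma(2/3)
exact = math.pi * math.gamma(1 / 6) * math.gamma(1 / 2) / math.gamma(2 / 3)
print(exact)  # ~22.889
```

This agrees with the `NIntegrate` value 22.8915 (to the requested accuracy) rather than with 16.7126, which suggests the symbolic result is the suspect one.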

## real analysis – Looking for non-polynomial functions with the growth condition \$\phi\big(\theta \frac{s}{t}\big) \leq \frac{\phi(s)}{\phi(t)}\$

I am looking for example(s) of an invertible increasing function $$\phi: (0,\infty)\to (0, \infty)$$ such that $$\phi(0)=0$$ and such that there exists $$\theta>0$$ for which, for all $$s\leq t$$, we have

\begin{align}\label{EqI}\tag{I} \phi\big(\theta \frac{s}{t}\big) \leq \frac{\phi(s)}{\phi(t)} \qquad\text{or, equivalently,} \qquad \theta \leq \phi^{-1}\big(\frac{s}{t}\big)\frac{\phi^{-1}(t)}{\phi^{-1}(s)} \end{align}

The simplest class consists of polynomial functions of the form $$\phi(t)= ct^p$$ with $$c>0$$ and $$p>0$$.
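For the polynomial family the admissible $$\theta$$ can be made explicit: $$\phi(\theta s/t) = c\theta^p (s/t)^p$$ while $$\phi(s)/\phi(t) = (s/t)^p$$, so \eqref{EqI} holds exactly when $$c\theta^p \leq 1$$, i.e. $$\theta \leq c^{-1/p}$$. A quick numerical sanity check of this computation (my own sketch):

```python
import random

# For phi(t) = c t^p:  phi(theta*s/t) = c*theta**p * (s/t)**p  and
# phi(s)/phi(t) = (s/t)**p, so condition (I) holds iff c*theta**p <= 1.
c, p = 2.0, 1.5
theta = c ** (-1.0 / p)  # the largest admissible theta for this phi

def phi(t):
    return c * t ** p

random.seed(0)
for _ in range(1000):
    t = random.uniform(1e-3, 100.0)
    s = random.uniform(1e-6, 1.0) * t  # enforce 0 < s <= t
    assert phi(theta * s / t) <= phi(s) / phi(t) + 1e-12
```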

Question: Are there other possible non-polynomial examples satisfying \eqref{EqI}?

As an attempt with $$\phi(t)= e^{t^\alpha}-1$$, I wonder if there is a constant $$c>0$$ such that

$$\ln(t+1)\ln\left(\frac{s}{t}+1\right)\geq c\ln(s+1),\qquad \text{for all } 0\leq s\leq t.$$

## real analysis – Generalization of Bernstein’s inequality

I’m using Muscalu and Schlag’s textbook to study harmonic analysis and I encountered the following claim:

Let $$f \in \mathcal{S}(\mathbb{R}^{d})$$, where $$\mathcal{S}(\mathbb{R}^{d})$$ denotes the Schwartz space, and let $$\hat{f}$$ denote the Fourier transform of $$f$$. Assume that there exists some measurable set $$E$$ such that $$\operatorname{supp}(\hat{f}) \subset E \subset \mathbb{R}^d$$. Then for any $$1 \leq p \leq q \leq \infty$$ we have the following inequality (where $$|E|$$ denotes the Lebesgue measure of $$E$$):
$$\|f\|_{L^q} \leq |E|^{\frac{1}{p}-\frac{1}{q}}\|f\|_{L^p}$$
I have managed to show the special case $$q=+\infty$$, $$p=2$$ by using Young's inequality and the Plancherel identity. However, the hint says that we still need to use duality and interpolation to deduce the general conclusion. Any ideas on this?

Moreover, how might this estimate be related to the probability version of Bernstein inequality? Thanks in advance!
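For what it's worth, here is my reading of part of the hinted route; treat it as a sketch, not the textbook's exact proof. Hölder on $$E$$ together with Hausdorff–Young handles $$1\le p\le 2$$:

```latex
% Step 1 (1 <= p <= 2): Hoelder over E, then Hausdorff--Young:
\[
\|f\|_{L^\infty} \le \|\hat f\|_{L^1}
  = \int_E |\hat f|
  \le |E|^{1/p}\,\|\hat f\|_{L^{p'}}
  \le |E|^{1/p}\,\|f\|_{L^p}.
\]
% Step 2: log-convexity of L^q norms between L^p and L^\infty:
\[
\|f\|_{L^q} \le \|f\|_{L^\infty}^{1 - p/q}\,\|f\|_{L^p}^{p/q}
  \le |E|^{\frac1p - \frac1q}\,\|f\|_{L^p}.
\]
```

Combining the two steps gives exactly the exponent $$\frac{1}{p}-\frac{1}{q}$$ for $$1\le p\le 2$$ and any $$q\ge p$$; for $$p>2$$, the duality part of the hint seems to be what is needed.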

## ❕NEWS – Ethereum (ETH) Price Analysis: What Are The Critical Levels? | Proxies-free

According to the technical analysis of NewsBTC’s Aayush Jindal: Ethereum (ETH) Price Shows Downward Signals
Ethereum is struggling to gain momentum above \$1,350 and \$1,375.
The price is currently trading well below \$1,350 and the 100-hour simple moving average.
As long as it stays below \$1,375, it could extend its decline towards the \$1,200 support.
What are your views on this ETH price analysis?

## fourier analysis – Definite integral with Dirac delta and Heaviside function

In relation to the question I posed on MathSE here, I want to ask how Mathematica can give an answer to my problem.

## The context

I am trying to get rid of the integral over $$y$$ in
\begin{align} \int_0^{\infty}dy \; \psi\left(\frac{y}{q}\right)\phi(y) \int_{-\infty}^{\infty}\frac{dp}{\sqrt{2\pi}}\, p^{\gamma} e^{-ip(y-x)}\,, \end{align}
and to obtain an analytical expression in terms of $$\phi(x)$$, $$\psi(x/q)$$ and their derivatives. I assume that $$\gamma=1-\delta$$, with $$0<\delta<1$$. I tried to break down the problem (though I don’t know if there is a better way) so that I am left with
\begin{align} \int_0^{\infty} dy \; \psi\left(\frac{y}{q}\right)\phi(y) \times \left(\frac{\delta(y-x)}{(y-x)^{1-\delta}} - (1-\delta)\frac{H(y-x)}{(y-x)^{2-\delta}}\right)\,, \end{align}
where $$delta$$ is the Dirac delta and $$H$$ the Heaviside function.

## The problem

Since the parameter $$\delta$$ (not to be confused with the Dirac delta) is constrained to $$0<\delta<1$$, I expect trouble when evaluating the first term in the brackets. However, when I define $$\psi$$ and $$\phi$$, MMA returns an expression for this integral, which leaves me confused. Basically, my code is

``````\[Psi][x_] := 9*6^(-1/2) x^(3/2) Exp[-3 x/2];
\[Phi][x_] := x; (* Or x^2, or E^x... *)
fourpfrac[x_, y_, \[Delta]_] := (2*Pi)^(1/2)/Gamma[\[Delta]]*HeavisideTheta[y - x]*(y - x)^(\[Delta] - 1); (* Fourier transform of p^(1 - \[Delta]) *)
x1frac[x_, \[Nu]_, \[Delta]_, \[Alpha]_, q_] := Integrate[Derivative[0, \[Nu], 0][fourpfrac][x, y, \[Delta]]*\[Psi][y/q]*\[Phi][y]*y^(-\[Alpha] - 1), {y, 0, Infinity}]; (* The whole integral *)
x1frac[x, 1, \[Delta], -1, q]
``````

$$\left\{\text{ConditionalExpression}\left(\frac{45 \sqrt{3}\, x \sin (\pi \delta )\, \Gamma \left(-\delta -\frac{3}{2}\right) (-x)^{\delta +\frac{1}{2}} \, {}_1F_1\left(\frac{7}{2};\delta +\frac{5}{2};-\frac{3 x}{2 q}\right)}{8 q^{3/2}}+\frac{\sqrt{\pi }\, 2^{\delta +\frac{3}{2}}\, 3^{-\delta }\, \Gamma \left(\delta +\frac{3}{2}\right) q^{\delta } \, {}_1F_1\left(2-\delta ;-\delta -\frac{1}{2};-\frac{3 x}{2 q}\right)}{\Gamma (\delta -1)},\ x<0\right)\right\}$$

and in input form

``````{ConditionalExpression[(2^(3/2 + \[Delta]) 3^-\[Delta] Sqrt[Pi] q^\[Delta] Gamma[3/2 + \[Delta]] Hypergeometric1F1[2 - \[Delta], -(1/2) - \[Delta], -((3 x)/(2 q))])/Gamma[-1 + \[Delta]]
 + (45 Sqrt[3] (-x)^(1/2 + \[Delta]) x Gamma[-(3/2) - \[Delta]] Hypergeometric1F1[7/2, 5/2 + \[Delta], -((3 x)/(2 q))] Sin[Pi \[Delta]])/(8 q^(3/2)), x < 0]}
``````

The thing is that the form I have given to $$\phi$$ and $$\psi$$ could in no way have canceled the singularity.

1. Is the result I have obtained with MMA correct?
2. What did MMA do to evaluate the integral?
3. Is there a better way to resolve the problem?

Many thanks!

## real analysis – Can we fully control the oscillation of a function by modifying it on a small set?

Definitions and some motivation:

Let $$\mathcal B$$ be the set of bounded measurable functions from $$(0, 1)$$ to $$\mathbb R$$. Denote by $$\mathcal N$$ the set of measurable subsets of $$(0, 1)$$ with Lebesgue measure $$0$$.

Given a function $$f \in \mathcal B$$, define the function $$\mathcal O f$$ by

$$\mathcal O f(x) := \inf_{N \in \mathcal N} \lim_{\delta \to 0} \sup_{y, z \in B_\delta (x) \setminus N} |f(y) - f(z)|.$$

Thanks to Lusin’s theorem, we know that we can modify $$f$$ on an arbitrarily small set to get a continuous function, and so we can force the oscillation to be $$0$$ everywhere. But can we force it to be whatever we want?

Question:

Does there exist, for any $$f, g \in \mathcal B$$ and $$\varepsilon > 0$$, a function $$f' \in \mathcal B$$ such that the following conditions are satisfied?

i) $$f' = f$$ everywhere except on a set of measure at most $$\varepsilon$$.

ii) $$\mathcal O f' = \mathcal O g$$ everywhere.

Note: All functions are genuine functions and not equivalence classes modulo null sets of such.

## fa.functional analysis – reference/proof request: Covariance concentration bound for randomly sampled positive semi-definite matrices

I saw the following inequality used in a paper, and the given reference was Joel A. Tropp, *An Introduction to Matrix Concentration Inequalities*. However, I could not find this inequality there. Could someone help me find a reference for it? Is it standard in the matrix concentration inequality literature?

Suppose $$M_1, \dots, M_N \in \mathbb{R}^{d\times d}$$ are drawn i.i.d. from a distribution $$\mathcal{D}$$ over positive semidefinite matrices. If $$\|M\|_F \leq 1$$ almost surely and $$N = \Omega\left(\frac{d\log(d/\delta)}{\epsilon^2}\right)$$, then with probability at least $$1-\delta$$,
$$\left\|\frac{1}{N}\sum_{t=1}^N M_t - \mathbb{E}_{M\sim\mathcal{D}}[M]\right\|_{\text{op}} \leq \epsilon,$$
where $$\|\cdot\|_{\text{op}}$$ denotes the operator norm.
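I don't know the exact reference either, but the statement is easy to sanity-check numerically. The construction below is my own illustration (not from the paper): random rank-one projections $$vv^T/\|v\|^2$$ are PSD with Frobenius norm exactly $$1$$, and by rotational symmetry their mean is $$I/d$$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 20000

def sample_psd(rng, d):
    # Random rank-one projection: PSD with Frobenius norm exactly 1,
    # since ||v v^T||_F = ||v||^2.
    v = rng.standard_normal(d)
    M = np.outer(v, v)
    return M / np.linalg.norm(M)

samples = np.array([sample_psd(rng, d) for _ in range(N)])
emp_mean = samples.mean(axis=0)

# By rotational symmetry E[M] = I/d, so the operator-norm deviation
# should be small, on the order of sqrt(d/N).
dev = np.linalg.norm(emp_mean - np.eye(d) / d, ord=2)
print(dev)
```

With these parameters the deviation comes out far below the $$\epsilon$$ one would get from the claimed $$N = \Omega(d\log(d/\delta)/\epsilon^2)$$ scaling, consistent with the stated bound.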

## calculus and analysis – Integration by parts

I know this question is already present in many variations, and it seems that for each one you have to define your own rules; I am struggling to invent them in this case.

I want to integrate a big list of expressions of the form

$$\{ r^n y^{(p)} y^{(q)}\},\qquad n,p,q\in \mathbb{Z}_{>0},$$

where $$y$$ is an unknown function. Namely, by integrating by parts, I want to bring the expressions to the form of a sum in which the remaining integrals contain the minimal power of $$r$$ inside.
For example,

$$\int dr\, r^2 y' y'' = \frac{1}{2}\, r^2 y'^2 - \int dr\, r\, y'^2$$

Of course, straightforward application of some naive rules followed by FixedPoint yields nothing but an infinite loop.

Probably this is already implemented either in some package or in some built-in Mathematica function. I would be glad if you could point it out to me.
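Not a pointer to an existing package, but the reduction itself is small enough to prototype. Below is my own sketch (in Python rather than Mathematica, purely to illustrate the rule structure): a term $$c\, r^n y^{(p)} y^{(q)}$$ is encoded as the key $$(n,p,q)$$ with coefficient $$c$$, and two integration-by-parts rules are applied recursively until every remaining integral pairs equal derivatives, so no rule fires twice on the same shape and there is no infinite loop.

```python
from collections import defaultdict

def reduce_integral(coeff, n, p, q, boundary, integrals):
    """Reduce coeff * Integral[r^n y^(p) y^(q), r] by parts (assumes p <= q).

    Terms coeff * r^n y^(p) y^(q) are stored as {(n, p, q): coeff} maps:
    'boundary' for integrated-out terms, 'integrals' for remaining integrals.
    """
    if q == p:
        # Integral r^n (y^(p))^2: equal derivatives, treated as irreducible.
        integrals[(n, p, q)] += coeff
    elif q == p + 1:
        # y^(p) y^(p+1) = (1/2) d/dr (y^(p))^2, so integrating by parts:
        # Int r^n y^(p) y^(p+1) = (1/2) r^n (y^(p))^2 - (n/2) Int r^(n-1) (y^(p))^2
        boundary[(n, p, p)] += coeff / 2
        if n > 0:
            reduce_integral(-coeff * n / 2, n - 1, p, p, boundary, integrals)
    else:
        # q > p + 1: peel one derivative off y^(q):
        # Int r^n y^(p) y^(q) = r^n y^(p) y^(q-1)
        #   - n Int r^(n-1) y^(p) y^(q-1) - Int r^n y^(p+1) y^(q-1)
        boundary[(n, p, q - 1)] += coeff
        if n > 0:
            reduce_integral(-coeff * n, n - 1, p, q - 1, boundary, integrals)
        reduce_integral(-coeff, n, p + 1, q - 1, boundary, integrals)

def reduce_term(n, p, q):
    """Fully reduce Integral[r^n y^(p) y^(q), r]; returns (boundary, integrals)."""
    boundary, integrals = defaultdict(float), defaultdict(float)
    reduce_integral(1.0, n, min(p, q), max(p, q), boundary, integrals)
    return dict(boundary), dict(integrals)
```

For instance, `reduce_term(2, 1, 2)` returns boundary `{(2, 1, 1): 0.5}` and integrals `{(1, 1, 1): -1.0}`, i.e. $$\int dr\, r^2 y' y'' = \tfrac12 r^2 y'^2 - \int dr\, r\, y'^2$$. The same rules would translate directly into Mathematica replacement rules guarded by the $$q - p$$ case split.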

## fa.functional analysis – When is a linear subspace closed in all compatible topologies?

Let $$V$$ be a real vector space, and let $$W$$ be a linear subspace.

Say $$W$$ is obviously closed if, for every topology on $$V$$ that makes $$V$$ a Hausdorff locally convex topological vector space, the subspace $$W$$ is closed in $$V$$.

We know $$V$$ is obviously closed, and any finite-dimensional subspace of $$V$$ is obviously closed. Is there a known characterization of which subspaces are obviously closed? Are there other sufficient conditions for a subspace to be obviously closed? Are there other known examples?