## definite integrals – On counting the areas covered by holes in a function in integration

As far as I know, holes in a function at the endpoints of an interval aren’t usually given any importance while integrating over that interval. For example, take the calculation of the area under the fractional part function from 0 to 1.

You don’t really think of a hole as an infinitesimally small width that contributes to the calculated area. And I’m fine with that: it makes sense.

But there’s this question that defines $$f(x)$$ as 0 wherever $$x$$ can be expressed as $$\frac{n}{n+1}$$ for a natural number $$n$$, and as 1 everywhere else, and asks for the integral of $$f(x)$$ from 0 to 2. So you’ve got the line $$y=1$$ with holes in it that grow closer together and more numerous as you get closer to 2, accumulating at infinitely many points.

Now the solution to the question just integrates the function from 0 to $$\frac{1}{2}$$, $$\frac{1}{2}$$ to $$\frac{2}{3}$$, $$\frac{2}{3}$$ to $$\frac{3}{4}$$ and so on up to 1, and then takes a normal integral from 1 to 2, essentially just integrating $$y=1$$ from 0 to 2.

This seems odd. Isn’t the general idea of integration that infinitesimally small quantities, summed in infinite numbers, form actual numbers? Since there are infinitely many small widths here, shouldn’t they constitute some area and thereby affect the calculation?
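One way to see why the solution is justified: the points where $$f$$ vanishes form a countable, measure-zero set, so Riemann sums ignore them in the limit. A quick numerical illustration in plain Python (not part of the original question), using exact rationals so that midpoints of the form n/(n+1) are detected exactly:

```python
# Numerical check: removing countably many points does not change the
# Riemann integral. Integrate f on [0, 2], where f(x) = 0 at x = n/(n+1)
# and f(x) = 1 elsewhere, via midpoint sums with exact rational nodes.

from fractions import Fraction

def f(x):
    # x is an exact Fraction; x = n/(n+1) iff denominator == numerator + 1
    if x.numerator >= 1 and x.denominator == x.numerator + 1:
        return 0
    return 1

def midpoint_sum(n_steps):
    h = Fraction(2, n_steps)
    return float(sum(f(h * k + h / 2) * h for k in range(n_steps)))

for n_steps in (10, 100, 1000):
    print(n_steps, midpoint_sum(n_steps))   # approaches 2
```

A few midpoints do land exactly on holes (e.g. 1/2 when `n_steps = 10`), so finite sums fall slightly short of 2, but the deficit shrinks to zero as the mesh refines.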

## calculus and analysis – Wildcard replacement from sums and integrals using NCAlgebra’s NCReplaceRepeated

I want to use the NCAlgebra package to do some simplification on non-commutative expressions involving integrals. For example, one such expression would be

$$I=\left(\int f(x)g(x)\, dx\right) * h$$

where $$*$$ denotes non-commutative multiplication. Let’s say that I know that $$g(x)*h=1$$ for all $$x$$. Then, simplifying the above expression yields

$$I=\int f(x)\,dx$$

So far, I have tried the following to implement this in Mathematica:

```
<< NC`
<< NCAlgebra`

NCReplaceRepeated[(A b[X] F[X]) ** (G h J), b[X_] ** h -> 1]
```

which results in

```
A G J F[X]
```

as expected. However, this does not work for

```
NCReplaceRepeated[(Integrate[A b[X] F[X], X]) ** (G h J), b[X_] ** h -> 1]
```

which results in

```
A G J (Integrate[b[X] F[X], X]) ** h
```

which does not respect this simplification.

Of course, several assumptions about the integral are needed, e.g. regarding convergence. But I want to assume that “everything works out nicely” and that this type of simplification is OK. Additionally, `Sum` might appear in place of `Integrate`; the simplification fails there as well.

How can I properly perform this simplification?
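Independent of NCAlgebra’s internals, the intended rewrite has two steps: push the trailing constant factor inside the integral (valid since `h` does not depend on `X`), then cancel the adjacent pair `b[X] ** h -> 1`. A toy Python model of that pipeline (all names and the list-of-factors representation are hypothetical, not NCAlgebra API):

```python
# Toy model of noncommutative products: a product is a list of factors,
# and ("int", body) wraps an integral whose integrand is itself a product.

def push_into_integral(factors):
    """Move factors that follow an integral inside it (valid when they
    do not depend on the integration variable)."""
    out = []
    for f in factors:
        if out and isinstance(out[-1], tuple) and out[-1][0] == "int":
            head, body = out.pop()
            out.append((head, body + (f,)))
        else:
            out.append(f)
    return out

def cancel(factors, left, right):
    """Repeatedly delete the adjacent pair (left, right), i.e. replace it
    by 1, recursing into integrals."""
    changed = True
    while changed:
        changed = False
        new = []
        i = 0
        while i < len(factors):
            f = factors[i]
            if isinstance(f, tuple) and f[0] == "int":
                f = ("int", tuple(cancel(list(f[1]), left, right)))
            if new and new[-1] == left and f == right:
                new.pop()          # b[X] ** h -> 1
                changed = True
                i += 1
                continue
            new.append(f)
            i += 1
        factors = new
    return factors

# I = (Integrate[f(x) g(x), x]) ** h, with rule g(x) ** h -> 1
expr = [("int", ("f", "g")), "h"]
expr = push_into_integral(expr)   # h moves under the integral sign
expr = cancel(expr, "g", "h")     # leaves Integrate[f(x), x]
print(expr)                       # [('int', ('f',))]
```

In Mathematica terms, the missing ingredient is the first step: a rule that threads `Integrate[a, x] ** c` into `Integrate[a ** c, x]` before `NCReplaceRepeated` runs.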

## calculus and analysis – How to write these kinds of integrals in a more compact way?

I wanted to calculate the average distance between two points, both lying inside $$B^3(\vec 0, R)$$, and ended up writing

```
Clear[r1, r2, x, y, z, R, x1, x2, y1, y2, z1, z2]
Assuming[R > 0,
 Integrate[
  Integrate[Sqrt[Abs[x1^2 + y1^2 + z1^2 - x2^2 - y2^2 - z2^2]],
   {x1, y1, z1} \[Element] Ball[{0, 0, 0}, R]],
  {x2, y2, z2} \[Element] Ball[{0, 0, 0}, R]]]
```

However, given the elegance of the Mathematica language, and my experience in programming in general, I felt somewhat ashamed of writing such ugly code.

Is there a more compact and elegant way of writing the same code, ideally one that also improves its speed? I ran this code on Wolfram Cloud and it couldn’t find the result within the limits of its execution time.
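Two things seem worth checking before optimizing. First, the integrand as written is $$\sqrt{\bigl|\,|p_1|^2-|p_2|^2\bigr|}$$, not the Euclidean distance $$\|p_1-p_2\|$$; if the goal is the average distance, the integrand presumably should be `Norm[{x1,y1,z1}-{x2,y2,z2}]`. Second, for the true average distance in a ball there is a known closed form, $$\tfrac{36R}{35}$$, to test candidates against. A stdlib-Python Monte Carlo sketch (helper names are my own):

```python
# Monte Carlo estimate of the mean Euclidean distance between two points
# drawn uniformly from a ball of radius R; the known exact value is 36R/35.

import random, math

def sample_ball(R, rng):
    # rejection sampling from the bounding cube
    while True:
        p = [rng.uniform(-R, R) for _ in range(3)]
        if sum(c * c for c in p) <= R * R:
            return p

def mean_distance(R, n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p, q = sample_ball(R, rng), sample_ball(R, rng)
        total += math.dist(p, q)
    return total / n

est = mean_distance(1.0, 40000)
print(est)   # should be close to 36/35 ≈ 1.0286
```

This kind of cheap cross-check also makes it easy to verify any faster symbolic reformulation before trusting it.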

## regularization – A set of divergent integrals that, I think, equal \$-\gamma\$

So, we take $$\frac{\text{sgn}(x-1)}{x}$$ and apply $$\mathcal{L}_t[t f(t)](x)$$ four times. The transform is known to preserve the area under the curve. These integrals, I think, are equal to minus the Euler–Mascheroni constant. Since they all have infinite parts that cancel each other, their values are finite. I have already applied Laplace transforms to regularize divergent integrals in a similar way.

$$\int_0^\infty \frac{\text{sgn}(x-1)}{x}\,dx=\int_0^\infty\frac{2 e^{-x}-1}{x}\,dx=\int_0^\infty\frac{x-1}{x (x+1)}\,dx=\int_0^\infty \left(2 e^x \text{Ei}(-x)+\frac{1}{x}\right) dx=$$
$$\int_0^\infty \frac{x^2-2 x \log (x)-1}{(x-1)^2 x}\, dx=-\gamma$$

Yes?

Proof: take two of them and average:

$$\int_0^\infty \frac12\left(\frac{x-1}{x (x+1)}+ \left(2 e^x \text{Ei}(-x)+\frac{1}{x}\right)\right)dx=-\gamma$$

Right?
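For the averaged integrand, yes, and this can be made precise. Partial fractions give $$\frac{x-1}{x(x+1)}=-\frac{1}{x}+\frac{2}{x+1}$$, so the average collapses to $$\frac{1}{x+1}+e^x\,\text{Ei}(-x)$$. Writing $$\frac{1}{x+1}=\int_0^\infty e^{-xs}e^{-s}\,ds$$ and $$e^x\,\text{Ei}(-x)=-\int_0^\infty \frac{e^{-xs}}{1+s}\,ds$$ and formally swapping the order of integration yields $$\int_0^\infty\frac{e^{-s}-\frac{1}{1+s}}{s}\,ds$$, the negative of a classical integral representation of $$\gamma$$. A stdlib-only numerical check of that last integral:

```python
import math

# Composite Simpson's rule on [a, b] with n (even) subintervals.
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# Integrand of the Frullani-type representation, with the removable
# singularity at 0 handled explicitly:
#   integral_0^inf (exp(-s) - 1/(1+s)) / s ds = -gamma
def g(s):
    if s == 0.0:
        return 0.0           # limit: (e^{-s} - 1/(1+s))/s ~ -s/2 -> 0
    return (math.exp(-s) - 1 / (1 + s)) / s

# Map the tail integral over [1, inf) to [0, 1] via s = 1/u.
def tail(u):
    if u == 0.0:
        return -1.0          # limit of g(1/u)/u^2 as u -> 0+
    return g(1.0 / u) / (u * u)

total = simpson(g, 0.0, 1.0) + simpson(tail, 0.0, 1.0)
print(total)                 # close to -0.5772156649 = -gamma
```

The swap of integration order is exactly the step that needs the regularization to be legitimate; numerically, though, the value lands on $$-\gamma$$.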

## laplace transform – Fractional power of the operator \$\mathcal{L}_t[t f(t)](x)\$ and equivalence of divergent integrals

I wonder whether an expression for a fractional power of the operator $$\mathcal{L}_t[t f(t)](x)$$, one involving the Laplace transform, can be derived.

I am asking this because this operator preserves the area under the function:

$$\int_0^\infty f(x)\,dx=\int_0^\infty \mathcal{L}_t[t f(t)](x)\, dx$$

But more importantly, this still works even with divergent integrals, thus allowing us to define equivalence classes of divergent integrals. For instance, the following divergent integrals are obtained by consecutively applying this transform, and as such can be postulated to be equal (the first one and the third one are similar up to a shift, by the way):

$$\int_0^\infty\frac{\theta (x-1)}{x}\,dx=\int_0^\infty\frac{e^{-x}}{x}\,dx=\int_0^\infty\frac{dx}{x+1}=\int_0^\infty\frac{e^x x \,\text{Ei}(-x)+1}{x}\,dx=\int_0^\infty\frac{x-\log (x)-1}{(x-1)^2}\,dx$$

The following plot illustrates the comparison:

So, is there a way to express fractional powers of this transform?
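The area-preservation identity is easy to verify numerically for a well-behaved $$f$$: by Fubini, $$\int_0^\infty\!\!\int_0^\infty t f(t) e^{-xt}\,dt\,dx=\int_0^\infty t f(t)\cdot\frac{1}{t}\,dt=\int_0^\infty f(t)\,dt$$. A minimal stdlib sketch with $$f(t)=e^{-t}$$, where both sides equal 1 (the change of variables and quadrature sizes are ad hoc choices of mine):

```python
import math

def simpson(f, a, b, n=400):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def integral_0_inf(f, n=400):
    # map [0, inf) to [0, 1) via t = u/(1-u), dt = du/(1-u)^2
    def g(u):
        if u >= 1.0:
            return 0.0
        t = u / (1.0 - u)
        return f(t) / (1.0 - u) ** 2
    return simpson(g, 0.0, 1.0, n)

f = lambda t: math.exp(-t)

def Lf(x):
    # L_t[t f(t)](x) = integral_0^inf t f(t) e^{-x t} dt
    return integral_0_inf(lambda t: t * f(t) * math.exp(-x * t), n=2000)

lhs = integral_0_inf(f)           # area under f
rhs = integral_0_inf(Lf, n=200)   # area under L_t[t f(t)](x)
print(lhs, rhs)                   # both should be close to 1
```

For this $$f$$ the transform is $$1/(1+x)^2$$, so the agreement can also be confirmed in closed form; the fractional-power question is precisely about interpolating between these equal-area stages.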

## nt.number theory – Truncation and weighted orbital integrals in hyperbolic term of trace formula for \$GL(2)\$

I am looking at Gelbart–Jacquet’s article in the first Corvallis volume (the article entitled Forms of $$GL(2)$$ from an analytic point of view).

To wrap my head around exactly what is going on with truncation, I am working out the case where the test function is cuspidal (also because this is the situation we are in when we try to derive the Eichler–Selberg trace formula for traces of Hecke operators from the general trace formula for $$GL(2)$$).

Basically, my question is about why truncation is necessary even when the test function is nice enough to make all the terms depending on the truncation parameter vanish (it seems to me like this is true: no matter what, you end up with a weighted orbital integral which doesn’t depend on the truncation parameter but does involve the height function — the only way I know of how to think about this is as arising from truncation).

Let $$G = GL_2$$, let $$N$$ be the upper-triangular unipotent radical, $$M$$ the diagonal matrices, and $$P = MN$$ the upper-triangular Borel subgroup. Let $$\varphi$$ be the test function on $$G$$, which is assumed to be cuspidal, smooth, and compactly supported modulo center.

In Gelbart–Jacquet, the truncated hyperbolic term (which is ultimately integrated over $$g$$ in $$PGL_2(\mathbf{Q})\backslash PGL_2(\mathbf{A}_\mathbf{Q})$$) is
$$\sum_{\gamma \neq 1} \varphi(g^{-1}\gamma g) - \sum_{\xi \in P(\mathbf{Q})\backslash G(\mathbf{Q})} \int_{N(\mathbf{A}_\mathbf{Q})} \sum_{\alpha \in \mathbf{Q}^\times - \{1\}} \varphi\left(g^{-1}\xi^{-1}\left(\begin{matrix}\alpha & 0 \\ 0 & 1\end{matrix}\right)n\xi g\right)\chi_{(T, \infty)}(H(\xi g))\, dn$$
where $$\chi_{(T, \infty)}$$ is the characteristic function of being $$\geq T$$, $$H$$ is the usual logarithmic height function using the Iwasawa decomposition, and the sum is over non-identity hyperbolic elements $$\gamma$$.

Anyway, I know/thought that the point is that this thing is supposed to be absolutely integrable (thanks to the construction, which is essentially subtracting off the constant term in the Fourier expansion in a neighborhood except averaged so that it maintains automorphicity), and then one can use Fubini when integrating to end up with
$$(\log T)\,\mathrm{vol}(\mathbf{Q}^\times \backslash \mathbf{A}_\mathbf{Q}^{\times, 1})\int_K \int_{N(\mathbf{A}_\mathbf{Q})} \sum_{\alpha \in \mathbf{Q}^\times - \{1\}} \varphi\left(k^{-1}\left(\begin{matrix}\alpha & 0 \\ 0 & 1\end{matrix}\right)nk\right)\, dn\, dk$$
minus
$$\frac{1}{2}\mathrm{vol}(\mathbf{Q}^\times \backslash\mathbf{A}_\mathbf{Q}^{\times, 1})\int_K \int_{N(\mathbf{A}_\mathbf{Q})} \sum_{\alpha \in \mathbf{Q}^\times - \{1\}} \varphi \left(k^{-1}n^{-1}\left(\begin{matrix}\alpha & 0 \\ 0 & 1\end{matrix}\right)nk\right)\log H(wnk)\, dn\, dk,$$
where $$w$$ is the matrix that conjugates a diagonal matrix so as to switch its two entries. If $$\varphi$$ is nice enough to begin with, then the first term in this final result vanishes (not surprising: the truncation doesn’t matter, so any term depending on $$T$$ has to vanish). On the other hand, for the same reason, the truncation in the expression we started with also has no effect. In particular, the correction term in the integrand (i.e. the thing being subtracted in the first display equation above) vanishes, and $$\sum_{\gamma \neq 1}\varphi(g^{-1}\gamma g)$$ is supposed to already be absolutely integrable modulo center. But this can’t be right: if you then carry out the computation, the integral you end up with won’t have anything to do with the height function $$H$$. Indeed, by the Iwasawa decomposition you end up with

$$\int_K \int_{N(\mathbf{A}_\mathbf{Q})} \int_{\mathbf{Q}^\times \backslash \mathbf{A}_\mathbf{Q}^\times} \sum \cdots \, da\, dn\, dk,$$
the inside integral of which diverges badly if the sum on the inside ever doesn’t vanish.

So am I mistaken that truncation has no effect on the integrand for nice test functions? I suppose I haven’t checked that you can commute the sum and the integral over $$N$$. However, it definitely has to be true that the final orbital integral doesn’t depend on $$T$$ when the test function is cuspidal, which would suggest that I am right about the vanishing of both correction terms (that is, the terms that depend on $$T$$ in both the original integrand and the final value of the orbital integral). All of this is made more confusing by the fact that Knightly–Li’s book *Traces of Hecke Operators* says multiple times that truncation has no effect on the thing being integrated, and that the use of truncation is purely pedagogical.

Apologies for the long question. I think there is something very simple that I do not understand, so hopefully an experienced person will easily be able to tell me where I am going wrong.

## Are all integrals of the form "polynomial over polynomial" solvable? [closed]

I’m not asking for a method to solve all integrals of this type; I understand that there are many ways to attempt them. I just want to know under what condition such an integral is solvable.
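The answer, for what it’s worth, is that every rational function has an elementary antiderivative: polynomial division plus partial fractions reduce it to a polynomial part, rational terms, logarithms, and arctangents. A quick stdlib numerical sanity check of one partial-fractions result, $$\int \frac{dx}{x^2-1} = \frac12\ln\left|\frac{x-1}{x+1}\right| + C$$:

```python
import math

# Composite Simpson quadrature.
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# Partial fractions: 1/(x^2 - 1) = (1/2) (1/(x-1) - 1/(x+1)),
# giving the antiderivative F(x) = (1/2) ln|(x-1)/(x+1)|.
def F(x):
    return 0.5 * math.log(abs((x - 1) / (x + 1)))

numeric = simpson(lambda x: 1 / (x * x - 1), 2, 3)
closed = F(3) - F(2)
print(numeric, closed)   # both ~ 0.5 * ln(3/2) ≈ 0.2027
```

So the condition asked about is vacuous for rational integrands; difficulties only appear once non-rational pieces (like $$e^{x}$$ or $$\sqrt{x}$$) enter.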

## How do I automate PV reduction of loop integrals in Mathematica

So in loop integrals, say over a 4-momentum $$q$$, we end up with terms like $$q\cdot q$$, $$q\cdot p_1 \, q\cdot p_2$$ and $$q\cdot q \, q\cdot q$$, and we want to treat each differently. Is there a way to get Mathematica to do this?
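The first step of Passarino–Veltman-style tensor reduction is the symmetric replacement $$q^\mu q^\nu \to \frac{q^2}{d}\, g^{\mu\nu}$$ under a rotationally/Lorentz-invariant integral, so e.g. $$q\cdot p_1 \, q\cdot p_2 \to \frac{q^2}{d}\, p_1\cdot p_2$$. The Euclidean three-dimensional analogue of this angular average is easy to check by Monte Carlo (plain Python here, just to validate the identity one would encode as a Mathematica rule):

```python
# Check that the angular average of (q.p1)(q.p2) over unit directions q
# in 3 dimensions equals (p1.p2)/3, i.e. q^i q^j -> delta^{ij} |q|^2 / d.

import random, math

def rand_unit(rng):
    # uniform direction on the sphere via the Gaussian trick
    v = [rng.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rng = random.Random(1)
p1, p2 = [1.0, 2.0, -0.5], [0.3, -1.0, 2.0]
n = 100000
avg = sum(dot(q, p1) * dot(q, p2)
          for q in (rand_unit(rng) for _ in range(n))) / n
print(avg, dot(p1, p2) / 3)   # the two values should agree
```

In Mathematica one would implement the same statement as pattern-matching rules on `Pair[q, p1] Pair[q, p2]`-type expressions; packages such as FeynCalc automate exactly this kind of reduction.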

## real analysis – Does the sequence of integrals of this continuous function converge to zero?

Define $$g_n(x)$$ to be zero on $$(0,1-\frac{1}{n})$$ and $$g_n(x)=x^n$$ on $$(1-\frac{1}{n},1)$$.

1. Is this sequence of functions $$g_n$$ in $$C^2(0,1)$$?

It seems so, as its first and second derivatives are continuous.

2. Do the integrals of $$g_n$$ over $$(0,1)$$ tend to $$0$$?

It seems so, since

\begin{align} \int_0^1 g_n(x)\,dx= \int_0^{1-\frac{1}{n}} 0\, dx + \int_{1-\frac{1}{n}}^1 x^n\, dx = \frac{1}{n+1}\left(1-\left(1- \frac{1}{n}\right)^{n+1}\right) \end{align}

And

\begin{align} \lim_{n \to \infty} a_n=\lim_{n \to \infty}\frac{1}{n+1}\left(1-\left(1- \frac{1}{n}\right)^{n+1}\right)=0 \end{align}

So am I correct?
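On the second point, yes: since $$(1-\frac1n)^{n+1}\to e^{-1}$$, the exact value $$\frac{1}{n+1}\bigl(1-(1-\frac1n)^{n+1}\bigr)\sim\frac{1-e^{-1}}{n+1}\to 0$$. A quick stdlib check of the exact formula against a direct Riemann sum:

```python
# Check that the integral of g_n over (0,1) tends to 0: compare the exact
# value (1/(n+1)) (1 - (1 - 1/n)^(n+1)) with a midpoint Riemann sum.

def exact(n):
    # integral of x^n over (1 - 1/n, 1)
    return (1 - (1 - 1 / n) ** (n + 1)) / (n + 1)

def riemann(n, steps=100000):
    h = 1.0 / steps
    a = 1 - 1 / n
    total = 0.0
    for k in range(steps):
        x = k * h + h / 2
        if x > a:                  # g_n vanishes on (0, 1 - 1/n)
            total += x ** n * h
    return total

for n in (2, 10, 100):
    print(n, exact(n), riemann(n))   # values shrink toward 0
```

(The first point is shakier: $$g_n$$ jumps from $$0$$ to $$(1-\frac1n)^n$$ at $$x=1-\frac1n$$, so continuity there deserves a second look.)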

## na.numerical analysis – Existence of efficiently computable integrals for “spiky” functions

$$\DeclareMathOperator\spikify{spikify}$$Apologies if I’m misusing the word spiky, I mean it only as a visual description of a function, not in any technical mathematical sense!

We define the function
$$\spikify(n, c, y) = \begin{cases} y & n=0 \\ c^{\spikify(n-1, c, y)} & n > 0 \end{cases}$$

Thus

$$\spikify(1, c, y) = c^y$$

$$\spikify(2, c, y) = c^{(c^y)}$$

$$\spikify(3, c, y) = c^{(c^{(c^y)})}$$

and so on.
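The recursion is just an iterated exponential, a power tower of height $$n$$ with $$y$$ on top, and admits a direct implementation:

```python
def spikify(n, c, y):
    """Iterated exponential: spikify(0, c, y) = y,
    spikify(n, c, y) = c ** spikify(n-1, c, y)."""
    for _ in range(n):
        y = c ** y
    return y

print(spikify(1, 2, 1))   # 2
print(spikify(2, 2, 1))   # 4  = 2**2
print(spikify(3, 2, 1))   # 16 = 2**4
```

Even for modest $$n$$ the values grow tetrationally, which is exactly why the integral concentrates so sharply around the maximum of the inner function.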

Can there exist a constant $$c > 1$$ such that:
for all functions $$f$$ where

• $$f:mathbb{R}tomathbb{R}$$
• $$f$$ is continuous over the domain $$(0,1)$$
• $$f$$ has a global maximum and global minimum within that domain
• $$f$$ is differentiable
• $$f(x)$$ has an efficiently computable closed form integral $$F(x)$$

Then the integral:

$$\int_{x_1}^{x_2} \spikify(n, c, f(x))\, dx \approx \spikify\left(n,c,\max_{x_1 < x < x_2} f(x)\right)$$ can be calculated (or numerically approximated) for fixed, large $$n$$ and $$c$$ in time polynomial in $$\log(n+c+\max f(x))$$.

I suspect the answer is absolutely not, because if such a constant existed I think I could binary-search for the global maximum of a fairly arbitrary function on a closed domain. (Note that the integral of the spikified function will tend to be dominated by the spike at the maximum of $$f$$ for large $$n$$.) But I honestly have no idea where to even start to disprove the existence of such a constant.

More generally, are there stronger restrictions on $$f$$ that would allow a closed form for integrals of “spikified” functions, or other variations of “spikifying” a function that would allow polynomial-time approximation of an integral over a closed range, regardless of how spiky we choose to make the transformed function?