## fourier analysis – Definite integral with Dirac delta and Heaviside function

In connection with the question I posed on MathSE here, I want to ask how Mathematica can give an answer to my problem.

## The context

I am trying to get rid of the integral over $$y$$ in
\begin{align} \int_0^{\infty} dy \; \psi\left(\frac{y}{q}\right)\phi(y) \int_{-\infty}^{\infty}\frac{dp}{\sqrt{2\pi}}\, p^{\gamma}\, e^{-ip(y-x)}\;, \end{align}
and to obtain an analytical expression in terms of $$\phi(x)$$, $$\psi(x/q)$$ and their derivatives. I assume that $$\gamma = 1-\delta$$, with $$0 < \delta < 1$$. I tried to break down the problem (though I don’t know if there is a better way) so that I am left with
\begin{align} \int_0^{\infty} dy \; \psi\left(\frac{y}{q}\right)\phi(y) \times \left(\frac{\delta(y-x)}{(y-x)^{1-\delta}} - (1-\delta)\frac{H(y-x)}{(y-x)^{2-\delta}}\right)\;, \end{align}
where $$\delta$$ is the Dirac delta and $$H$$ the Heaviside function.
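A plausible origin of this bracket (my reading, not stated in the original post): it is the $$y$$-derivative of the kernel $$H(y-x)\,(y-x)^{\delta-1}$$. This can be checked symbolically, sketched here in SymPy rather than Mathematica so the snippet is self-contained:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
delta = sp.Symbol('delta', positive=True)

# Kernel H(y - x) * (y - x)^(delta - 1); differentiating in y should produce
# the Dirac-delta term plus the Heaviside term of the bracket:
#   delta(y - x)/(y - x)^(1 - delta) - (1 - delta) H(y - x)/(y - x)^(2 - delta)
kernel = sp.Heaviside(y - x) * (y - x)**(delta - 1)
deriv = sp.diff(kernel, y)

expected = (sp.DiracDelta(y - x) * (y - x)**(delta - 1)
            - (1 - delta) * sp.Heaviside(y - x) * (y - x)**(delta - 2))
print(sp.simplify(deriv - expected))  # 0 if the two expressions agree
```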

## The problem

Since the parameter $$\delta$$ (not to be confused with the Dirac delta) is constrained as $$0 < \delta < 1$$, I expect trouble when evaluating the first term in the brackets. However, when I define $$\psi$$ and $$\phi$$, MMA returns an expression for this integral, which leaves me confused. Basically, my code is

```mathematica
\[Psi][x_] := 9*6^(-1/2) x^(3/2) Exp[-3 x/2];
\[Phi][x_] := x; (* Or x^2, or E^x ... *)
fourpfrac[x_, y_, \[Delta]_] := (2*Pi)^(1/2)/Gamma[\[Delta]]*HeavisideTheta[y - x]*(y - x)^(\[Delta] - 1); (* Fourier transform of p^(1 - \[Delta]) *)
x1frac[x_, \[Nu]_, \[Delta]_, \[Alpha]_, q_] := Integrate[Derivative[0, \[Nu], 0][fourpfrac][x, y, \[Delta]]*\[Psi][y/q]*\[Phi][y]*y^(-\[Alpha] - 1), {y, 0, Infinity}]; (* The whole integral *)
x1frac[x, 1, \[Delta], -1, q]
```

$$\left\{\text{ConditionalExpression}\left(\frac{45 \sqrt{3}\, x \sin(\pi \delta)\, \Gamma\!\left(-\delta-\frac{3}{2}\right) (-x)^{\delta+\frac{1}{2}}\, {}_1F_1\!\left(\frac{7}{2};\delta+\frac{5}{2};-\frac{3x}{2q}\right)}{8 q^{3/2}}+\frac{\sqrt{\pi}\, 2^{\delta+\frac{3}{2}}\, 3^{-\delta}\, \Gamma\!\left(\delta+\frac{3}{2}\right) q^{\delta}\, {}_1F_1\!\left(2-\delta;-\delta-\frac{1}{2};-\frac{3x}{2q}\right)}{\Gamma(\delta-1)},\; x<0\right)\right\}$$

and in input form

```mathematica
{ConditionalExpression[(2^(3/2 + \[Delta]) 3^-\[Delta] Sqrt[\[Pi]] q^\[Delta] Gamma[3/2 + \[Delta]] Hypergeometric1F1[2 - \[Delta], -(1/2) - \[Delta], -((3 x)/(2 q))])/Gamma[-1 + \[Delta]] +
   (45 Sqrt[3] (-x)^(1/2 + \[Delta]) x Gamma[-(3/2) - \[Delta]] Hypergeometric1F1[7/2, 5/2 + \[Delta], -((3 x)/(2 q))] Sin[\[Pi] \[Delta]])/(8 q^(3/2)), x < 0]}
```

The thing is that the forms I have given to $$\phi$$ and $$\psi$$ could in no way have canceled the singularity.

1. Is the result I have obtained with MMA correct?
2. What did MMA do to evaluate the integral?
3. Is there a better way to resolve the problem?

Many thanks!

## NDSolve::dvlen: The function \[Theta][t] does not have the same number of arguments as independent variables

```mathematica
s = NDSolve[{(m*g*Sin[\[Theta][t]]) - (m*(r''[t] - r (\[Theta]'[t])^2)) == k (r[t] - 14),
    g*Cos[\[Theta][t]] == r[t] \[Theta]''[t] + 2*r'[t] \[Theta]'[t],
    \[Theta]'[0] == 0, r'[0] == 0},
   {r, \[Theta]}, {t, 0, 60}, {k, 0, 2000}]
```

I want to solve these two equations to find r(t). I have already set up m and g.

## plotting – How to calculate Fourier coefficients in the transmittance function?

Good morning, could someone help me set up this exercise so that I can solve it?

Calculate the Fourier coefficients $$c_s$$ of the transmittance function $$t(x) = t(x + P_x)$$, given in $$|x| < P_x/2$$, characterizing the following 1-D diffraction gratings of period $$P_x$$:

The square-wave amplitude grating composed of evenly spaced parallel slits, giving $$t(x) = \mathrm{rect}(x/w)$$, where $$w < P_x$$ is the slit width.
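For this slit grating the coefficients should come out as $$c_s = (w/P_x)\,\mathrm{sinc}(s w / P_x)$$ (normalized sinc). A quick numerical cross-check, sketched with hypothetical example values for $$P_x$$ and $$w$$ (not given in the exercise):

```python
import numpy as np

# Hypothetical example parameters for the grating.
P_x, w = 1.0, 0.25

def c_numeric(s, n=200000):
    # c_s = (1/P_x) * integral over one period of t(x) * exp(-2*pi*i*s*x/P_x) dx,
    # approximated as a mean over a uniform grid covering one period.
    x = np.linspace(-P_x / 2, P_x / 2, n, endpoint=False)
    t = (np.abs(x) < w / 2).astype(float)          # t(x) = rect(x/w)
    return np.mean(t * np.exp(-2j * np.pi * s * x / P_x))

def c_analytic(s):
    # (w/P_x) * sinc(s*w/P_x), with numpy's normalized sinc(u) = sin(pi*u)/(pi*u).
    return (w / P_x) * np.sinc(s * w / P_x)

for s in range(4):
    print(s, c_numeric(s).real, c_analytic(s))
```

The $$s = 0$$ coefficient is just the duty cycle $$w/P_x$$, which is a convenient sanity check.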

## sql server – Why changing the column in the ORDER BY section of window function “MAX() OVER()” affects the final result?

I have a table with the below structure and its data:

```sql
create table test_table
(
    Item_index   int,
    Item_name    varchar(50)
)

insert into test_table (Item_index, Item_name) values (0, 'A')
insert into test_table (Item_index, Item_name) values (1, 'B')
insert into test_table (Item_index, Item_name) values (0, 'C')
insert into test_table (Item_index, Item_name) values (1, 'D')
insert into test_table (Item_index, Item_name) values (0, 'E')
```

I want to know why changing the column in the `order by` section of the query changes the result. In `QUERY-1` I used `item_index`, and in `QUERY-2` I used the `item_name` column in the order by section. I thought that both queries must generate the same result, because I used `item_index` in both queries for partitioning! I’m completely confused now. Why should the order by column affect the final result?

QUERY-1:

```sql
select t.*,
       max(t.Item_name) over (partition by t.item_index order by item_index) new_column
from test_table t;
```

RESULT:

```
Item_index  Item_name  new_column
----------  ---------  ----------
0           A          E
0           C          E
0           E          E
1           D          D
1           B          D
```

QUERY-2:

```sql
select t.*,
       max(t.Item_name) over (partition by t.item_index order by item_name) new_column
from test_table t;
```

RESULT:

```
Item_index  Item_name  new_column
----------  ---------  ----------
0           A          A
0           C          C
0           E          E
1           B          B
1           D          D
```

Can anybody explain how exactly these two queries are executed and why each of them generates a different result?
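For reference, the behavior can be reproduced outside SQL Server, e.g. in SQLite from Python (a sketch; SQLite follows the same SQL-standard default window frame, `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`). When the `order by` column equals the partitioning column, every row in the partition is a peer, so the frame spans the whole partition; when it is a different column, the frame grows row by row and `max` becomes a running maximum:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table test_table (Item_index int, Item_name varchar(50));
insert into test_table values (0,'A'),(1,'B'),(0,'C'),(1,'D'),(0,'E');
""")

# ORDER BY Item_index: all rows in a partition share the sort key, so they are
# peers and the default RANGE frame covers the entire partition.
q1 = conn.execute("""
select Item_index, Item_name,
       max(Item_name) over (partition by Item_index order by Item_index)
from test_table order by Item_index, Item_name
""").fetchall()

# ORDER BY Item_name: distinct sort keys, so the frame ends at the current
# row and the window max is a running max.
q2 = conn.execute("""
select Item_index, Item_name,
       max(Item_name) over (partition by Item_index order by Item_name)
from test_table order by Item_index, Item_name
""").fetchall()

print(q1)
print(q2)
```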

## argument patterns – Unusual performance of the Cases function

Recently I was surprised by the unusual performance of the `Cases` function. The code

```mathematica
Cases[{{1, a}, {2, b}, {3, c}, {4, e}}, X_ /; X[[1]] == Part[RandomSample[{1, 3}, 1], 1]]
```

may return `{{1, a}}`, `{{3, c}}`, `{{1, a}, {3, c}}`, or `{}`. Try to reproduce this by running

```mathematica
Table[Cases[{{1, a}, {2, b}, {3, c}, {4, e}}, X_ /; X[[1]] == Part[RandomSample[{1, 3}, 1], 1]], {q, 1, 200}]
```

Whereas explicit substitution of the value into the pattern, as here,

```mathematica
Selected = Part[RandomSample[{1, 3}, 1], 1];
Cases[{{1, a}, {2, b}, {3, c}, {4, e}}, X_ /; X[[1]] == Selected]
```

always works properly. As I understand it, Mathematica has difficulties with the evaluation of the pattern condition in the body of `Cases`.
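The same effect can be imitated in Python (a hypothetical analogy, not a statement about Mathematica internals): a predicate that draws a fresh random value for every element behaves differently from one that fixes the value once before filtering.

```python
import random

data = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'e')]

# Predicate re-evaluated per element: each element is compared against a
# *different* random draw, so the result can be any subset of {(1,'a'), (3,'c')},
# including the empty list or both elements at once.
per_element = [p for p in data if p[0] == random.choice([1, 3])]

# Value fixed before filtering (the analogue of assigning Selected first):
# exactly one of the two candidates is always returned.
selected = random.choice([1, 3])
fixed = [p for p in data if p[0] == selected]

print(per_element, fixed)
```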

## real analysis – Do we have full control the oscillation of a function by modifying it on a small set?

Definitions and some motivation:

Let $$\mathcal B$$ be the set of bounded measurable functions from $$(0, 1)$$ to $$\mathbb R$$. Denote by $$\mathcal N$$ the set of measurable subsets of $$(0, 1)$$ with Lebesgue measure $$0$$.

Given a function $$f \in \mathcal B$$, define the function $$\mathcal O f$$ by

$$\mathcal O f(x) := \inf_{N \in \mathcal N} \lim_{\delta \to 0} \sup_{y, z \in B_\delta (x) \setminus N} |f(y) - f(z)|$$.

Thanks to Lusin’s theorem, we know that we can modify $$f$$ on an arbitrarily small set to get a continuous function, and so we can force the oscillation to be $$0$$ everywhere. But can we force it to be whatever we want?
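As a small illustration of the definition (my example, not from the original post), take $$f = \mathbf{1}_{\mathbb{Q} \cap (0,1)}$$: the infimum over null sets kills oscillation that the ordinary oscillation sees.

```latex
% Removing the null set N = \mathbb{Q} \cap (0,1) leaves f \equiv 0 off N, so
\mathcal{O}f(x)
  = \inf_{N \in \mathcal{N}} \lim_{\delta \to 0}
    \sup_{y, z \in B_\delta(x) \setminus N} |f(y) - f(z)|
  = 0 \quad \text{for every } x \in (0,1),
% whereas without removing a null set the oscillation of f equals 1 everywhere,
% since every ball contains both rationals and irrationals.
```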

Question:

Does there exist, for any $$f, g \in \mathcal B$$ and $$\varepsilon > 0$$, a function $$f' \in \mathcal B$$ such that the following conditions are satisfied?

i) $$f' = f$$ everywhere except for a set of measure at most $$\varepsilon$$.

ii) $$\mathcal O f' = \mathcal O g$$ everywhere.

Note: All functions are genuine functions and not equivalence classes modulo null sets of such.

## A condition under which an Lp function is L-infinity

I am looking for a condition under which a function in $$L_p(\Omega)$$ is also in $$L_\infty(\Omega)$$. The condition may be on the function itself, or on $$\Omega$$.

In other words, is there anything that guarantees that a p-integrable function is bounded?
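One standard sufficient condition, sketched here for reference (a well-known fact, assuming $$\Omega$$ has finite measure; not a complete answer): $$f \in L_\infty(\Omega)$$ exactly when its $$L_p$$ norms stay bounded, since on a finite measure space

```latex
\|f\|_{L_\infty(\Omega)} = \lim_{p \to \infty} \|f\|_{L_p(\Omega)},
\qquad |\Omega| < \infty.
```

So $$\sup_p \|f\|_{L_p(\Omega)} < \infty$$ guarantees boundedness, whereas membership in a single $$L_p$$ does not: $$f(x) = \log(1/x)$$ on $$(0,1)$$ lies in every $$L_p$$ with $$p < \infty$$ but not in $$L_\infty$$.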


## probability – “First principles” proof of the limit of the expected excess of the uniform renewal function

The closed form of the expected number of samples for $$\sum_r X_r \geqslant t$$, $$X_r \sim \text{U}(0,1)$$, is given by:

$$m(t) = \sum_{k=0}^{\lfloor t \rfloor} \frac{(k-t)^k}{k!}e^{t-k}$$

From this we can deduce the expected amount by which this sum exceeds $$t$$, namely:

$$\varepsilon(t) = \frac{m(t)}{2} - t$$

From knowing that $$m(t) \to 2t+\dfrac{2}{3}$$, we can easily see that $$\varepsilon(t) \to \dfrac{1}{3}$$.

Is there a simple (“low tech”) way of proving that $$\varepsilon(t) \to \dfrac{1}{3}$$ without first proving that $$m(t) \to 2t+\dfrac{2}{3}$$?
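Not a proof, but the limit is easy to check numerically from the closed form (a sketch; $$m(1) = e$$ is the classical special case of the expected sample count):

```python
import math

def m(t):
    # Closed form for the expected number of U(0,1) samples whose sum first
    # exceeds t: m(t) = sum_{k=0}^{floor(t)} (k - t)^k * e^(t - k) / k!
    return sum((k - t)**k * math.exp(t - k) / math.factorial(k)
               for k in range(math.floor(t) + 1))

def excess(t):
    # Expected overshoot epsilon(t) = m(t)/2 - t, via Wald's identity
    # E[S_N] = E[N] * E[X] = m(t)/2.
    return m(t) / 2 - t

# The overshoot settles onto 1/3 very quickly as t grows.
for t in [1, 5, 10, 20]:
    print(t, m(t), excess(t))
```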

## javascript – Do all dynamically typed languages not support function overloading?

The issue is easier to understand if you consider carefully what it really means to overload a function.

Overloaded methods e.g. in Java are really two completely separate entities; they share no byte code, no address, nothing except their name; and their name isn’t really the same either, since in the compiler symbol table, a `print()` method for `int`s and a `print()` method for `String`s actually have a mangled name that contains both the user-visible identifier (“print”) and additional information encoding the argument type.

Now contrast this with JavaScript, where a `print()` function really is called `print` and nothing else. The runtime system only knows that it is a function; what arguments it expects and how it deals with them is entirely defined by the code in the function’s body. Therefore, defining a second function `print` simply overwrites the previous one rather than adding a second implementation.

The details vary a bit from language to language, but the gist is, if you don’t have an explicit representation of data types in your compile-time/run-time system, you can’t use them to tell elements of the system apart, and that is why overloading on types is largely restricted to systems with a strong presence of types in the language definition.
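The overwriting behavior described above is the same in Python, another dynamically typed language (a minimal sketch):

```python
def greet(n):
    # First definition: intended for integers.
    return "int: " + str(n)

def greet(s):
    # Same name: this rebinds 'greet', and the first definition is simply
    # gone; there is no dispatch on the argument's type.
    return "any: " + str(s)

print(greet(5))  # the second definition handles every call
```

Dynamically typed languages can still offer dispatch as an explicit opt-in (e.g. Python's `functools.singledispatch`), but that is library machinery layered on top, not compiler name mangling.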