differential equations – NDSolve problem: WhenEvent detects the event, but the derivative of the function does not change

I have struggled a lot lately with the combination of NDSolve and WhenEvent. At this point I think it must be a bug of some sort.

I solve a differential equation with NDSolve, using WhenEvent to detect “impacts”, i.e., time instants when θ[t] == 0. At such an instant, I want to reduce θ'[t] by multiplying it by, say, 0.5.

Here’s an example:

α = 10 π/180; p = 1.4; αp = 5 g Tan[α]; g = 9.81; tmax = 1;

Eq = θ''[t] + 
    p^2 (Sin[α Sign[θ[t]] - θ[t]] + 
       αp (Sin[1 t] - Cos[8 t])/g Cos[α Sign[θ[t]] - θ[t]]) == 0;

s = NDSolve[{Eq, θ[0] == 0, θ'[0] == 0, 
    WhenEvent[θ[t] == 0, {Print[t], θ'[t] -> 0.5 θ'[t]}, 
     "DetectionMethod" -> "Interpolation"]}, θ, {t, 0, 1}];

When I run the code, it detects an impact successfully (as t is printed), around 0.43 seconds:

[screenshot of the Print output showing the event time]

Yet the derivative is not changed. If I plot θ'(t), it shows that “something” happens to θ'(t), like a change in its slope, but there is no step change as I would expect.

[plot of θ'(t): a change in slope at the event, but no step change]

If I change the code into:

s = NDSolve[{Eq, θ[0] == 0, θ'[0] == 0, 
    WhenEvent[θ[t] == 0.000001, {Print[t], θ'[t] -> 0.5 θ'[t]}, 
     "DetectionMethod" -> "Interpolation"]}, θ, {t, 0, 1}];

i.e., when I change θ[t] == 0 to θ[t] == 0.000001 inside the WhenEvent, the change in derivative works:

[plot of θ'(t) behaving as it should, with the step change]

I have experienced similar problems with other excitation functions in place of αp (Sin[1 t] - Cos[8 t])/g, which is just an example. Even with 0.000001 in the event, the derivative sometimes changes and sometimes doesn't. I have also tried all of the "DetectionMethod" options.
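For comparison, here is a minimal bouncing-ball-style sketch (not my system; y and ball are just illustrative names) in which the same kind of velocity reset inside WhenEvent does produce a visible step in the derivative:

(* comparison sketch only: a falling mass whose velocity is halved and reversed at y == 0 *)
ball = NDSolve[{y''[t] == -9.81, y[0] == 1, y'[0] == 0, 
    WhenEvent[y[t] == 0, y'[t] -> -0.5 y'[t]]}, y, {t, 0, 2}];
Plot[Evaluate[y'[t] /. ball], {t, 0, 2}]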

Any ideas?

TIA

matrices – derivative of matrix to a power – random walk

Hi, I was solving a random walk problem which required calculating the limit of a matrix power as t approaches 0. I found that P^t -> 0 as t -> 0, so I used L'Hôpital's rule to find the derivative for this expression. However, I'm not sure how to take the derivative of a matrix raised to a variable power at t = 0.
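For what it's worth, if $P^t$ is interpreted as $\exp(t\log P)$ for some matrix logarithm of $P$ (an assumption on my part about how the power is defined here), then the usual identity is
$$\frac{d}{dt}P^t=\frac{d}{dt}\exp(t\log P)=(\log P)\exp(t\log P)=P^t\log P,$$
which at $t=0$ reduces to $\log P$.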

Here is the problem and part of my solution for it.

[image: the problem statement and part of my solution]

Thank you.

plotting – Plot a functional relation involving a derivative and inverse

I have a function f(x) obtained by solving a certain ODE. Thus, it is given as an interpolation function. I need to plot $\frac{dx}{df}$ as a function of $f$.

Below I will give a simple analytical example just to explain what I mean. Let $$f(x)=\arctan(x).$$ Then we have
$$\frac{df}{dx}=\frac{1}{1+x^2},\quad \text{or} \quad \frac{dx}{df}=1+x^2.$$

Now we express $x$ in terms of $f$, i.e., $$x=\tan(f),$$ and substitute in the equation above:
$$\frac{dx}{df}=1+x^2=1+\tan(f)^2.$$

Thus, given $f(x)=\arctan(x)$, I would like to get a plot of $$1+\tan(f)^2.$$

One naive way to do it is to parametrize $f$ and $\frac{dx}{df}$ in terms of $x$ and use ParametricPlot:

ParametricPlot[{f[x], 1/f'[x]}, {x, -10, 10}, AspectRatio -> 1]

However, for my numerically defined function this does not work very well. Additionally, I would like to get the dependence in a functional form, ideally again as an interpolation function. How can I achieve this? Maybe it is possible to formulate the problem as an ODE and use NDSolve?
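One possible direction is sketched below; it assumes $f$ is monotonic on the interval, so its values can serve as interpolation abscissas, and the names fInterp, xs, data, dxdf are only illustrative (ArcTan stands in for the numerically obtained InterpolatingFunction):

(* sample {f[x], 1/f'[x]} on a grid and rebuild dx/df as a function of f *)
fInterp[x_] := ArcTan[x];                      (* stand-in for the numerical solution *)
xs = Subdivide[-10., 10., 400];
data = {fInterp[#], 1/fInterp'[#]} & /@ xs;    (* pairs {f, dx/df}; needs monotonic f *)
dxdf = Interpolation[data];                    (* dx/df as an InterpolatingFunction of f *)
Plot[dxdf[u], {u, fInterp[-10.], fInterp[10.]}]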

Estimate involving third order derivative

For $f\in C^3(0,1)$, $f(0)=f(1)=f'(0)=0$, I need to prove there exists $C>0$ such that
$$|f'(1)|\leq C\left(\|f\|_{L^2}+\|f'+f'''\|_{L^2}\right)$$

My attempt:

Since $f(0)=f(1)=0$, there exists $a\in (0,1)$ such that $f'(a)=0$; combined with $f'(0)=0$, we get $b\in (0,1)$ such that $f''(b)=0$, so that
$$f'(1)=\int_0^1\int_b^t(f'(s)+f'''(s))\,ds\,dt-\int_0^1 f(t)\,dt+f(b).$$
The first and second terms can be controlled easily, but how do I control $f(b)$? Any help will be appreciated.

integration – Calculate the derivative of $F(t)=\int_0^t dz\int_0^z dy\int_0^y(y-z)^2f(x)\,dx$

The original problem in the book is finding a way to prove that $$F(t)=\int_0^t dz\int_0^z dy\int_0^y(y-z)^2f(x)\,dx$$ has a derivative $\frac{dF}{dt}=\frac{1}{3}\int_0^t(t-x)^3f(x)\,dx$.

I know how to get $F(t)$ expressed without multiple integrals, which is, if I am right, $$F(t)=\int_0^t\frac{1}{12}(x-t)^4f(x)\,dx,$$ but I don't know how to proceed.
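For reference, a sketch of how that single-integral form arises by repeatedly swapping the order of integration (assuming $f$ is continuous):
$$F(t)=\int_0^t dz\int_0^z f(x)\int_x^z(y-z)^2\,dy\,dx=\int_0^t dz\int_0^z\frac{(z-x)^3}{3}f(x)\,dx=\int_0^t f(x)\int_x^t\frac{(z-x)^3}{3}\,dz\,dx=\int_0^t\frac{(t-x)^4}{12}f(x)\,dx,$$
and differentiating the last expression under the integral sign gives $\frac{dF}{dt}=\frac{1}{3}\int_0^t(t-x)^3f(x)\,dx$, since the boundary term vanishes at $x=t$.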

pr.probability – Integral bound for square of log derivative

I am currently facing the following problem:

Given a polynomial $f(x) = \sum_{s \in S_f} u_s x^s$, $f(0)\neq 0$, $\lvert S_f \rvert \leq t$ (i.e. $f$ is $t$-sparse), with $u_s$ coming as samples from i.i.d. $\mathcal{N}(0,1)$-distributed variables, bound

$$ \int_0^1 \bigg(\frac{f'(x)}{f(x)}\bigg)^2 dx = \int_0^1 \bigg(\frac{d}{dx} \log(\lvert f(x) \rvert)\bigg)^2 dx $$

in terms of the coefficients $u_s$ and the sparsity $t$, but not in terms of $\deg(f)$. It is not too difficult to bound $\int_0^1 \frac{f'(x)}{f(x)} dx = \log(\lvert f(x) \rvert)\rvert_0^1 = \log(\lvert f(1) \rvert) - \log(\lvert f(0) \rvert)$ if $f(0) \neq 0$, as we can plug in upper and lower bounds for $\log$. This makes me hopeful that a bound for the squared integrand should exist too. In the worst case, a bound on the expectation of the integral with respect to the $u_s$ would also suffice, i.e. a bound for

$$ \mathbb{E}_{u_s,\, s\in S_f} \bigg(\int_0^1 \bigg(\frac{f'(x)}{f(x)}\bigg)^2 dx \bigg). $$

It would take too long to explain where this comes from – I arrived at this problem looking at zero distributions of certain polynomials.

Thank you for all your ideas!

Upper derivative of the modified Bessel function of the first kind and order $\alpha$, $j_\alpha$?

I calculated the upper derivative of the modified Bessel function of the first kind and order $\alpha$, $j_{\alpha}$, with respect to its variable using Maple, but I could not prove it, for example by induction.
Can you help me, or give me another way to show it?
[image: the computed upper derivative of $j_\alpha$]

linear algebra – Derivative of a norm

I learned not to use the Norm[] function when computing vector derivatives, so I use the dot product instead:

In: D[x.x, x]
Out: 1.x + x.1

What does the result mean? Is 1 = (1, 1, ..., 1) here? Why can't it show just 2x as the result?
And Mathematica won't resolve it when I define x:

In: 1.x + x .1 /. x -> {3, 4}
Out: {0.3 + 1.{3, 4}, 0.4 + 1.{3, 4}}
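A possible reading of what is going on, plus one common workaround (a sketch; expr, xv, and a are my own illustrative names, and the odd 0.3/0.4 terms above suggest that the retyped x .1 was re-tokenized as 0.1 x rather than Dot[x, 1]):

(* The derivative comes back with Dot heads: the 1 is the scalar derivative of x
   with respect to itself, not a vector of ones. *)
expr = D[x.x, x];
FullForm[expr]          (* Plus[Dot[1, x], Dot[x, 1]] *)
expr /. x -> {3, 4}     (* Dot with the scalar 1 typically stays unevaluated *)

(* One common workaround: differentiate with respect to an explicit vector of symbols,
   which gives the expected 2x componentwise. *)
xv = Array[a, 2];       (* {a[1], a[2]} *)
D[xv.xv, {xv}]          (* {2 a[1], 2 a[2]}, i.e. 2 xv *)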

Taking the derivative of the following integral

$$h(x)=H(x,x)$$
where
$$H(x,y)=\int_0^x\!\!\int_0^y f(u,v)\,du\,dv.$$
By the chain rule,
$$h'(1)=H_1(1,1)+H_2(1,1)$$
where $H_1(x,y)$ is the partial derivative of $H$ with respect
to its first argument and $H_2(x,y)$ is the partial derivative of $H$ with respect
to its second argument. Then
$$H_1(x,y)=\int_0^y f(x,v)\,dv$$
and so
$$H_1(1,1)=\int_0^1 f(1,t)\,dt.$$
Likewise
$$H_2(1,1)=\int_0^1 f(t,1)\,dt.$$

matrix calculus – second order derivative of the loss function of logistic regression

For the loss function of logistic regression
$$
\ell = \sum_{i=1}^n \left( y_i \boldsymbol{\beta}^T \mathbf{x}_{i} - \log \left(1 + \exp( \boldsymbol{\beta}^T \mathbf{x}_{i}) \right) \right)
$$

I understand that its first order derivative is
$$
\frac{\partial \ell}{\partial \beta} = \boldsymbol{X}^T(\boldsymbol{y} - \boldsymbol{p})
$$

where
$$
p = \frac{\exp(\boldsymbol{X} \cdot \beta)}{1 + \exp(\boldsymbol{X} \cdot \beta)}
$$

and its second order derivative is

$$
\frac{\partial^2 \ell}{\partial \beta^2} = \boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X}
$$

where $\boldsymbol{W}$ is an $n\times n$ diagonal matrix whose $i$-th diagonal element is $p_i(1-p_i)$. However, I am struggling with the first- and second-order derivatives of the loss function of logistic regression with L2 regularization:

$$
\ell = \sum_{i=1}^n \left( y_i \boldsymbol{\beta}^T \mathbf{x}_{i} - \log \left(1 + \exp( \boldsymbol{\beta}^T \mathbf{x}_{i}) \right) \right) + \lambda \sum_{j=1}^{p}\beta_j^2
$$

I try to extrapolate $\boldsymbol{X}^T(\boldsymbol{y} - \boldsymbol{p})$ and $\boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X}$ by simply adding one more term according to my meager knowledge of calculus, making them $\boldsymbol{X}^T(\boldsymbol{y} - \boldsymbol{p}) + 2\lambda\boldsymbol{\beta}$ and $\boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X} + 2\lambda$.

But it appears to me that things do not work this way. So what are the correct first- and second-order derivatives of the loss function for logistic regression with L2 regularization?
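For reference, the derivatives of the penalty term alone (independent of the sign conventions above) are
$$\frac{\partial}{\partial \boldsymbol{\beta}}\,\lambda\sum_{j=1}^{p}\beta_j^2 = 2\lambda\boldsymbol{\beta}, \qquad \frac{\partial^2}{\partial \boldsymbol{\beta}\,\partial \boldsymbol{\beta}^T}\,\lambda\sum_{j=1}^{p}\beta_j^2 = 2\lambda\boldsymbol{I}_p,$$
so the term added to the Hessian is the $p\times p$ matrix $2\lambda\boldsymbol{I}_p$ rather than the scalar $2\lambda$.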