## fa.functional analysis – Existence of periodic solution to ODE

We shall consider the matrix-valued differential operator

$$(Lu)(x) := -i u'(x) - \begin{pmatrix} 0 & 1+2\sin\left(2\pi x-\frac{\pi}{6}\right)\\ 1-2\sin\left(2\pi x+\frac{\pi}{6}\right) & 0 \end{pmatrix} u(x).$$

This is a $$1$$-periodic operator. Does there exist a $$\lambda \in \mathbb{C}$$ and a $$1$$-periodic solution $$u$$ of this ODE such that

$$(L - \lambda)u = 0\,?$$

Probably there is no explicit solution, but can we show the existence of such a solution?
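One route to existence is Floquet theory. Writing $$u(x) = e^{i\lambda x} w(x)$$ turns $$(L-\lambda)u=0$$, i.e. $$u' = i(M(x)+\lambda)u$$ with $$M(x)$$ the matrix above, into $$w' = iM(x)w$$. The monodromy matrix of the full system is then $$e^{i\lambda}\Phi(1)$$, where $$\Phi$$ solves $$\Phi' = iM\Phi$$, $$\Phi(0)=I$$, and a $$1$$-periodic solution exists exactly when $$e^{i\lambda}\Phi(1)$$ has eigenvalue $$1$$, i.e. for $$\lambda = i\log\rho$$ with $$\rho$$ any (necessarily nonzero) eigenvalue of $$\Phi(1)$$. A numerical sketch of this check in Python (numpy/scipy assumed; my own code, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The 1-periodic matrix from the question.
def M(x):
    return np.array(
        [[0, 1 + 2 * np.sin(2 * np.pi * x - np.pi / 6)],
         [1 - 2 * np.sin(2 * np.pi * x + np.pi / 6), 0]], dtype=complex)

# Integrate Phi' = i M(x) Phi, Phi(0) = I, as a flattened 2x2 system.
def rhs(x, y):
    return (1j * M(x) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, 1.0), np.eye(2, dtype=complex).ravel(),
                rtol=1e-10, atol=1e-12)
Phi1 = sol.y[:, -1].reshape(2, 2)      # monodromy matrix of w' = i M w

# Pick an eigenvalue rho of Phi(1); lambda = i log(rho) makes the Floquet
# multiplier e^{i lambda} * rho of the full system equal to 1, so the
# corresponding eigenvector is initial data for a 1-periodic solution.
rho, vecs = np.linalg.eig(Phi1)
lam = 1j * np.log(rho[0])
u0 = vecs[:, 0]
```

So a suitable $$\lambda$$ always exists; the computation only supplies its numerical value.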

## Finite-time criterion for ODE

In the article *Finite-Time Stability of Continuous Autonomous Systems* I found the following (page 4).

Here is what I don’t understand:

1. Can (2.7), $$\dot{y}(t) = -k \cdot \operatorname{sign}(y(t)) \cdot \lvert y(t) \rvert^{\alpha}$$, be reformulated as a condition for convergence in finite time, i.e. $$\dot{y}(t) + k \cdot \operatorname{sign}(y(t)) \cdot \lvert y(t) \rvert^{\alpha} = 0$$?
2. What if I have an equation of the form $$\dot{y} = \nabla_y f$$ and want $$\nabla_y f \rightarrow 0$$ in finite time? Does this mean that I must meet the condition $$\frac{d}{dt}\nabla_y f + k \cdot \operatorname{sign}(\nabla_y f) \cdot \lvert \nabla_y f \rvert^{\alpha} = 0$$?
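Regarding question 1: the two forms are the same equation rearranged, and for $$0 < \alpha < 1$$ the solution reaches zero at the finite time $$T = \lvert y(0)\rvert^{1-\alpha}/\big(k(1-\alpha)\big)$$, whereas linear decay $$\dot y = -ky$$ only approaches zero asymptotically. A quick numerical illustration in Python (my own sketch, not from the article):

```python
import math

k, alpha, y0, dt = 1.0, 0.5, 1.0, 1e-4
# Analytic settling time: |y0|^(1-alpha) / (k (1-alpha)) = 2.0 here,
# and the exact trajectory is y(t) = (1 - t/2)^2 for t <= 2.
T = abs(y0) ** (1 - alpha) / (k * (1 - alpha))

y, t, y_mid = y0, 0.0, None
while t < T + 0.5:
    step = -k * math.copysign(1.0, y) * abs(y) ** alpha * dt if y != 0 else 0.0
    # Clamp: if the Euler step would overshoot zero, land exactly on zero
    # (mimics the sliding behaviour of the continuous dynamics).
    y = 0.0 if abs(step) >= abs(y) else y + step
    t += dt
    if y_mid is None and t >= 1.0:
        y_mid = y          # analytic value: (1 - 1/2)^2 = 0.25
```

After the loop `y` has settled to (numerically exact) zero, strictly before any exponential decay would have.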

## differential equations – Series solution of an ODE with nonpolynomial coefficients

Basically, I have a second-order differential equation for `g(y)` (given below as `odey`), and I want to obtain a series solution at $$y=\infty$$ where `g(y)` should vanish. That would be easy if the ODE had polynomial coefficients, in which case the Frobenius method could be used. But in my case the coefficients are not polynomial, because of the presence of powers proportional to `p` (which can take positive non-integer values). I have also expanded `ir` at infinity and kept terms up to first order (given by `irInf`), since using `ir` directly would make a mess of the ODE later.

``````
ir[y_] := Sqrt[-5 + y^2 + (3 2^(1/3))/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) - (6 2^(1/3) y^2)/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) + (3 (2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3))/2^(1/3)]
dir[y_] := D[ir[x], x] /. x -> y
irInf[y_] = Series[ir[y], {y, Infinity, 1}] // Normal

p = 1/10; (* p >= 0 *)
odey = (2 irInf[y] - p irInf[y]^(1 - p)) D[irInf[y], y] g'[y] + irInf[y]^2 g''[y] - l (l + 1) g[y] // Simplify
``````

What steps can I take to solve this? Thanks


## differential equations – NDSolve. Can I use numerical ODE solution as PDE initial condition?

I want to use the numerical solution of an ODE as an initial condition for a 1D linear PDE, but it seems Wolfram Mathematica has some issues with it.

Let’s consider an example. Here we have a nonlinear ODE system:

``````
s = NDSolve[{u'[x] == -3 v[x] + x, v'[x] == u[x] - v[x]^3,
   u[0] == -1, v[0] == 1}, {u, v}, {x, 0, 50}]
``````

Then we use `v[x]` as an initial condition for the 1D diffusion equation…

``````
NDSolve[{D[H[x, t], t] == D[H[x, t], x, x], H[x, 2] == v[x]},
  H, {t, 0, 10}, {x, 0, 10}]
``````

… and receive the following error:

``````
NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 2.
``````

I know that

> This message is generated when non-numerical expressions are encountered in a differential equation.

But what can I do to solve this problem? I don’t want to substitute `v[x]` with an approximation.

Is there any way to write a system of three equations where two of them are ODEs and the third one is a PDE?
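In Mathematica the usual workaround is to extract the `InterpolatingFunction` from `s` (e.g. `vsol = v /. First[s]`) and use `vsol[x]` in the second `NDSolve`, so the initial condition evaluates numerically. The same two-step workflow can be sketched language-agnostically in Python with scipy (my own sketch; the boundary conditions, which the question leaves unstated, are chosen here as fixed values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Step 1: solve the nonlinear ODE system u' = -3 v + x, v' = u - v^3.
def ode(x, y):
    u, v = y
    return [-3 * v + x, u - v ** 3]

xs = np.linspace(0.0, 10.0, 201)                  # spatial grid for the PDE
ode_sol = solve_ivp(ode, (0.0, 10.0), [-1.0, 1.0], t_eval=xs,
                    rtol=1e-8, atol=1e-10)
v0 = ode_sol.y[1]                                 # v on the grid -> H(x, 2)

# Step 2: diffusion H_t = H_xx by the method of lines on the same grid,
# holding the boundary values fixed (one possible choice of BCs).
dx = xs[1] - xs[0]
def heat(t, H):
    dH = np.zeros_like(H)
    dH[1:-1] = (H[2:] - 2 * H[1:-1] + H[:-2]) / dx ** 2
    return dH

pde_sol = solve_ivp(heat, (2.0, 10.0), v0, rtol=1e-6, atol=1e-8)
H_final = pde_sol.y[:, -1]
```

The point is that the ODE solution is sampled into plain numbers before the PDE solve ever sees it, which is exactly what the `InterpolatingFunction` substitution achieves in Mathematica.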

## ordinary differential equations – ODE eigenvalue problem with unusual boundary conditions

I am given:

$$y'' + \lambda y = 0, \qquad y(0) = 0, \qquad (1-\lambda)\,y(1) + y'(1) = 0$$

As usual, we are looking for nontrivial solutions.
It looks like a standard eigenvalue problem, and yet I am totally stuck.
The case $$\lambda = 0$$ is rather obvious: $$A = B = 0$$. Not much fun.
But when I start trying $$\lambda$$ greater or smaller than zero, I get to this:

1. $$\lambda < 0$$, the solution is of the form:
$$B\big((1+\omega^2)\sinh(\omega)+\omega\cosh(\omega)\big)=0, \quad \text{where } \lambda=-\omega^2$$

2. $$\lambda > 0$$, the solution is of the form:

$$B\big((1-\omega^2)\sin(\omega)+\omega\cos(\omega)\big)=0, \quad \text{where } \lambda=\omega^2$$

The question states: Find the nontrivial stationary paths, stating clearly the eigenfunctions $$y$$. In case 1) I can’t see any nontrivial solutions, but… well, in the second case I can’t see any either. I know there are solutions.
Any help would be highly appreciated.
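For case 2) nontrivial solutions do exist: $$f(\omega)=(1-\omega^2)\sin(\omega)+\omega\cos(\omega)$$ changes sign (e.g. $$f(1)=\cos(1)>0$$ while $$f(1.5)<0$$), so there is a root in $$(1, 1.5)$$, and roughly one per interval of length $$\pi$$ after that. In case 1) there are indeed none, since $$(1+\omega^2)\sinh(\omega)+\omega\cosh(\omega)>0$$ for every $$\omega>0$$. A quick bisection check in Python (my own sketch):

```python
import math

def f(w):
    # Eigenvalue condition for lambda = w^2 > 0 with y(x) = B sin(w x):
    # (1 - w^2) sin w + w cos w = 0.
    return (1 - w ** 2) * math.sin(w) + w * math.cos(w)

def bisect(fn, a, b, tol=1e-12):
    # Plain bisection; assumes fn(a) and fn(b) have opposite signs.
    fa = fn(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = fn(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# f changes sign on (1, 1.5), so a nontrivial eigenvalue lives there.
w1 = bisect(f, 1.0, 1.5)
lam1 = w1 ** 2          # eigenvalue, with eigenfunction y(x) = sin(w1 x)
```

The same sign-change argument (applied near each multiple of $$\pi$$) gives infinitely many positive eigenvalues.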

## algorithms – How to use Runge–Kutta methods in a second order ODE

Consider a second order equation $$F=ma=m\ddot{x}$$.

In the language of Euler’s method:

1. $$\ddot{x}(t+dt)=F(t,x(t),\dot x(t))$$
2. $$\dot{x}(t+dt)=\dot x(t)+a(t)\,dt$$
3. $$x(t+dt)=x(t)+\dot x(t)\,dt$$

Basically, the entire iteration was driven by the second order force equation $$F(t,x(t),\dot x(t))$$.

However, here one wanted to apply the Runge–Kutta method (standard “RK4”, not Runge–Kutta–Nyström or the like) to solve that equation.
For notational simplicity, let $$v\sim \dot x$$, $$a\sim \ddot x$$ (velocity and acceleration).

From $$x(t),v(t),a(t)$$:

$$ka1=F(t,x(t),\dot x(t))$$; $$kv1=v(t)+a(t)\frac{dt}{2}$$; $$kx1=x(t)+v(t)\frac{dt}{2}$$;

$$ka2=F(t+\frac{dt}{2},kx1,kv1)$$; $$kv2=kv1+ka1\cdot \frac{dt}{2}$$; $$kx2=kx1+kv1\cdot \frac{dt}{2}$$;

$$ka3=F(t+\frac{dt}{2},kx2,kv2)$$; $$kv3=kv2+ka2\cdot dt$$; $$kx3=kx2+kv2\cdot dt$$;
and finally

$$ka4=F(t+dt,kx3,kv3)$$;

The final acceleration was collected as

Iteration Update Method 01

$$\ddot x(t+dt)=(ka1+ka2\cdot 2+ka3\cdot 2+ka4)/6$$; where

$$\dot x(t+dt)=\dot x(t)+\ddot x(t)\,dt$$;

$$x(t+dt)=x(t)+\dot x(t)\,dt$$

With such a loop the iteration ran as an RK4 method, and things worked very nicely: the results coincided with the analytical calculation for an example on which Euler’s method was known to fail. (Tested.)

However, my question arises from a mistake that made the code fail. It went like this:

In addition to the $$\ddot x(t+dt)$$ update, one further averaged the speed directly,

Iteration Update Method 02

$$\ddot x(t+dt)=(ka1+ka2\cdot 2+ka3\cdot 2+ka4)/6$$;

$$\dot x(t+dt)=(\dot x(t)+kv1\cdot 2+kv2\cdot 2+kv3)/6$$; and

$$x(t+dt)=x(t)+\dot x(t)\,dt$$;

Notice that the new update method attempted to use $$\dot x(t)$$ to take care of the “overflow”. Intuitively, not only was the acceleration $$\ddot x$$ “RK4”-ed, but the velocity $$\dot x$$ was “RK4”-ed as well. However, though it still produced a sensible graph, Iteration Update Method 02 was shown by the analysis to have failed.

Question 1: Why did Iteration Update Method 01 work, even though it approximated $$\frac{d^2y(t)}{dt^2}=f(t,y(t),\dot y(t))$$ instead of $$\frac{dy(t)}{dt}=f(t,y(t))$$?

Question 2: Why did Iteration Update Method 02 fail? Shouldn’t it work better, since it averaged more quantities?
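For reference, the textbook way to RK4 a second-order equation is to treat $$(x, v)$$ as a single first-order system and advance both components with the same four stages, which is close in spirit to Method 01. A minimal Python sketch (my own code, not the poster’s):

```python
import math

def rk4_step(F, t, x, v, dt):
    # Treat (x, v) as one first-order system: x' = v, v' = F(t, x, v),
    # and advance both components through the same four RK4 stages.
    k1x = v
    k1v = F(t, x, v)
    k2x = v + k1v * dt / 2
    k2v = F(t + dt / 2, x + k1x * dt / 2, v + k1v * dt / 2)
    k3x = v + k2v * dt / 2
    k3v = F(t + dt / 2, x + k2x * dt / 2, v + k2v * dt / 2)
    k4x = v + k3v * dt
    k4v = F(t + dt, x + k3x * dt, v + k3v * dt)
    x_new = x + (k1x + 2 * k2x + 2 * k3x + k4x) * dt / 6
    v_new = v + (k1v + 2 * k2v + 2 * k3v + k4v) * dt / 6
    return x_new, v_new

# Test problem: harmonic oscillator xdd = -x, exact solution x(t) = cos t.
F = lambda t, x, v: -x
t, x, v, dt = 0.0, 1.0, 0.0, 0.01
for _ in range(1000):          # integrate to t = 10
    x, v = rk4_step(F, t, x, v, dt)
    t += dt

err = abs(x - math.cos(10.0))  # tiny, reflecting fourth-order accuracy
```

The key design point is that each stage’s $$kx$$ uses that stage’s velocity and each $$kv$$ uses that stage’s acceleration; mixing stages (as Method 02 does with its velocity average) breaks the cancellation of error terms that gives RK4 its order.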

## linear transformations – Need help solving this ODE with a 3×3 matrix

I have a 3×3 matrix $$A$$ with the following elements:

$$A = \begin{pmatrix} 5 & -4 & 2 \\ 4 & -5 & 4 \\ 6 & -12 & 9 \end{pmatrix}$$

The ODE is $$y' = Ay$$ with $$y(0) = (1, 0, 1)^T$$.

I found the eigenvalue to be 3 with multiplicity 3. I found the two eigenvectors to be
$$(2, 1, 0)^T$$ and $$(-1, 0, 1)^T$$. I feel like I need a third linearly independent solution to solve this ODE, but I’m not sure what to do. Also, I’m not sure how to type matrices, so sorry for the confusion.
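Since $$\lambda = 3$$ has algebraic multiplicity 3 but only two eigenvectors, the missing third solution comes from a generalized eigenvector. In this particular case things are even simpler: $$N = A - 3I$$ satisfies $$N^2 = 0$$, so the exponential series truncates to $$e^{At} = e^{3t}(I + tN)$$ and $$y(t) = e^{3t}\big(y(0) + tNy(0)\big)$$. A numerical sanity check (numpy/scipy assumed; a sketch, not a full derivation):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[5, -4, 2],
              [4, -5, 4],
              [6, -12, 9]], dtype=float)
y0 = np.array([1.0, 0.0, 1.0])

N = A - 3 * np.eye(3)          # single eigenvalue lambda = 3
# N is nilpotent here (N @ N == 0), so e^{At} = e^{3t} (I + t N), giving
# the closed-form solution below.
def y(t):
    return np.exp(3 * t) * (y0 + t * (N @ y0))
```

Comparing `y(t)` against `expm(t * A) @ y0` confirms the formula.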

## differential equations – Simplifying DSolve ODE solution

I want to solve the following ODE

``````
eqsm = {r*A'[r] == 1 - A[r] - G*r^2 - 8 Pi r^2 rho,
   2 r*A[r]*T'[r]/T[r] == A[r] - 1 + G*r^2 - 8 Pi r^2 P};
``````

for `A`, `T`, where `G`, `rho`, `P` are some parameters. I tried

``````
solm = DSolve[eqsm, {A, T}, r]
``````

which gives

``````
{{A -> Function[{r}, (r - (G r^3)/3 - 8/3 Pi r^3 rho)/r + C[1]/r],
  T -> Function[{r},
    E^(-(1/2) RootSum[
         3 C[1] + 3 #1 - G #1^3 - 8 Pi rho #1^3 &, (-Log[r - #1] +
            G Log[r - #1] #1^2 - 8 P Pi Log[r - #1] #1^2)/(-1 +
            G #1^2 + 8 Pi rho #1^2) &]) Sqrt[r] C[2]]}}
``````

I got rid of the `RootSum` in the solution for `T` by using the following code, which I found at Forcing solutions to avoid Root[]:

``````
f[x_] := Normal[DSolveValue[eqsm, {A, T}, r]][[2]][x];
sollong =
  Replace[f[x], {Root[x_, y_] :> ToRadicals[Root[x, y]]}, Infinity];
sollong // Simplify
``````

The resulting expression is too massive to report here. Any ideas on how to simplify it further? From how it looks, it seems this should be possible.

## differential equations – Interpreting ODESolve pseudo-code in Mathematica

I’ve been working through some literature on ODEs and keep coming across the pseudo-code expression below, and I’m having trouble interpreting it for use in the Wolfram Language. I feel like I’ve solved many initial value problems over the years with `NDSolve`, but now I think I need help from someone who works with ODEs in a more robust capacity to describe the differences.
The above is from here, which states that `h(t0)` is the initial state of the system, `G` is the function that defines the system, `t0` and `t1` are the start and end times, and `Theta` is the parameters. (PyTorch also has an implementation I’ve been looking at.)
As I researched ways to understand the parameters, I kept coming across pseudo-code examples using `ODESolve`. Should I be using a different MMA function? Maybe there is an option I’m not understanding? What’s the most straightforward way to think about this pseudo-code example in Mathematica?
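On my reading (not authoritative), `ODESolve(h(t0), G, t0, t1, Theta)` is just a generic initial-value solve: integrate `h'[t] == G[t, h[t], Theta]` from `t0` to `t1` and return `h[t1]`, which in Mathematica corresponds to `NDSolveValue` evaluated at `t1`. The same shape in Python with scipy (the decay system `G` below is a made-up example, not from the literature being quoted):

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

# ODESolve(h(t0), G, t0, t1, theta): integrate h'(t) = G(t, h(t), theta)
# from t0 to t1 and return the final state h(t1).
def ODESolve(h0, G, t0, t1, theta):
    sol = solve_ivp(lambda t, h: G(t, h, theta), (t0, t1), h0,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Made-up example system: linear decay h' = -theta * h,
# whose exact flow is h(t1) = h0 * exp(-theta * (t1 - t0)).
G = lambda t, h, theta: -theta * h
h1 = ODESolve(np.array([1.0]), G, 0.0, 1.0, 0.5)
```

So `Theta` is nothing exotic: it is simply whatever parameters the vector field `G` closes over, exactly like parameters appearing inside the equations handed to `NDSolve`.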