differential equations – Finding an analytic solution with a JacobiSD function

We are searching for an analytic solution to the given equation for $f_\text{n}(u)$, for $u \in (0, d/2)$ (this problem is a snippet from this paper here)
$$-\partial^2_{u} f_\text{n} + \left\lbrack 1 - f_\text{n}^{2} + \frac{j_{u}^2}{f_\text{n}^4}\right\rbrack f_\text{n} = 0$$

With a little manipulation (integrate from 0 to $x$, use that $f_\text{n}$ is minimal at $x=0$, so $f_\text{n}'(0)=0$, …):

$$\left(\frac{\partial f_n^2}{\partial u}\right)^2 = 2 (f_n^2 - f^2_0) \left( f_n^2 (2 - f_n^2 - f^2_0) + \frac{2j_u^2}{f^2_0}\right)$$

or more usefully:
$$ 2\left(\frac{\partial\Psi}{\partial u}\right)^2 = \left( (\Psi^2 + f_0^2) (2 - \Psi^2 - 2f^2_0) + \frac{2j_u^2}{f^2_0}\right).$$
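For the record, the intermediate step: multiplying the original ODE by $\partial_u f_\text{n}$ and integrating once, using $\partial_u f_\text{n}(0) = 0$ and writing $f_0 \equiv f_\text{n}(0)$, gives the first integral

$$\left(\partial_u f_\text{n}\right)^2 = (f_\text{n}^2 - f_0^2)\left(1 - \frac{f_\text{n}^2 + f_0^2}{2} + \frac{j_u^2}{f_\text{n}^2 f_0^2}\right),$$

and multiplying through by $4 f_\text{n}^2$ recovers the first form above.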

I know from extensive hand-calculation that this has a Jacobi elliptic function solution.

$$\Psi_n^2 = f_n^2 - f_0^2 = \frac{b_n^2 a_n^2}{a_n^2 + b_n^2} \operatorname{sd}^2\left(u \sqrt{\frac{a_n^2 + b_n^2}{2}}, \sqrt{\frac{b_n^2}{a_n^2 + b_n^2}}\right)$$

where
$$
a_n^2 = -\left(1-\frac{3}{2}f_0^2\right) + \sqrt{2\left(1-\frac{1}{2}f_0^2\right)^2 + \frac{2j_u^2}{f_0^2}}
\\
b_n^2 = +\left(1-\frac{3}{2}f_0^2\right) + \sqrt{2\left(1-\frac{1}{2}f_0^2\right)^2 + \frac{2j_u^2}{f_0^2}}
$$

When I try and replicate this in Mathematica, I obtain a different solution:

an^2 == -(1 - (3/2) f0^2) + Sqrt[2 (1 - f0^2/2)^2 + 2 ju^2/f0^2]
bn^2 == +(1 - (3/2) f0^2) + Sqrt[2 (1 - f0^2/2)^2 + 2 ju^2/f0^2]
eqn = {Sqrt[2]*Psi'[x] == Sqrt[(an^2 + Psi[x]^2)*(bn^2 - Psi[x]^2)]}
sol = DSolve[eqn, Psi[x], x]

Output:
{{Psi[x] -> bn JacobiSN[(Sqrt[2] an x + 2 an C[1])/2, -(bn^2/an^2)]}}

It is not clear to me that these solutions are equivalent; moreover, the form looks completely different, quite apart from involving a different Jacobi elliptic function. I have only used DSolve in the past, and I am not sure how to use Reduce to find a better form of the solution.
Any advice would be gratefully received. My end goal is to find the analytic solution, ideally recovering the expressions for $a_n$ and $b_n$ as well, though I appreciate some mathematical manipulation will be necessary!
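For what it's worth, a quick numeric check suggests the two forms do coincide: they appear to be related by the standard imaginary-modulus transformation $\operatorname{sn}(u \mid m) = \operatorname{sd}\!\left(u\sqrt{1-m} \,\Big|\, \tfrac{-m}{1-m}\right)/\sqrt{1-m}$. A minimal sketch, with illustrative values assumed for f0 and ju; note that JacobiSD[u, m] and JacobiSN[u, m] take the parameter $m = k^2$, whereas the formula above is written with the modulus $k$:

f0 = 0.3; ju = 0.05;  (* illustrative values *)
an2 = -(1 - (3/2) f0^2) + Sqrt[2 (1 - f0^2/2)^2 + 2 ju^2/f0^2];
bn2 = (1 - (3/2) f0^2) + Sqrt[2 (1 - f0^2/2)^2 + 2 ju^2/f0^2];
(* hand-derived sd form; second argument is the parameter m, not the modulus k *)
psiSD[u_] := Sqrt[an2 bn2/(an2 + bn2)] JacobiSD[u Sqrt[(an2 + bn2)/2], bn2/(an2 + bn2)]
(* DSolve's sn form, with C[1] = 0 *)
psiSN[u_] := Sqrt[bn2] JacobiSN[Sqrt[an2/2] u, -bn2/an2]
Plot[{psiSD[u], psiSN[u]}, {u, 0, 6}, PlotStyle -> {Thick, Dashed}]

If the transformation holds, the two curves should overlay exactly.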

parametric functions – How to solve this set of PDE equations? I tried ParametricNDSolve, but

I am trying to solve a set of PDE equations for the unknown functions v(z), q(z), T(z,t,r). Below is my test code, where I tried ParametricNDSolve but failed.

ClearAll("Global`*") ;
equ = With({v = v@z, q = q@z, T = T @@ {z, t, r}, p = 1/(1 + T^(-3/2)), 
A = NIntegrate(p*Exp(-q*r^2) r, {r, 0, inf}),  B = NIntegrate(p*Exp(-2 q*r^2) r, {r, 0, inf})},
{-v D(q, z) + q*D(v, z) == -q^2*v*A,
-v*D(q, z) + 2 q*D(v, z) == -4*q^2*v*B,
D(T + (p + 1) v^2 Exp(-2 q r^2), t) + 1/r D(r*p*v^2 Exp(-2 q r^2), r) + D(p*v^2 Exp(-2 q r^2),z) == p*v^2 Exp(-2 q r^2)})
ic = {v(0) == 1, q(0) == 1, T(0, 0, 0) == 1}
{vsol,qsol,Tsol}=ParametricNDSolveValue({equ, ic}, {v, q, T}, {z, 0, 10}, {t, r})

Could anyone help me out? Thanks.
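For reference, the basic calling pattern of ParametricNDSolveValue on a toy ODE looks like this (my own minimal example; note that the final argument lists parameters, i.e. quantities held fixed during each solve, not independent variables such as t and r above):

(* toy problem: decay rate a as the parameter *)
psol = ParametricNDSolveValue[{y'[z] == -a y[z], y[0] == 1}, y, {z, 0, 10}, {a}];
Plot[{psol[1][z], psol[2][z]}, {z, 0, 10}, PlotLegends -> {"a = 1", "a = 2"}]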

Integral Equations where the Unknown Function depends on the Integration Dummy

Consider the following integral equation:
$$ f(s) = \int_a^b K(s,t)\, g(s,t)\, dt, $$
where $f$ and $K$ are known functions, and $g$ is the unknown function we want to solve for.

Is there any way to solve this kind of integral equation? All the books I can find on integral equations do not allow $g$ to be a function of $s$. I’m really curious about this more general situation, where $g$ is a function of both $t$ and $s$, but I couldn’t find any reference to go to.

Thanks in advance for any help and consideration!
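One remark that may explain the gap in the literature (my own observation, to be checked): for each fixed $s$ the equation imposes only a single linear constraint on the function $t \mapsto g(s,t)$, so $g$ is far from unique. For instance, whenever $\int_a^b K(s,\tau)^2\, d\tau \neq 0$, the choice

$$ g(s,t) = \frac{f(s)\, K(s,t)}{\int_a^b K(s,\tau)^2\, d\tau} $$

already satisfies the equation, as substitution immediately shows; it is the minimum-norm solution for each $s$.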

differential equations – Asymptotic Output Tracking – Where to Place the Input Control Signal?

Asymptotic Output Tracking: Code Issues

I ask for help from specialists in differential equations, dynamical systems, optimal control, and general control theory.

I have the following system of differential equations:

$$\begin{cases} \frac{dx(t)}{dt}=G(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t) \cdot \alpha \sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t) \cdot \left(\frac{16}{\alpha^2}\left(\sin(\omega t)-\frac{1}{2}\right)\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$

where $x, z, G, H, X$ are variables; $f=-(x(t)+\alpha \sin(\omega t)-x_e)^2$; and $\alpha, \omega$ are parameters.

As an output $y$, I assign:

$y=\tanh(k \cdot H(t))$

As a reference signal $r_1$, I assign:

$r_1=-1$

As a time constant $p_1$, I assign:

$p_1=-1$

Well, I tried to program this in Mathematica and ran into a difficulty that I can’t get past yet. Question: in which of the equations should the control signal $u(t)$ be placed?

I chose the first equation; the original system of equations then looks like this:

$$\begin{cases} \frac{dx(t)}{dt}=G(t)+u(t) \\ \frac{dz(t)}{dt}+z(t)=\frac{df}{dt} \\ \frac{dG(t)}{dt}+G(t)=z(t) \cdot \alpha \sin(\omega t) \\ \frac{dH(t)}{dt}+H(t)=z(t) \cdot \left(\frac{16}{\alpha^2}\left(\sin(\omega t)-\frac{1}{2}\right)\right) \\ \frac{dX(t)}{dt}+X(t)=\frac{dx(t)}{dt} \end{cases}$$


Clear("Derivative")

ClearAll("Global`*")

Needs("Parallel`Developer`")

S(t) = (Alpha) Sin((Omega) t)

M(t) = 16/(Alpha)^2 (Sin((Omega) t) - 1/2)

f = -(x(t) + S(t) - xe)^2

Parallelize(
 asys = AffineStateSpaceModel({x'(t) == G(t) + u(t), 
     z'(t) + z(t) == D(f, t), G'(t) + G(t) == z(t) S(t), 
     H'(t) + H(t) == z(t) M(t), 
     1/k X'(t) + X(t) == D(x(t), t)}, {{x(t), xs}, {z(t), 0.1}, {G(t),
       0}, {H(t), 0}, {X(t), 0}}, {u(t)}, {Tanh(k H(t))}, t) // 
   Simplify)

pars1 = {Subscript(r, 1) -> -1, Subscript(p, 1) -> -1}

Parallelize(
 fb = AsymptoticOutputTracker(asys, {-1}, {-1, -1}) // Simplify)

pars = {xs = -1, xe = 1, (Alpha) = 0.3, (Omega) = 2 Pi*1/2/Pi, 
  k = 100, (Mu) = 1}

Parallelize(
 csys = SystemsModelStateFeedbackConnect(asys, fb) /. pars1 // 
    Simplify // Chop)

plots = {OutputResponse({csys}, {0, 0}, {t, 0, 1})}

At the end, I get an error.

At t == 0.005418556209176463`, step size is effectively zero; 
singularity or stiff system suspected

It seems to me that this is because either there is a singularity somewhere in the system, or I have put the control input signal into the wrong equation. I need the support of a theorist who can help me choose the right sequence of steps to solve the problem.

I would be glad to any advice and help.

differential equations – How are Derivatives supposed to be used with DSolve?

How are Derivatives supposed to be used with DSolve?

I’m trying to solve a simple general PDE $-y_{tt}+\beta y=0$ with $y_t(0), y_t(1)$ Neumann BCs.

The documentation wasn’t very helpful

https://reference.wolfram.com/language/ref/DSolve.html

Boundary conditions for PDEs can be given …

But it doesn’t seem to specify how to input those BCs.
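For reference, a minimal pattern that works for this equation (treated as an ODE in $t$; the Neumann values 0 and 1 are illustrative, since none were specified):

(* Neumann BCs are stated directly as equations on y' *)
DSolve[{-y''[t] + \[Beta] y[t] == 0, y'[0] == 0, y'[1] == 1}, y[t], t]

The explicit form Derivative[1][y][0] == 0 is equivalent to y'[0] == 0.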

differential equations – boundary values with DSolve and plotting

I have been examining https://library.wolfram.com/infocenter/Books/8509/DifferentialEquationSolvingWithDSolve.pdf but cannot resolve my syntax issue

eqn = 0.1*y''[x] + 2 y'[x] + 2 y[x] == 0;
sol = DSolve[{eqn, y[0] == 0, y[1] == 1}, y[x], x]

but this yields

[screenshot of the DSolve output, in which the boundary conditions appear as True, True]

I am not sure why it returns True, True instead of applying the boundary conditions.
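For comparison, a version that works here (my guess at the culprit: a leftover definition for y from an earlier evaluation can make the boundary equations evaluate to True before DSolve sees them, and the inexact coefficient 0.1 is better made exact):

ClearAll[y, x];
eqn = (1/10) y''[x] + 2 y'[x] + 2 y[x] == 0;
sol = DSolve[{eqn, y[0] == 0, y[1] == 1}, y, x];
Plot[Evaluate[y[x] /. sol], {x, 0, 1}]

Solving for y (rather than y[x]) also makes the result directly plottable.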

differential equations – time dependent hamiltonian with random numbers

I have a Hamiltonian (Z) in matrix form. I solved it for time-independent random real numbers; now I want to introduce time dependence in such a way that at any time the random real numbers vary over the range {-Sqrt[3 \[Sigma]2], Sqrt[3 \[Sigma]2]}. Here is my code:

Nmax = 100; (*Number of sites*)

tini = 0; (*initial time*)

tmax = 200; (*maximal time*)

\[Sigma]2 = 0.1; (*Variance*)

n0 = 50; (*initial condition*)

ra = 1; (*coupling range*)

\[Psi]ini = Table[KroneckerDelta[n0 - i], {i, 1, Nmax}];

RR = RandomReal[{-Sqrt[3*\[Sigma]2], Sqrt[3*\[Sigma]2]}, Nmax];

Z = Table[
    Sum[KroneckerDelta[i - j + k], {k, 1, ra}] +
     Sum[KroneckerDelta[i - j - k], {k, 1, ra}], {i, 1, Nmax}, {j, 1,
     Nmax}] + DiagonalMatrix[RR];

usol = NDSolveValue[{I D[\[Psi][t], t] ==
     Z.\[Psi][t], \[Psi][0] == \[Psi]ini}, \[Psi], {t, tini, tmax}];

What can I do to introduce this time dependence and solve the differential equation (usol)? I hope my question is clear.
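A minimal sketch of one way to do it (my assumption about the intent: redraw the disorder on a grid of times and hold it constant in between; the redraw interval dt is an illustrative choice):

dt = 1; (* illustrative redraw interval *)
times = Range[tini, tmax, dt];
(* one random diagonal per grid time; InterpolationOrder -> 0 holds each draw constant *)
RRt = Interpolation[
   Transpose[{times, RandomReal[{-Sqrt[3*\[Sigma]2], Sqrt[3*\[Sigma]2]}, {Length[times], Nmax}]}],
   InterpolationOrder -> 0];
Zhop = Z - DiagonalMatrix[RR]; (* hopping part of the earlier Z *)
Zt[t_?NumericQ] := Zhop + DiagonalMatrix[RRt[t]];
usolT = NDSolveValue[{I \[Psi]'[t] == Zt[t].\[Psi][t], \[Psi][0] == \[Psi]ini},
   \[Psi], {t, tini, tmax}, MaxStepSize -> dt/2];

MaxStepSize keeps the integrator from stepping over the jumps in the piecewise-constant coefficients.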

differential equations – How to pose Dirichlet and Neumann BCs on same boundary?

Let’s look at the Laplace equation in a rectangular area:

Eq0 = Inactive[Laplacian][u[x, y], {x, y}]
\[CapitalOmega] = Rectangle[{0, 0}, {2, 1}]

and try to solve it with various pairs of Dirichlet and Neumann BCs on the horizontal boundaries:

BCD0 = DirichletCondition[u[x, y] == 0, y == 0]
BCD1 = DirichletCondition[u[x, y] == 1, y == 1]
BCN0 = NeumannValue[1, y == 0]
BCN1 = NeumannValue[1, y == 1]

NDSolve yields a reasonable solution when the Dirichlet and Neumann BCs are posed on different edges of the rectangle. For example:

u1 = NDSolveValue[{Eq0 == BCN1, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u1[x, y], {x, y} \[Element] \[CapitalOmega],
 AspectRatio -> Automatic
  , PlotLegends -> Automatic]

[contour plot of u1 over the rectangle]

However, it fails if the BCs are set on the same edge:

u2 = NDSolveValue[{Eq0 == BCN0, BCD0},
  u, {x, y} \[Element] \[CapitalOmega]]
ContourPlot[u2[x, y], {x, y} \[Element] \[CapitalOmega],
 AspectRatio -> Automatic
  , PlotLegends -> Automatic]

[output of the failed attempt]

Nevertheless, it is obvious that a solution exists and is equal to u[x_, y_] = y.

My question is: is it possible to set two BCs on the same edge of the rectangle?
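In case it helps frame answers: one arrangement I would expect to work (a sketch, not verified here) is posing the two conditions on disjoint parts of the same edge, since both DirichletCondition and NeumannValue accept general predicates:

u3 = NDSolveValue[{Eq0 == NeumannValue[1, y == 0 && x > 1],
   DirichletCondition[u[x, y] == 0, y == 0 && x <= 1],
   BCD1}, u, {x, y} \[Element] \[CapitalOmega]]

Posing both conditions at the same points, by contrast, amounts to a Cauchy problem for the Laplace equation, which is generally ill-posed.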

differential equations – Series solution of an ODE with nonpolynomial coefficients

Basically, I have a second-order differential equation for g(y) (given below as odey), and I want to obtain a series solution at $y=\infty$, where g(y) should vanish. That would be easy if the ODE had polynomial coefficients, since then the Frobenius method could be used. But in my case the coefficients are not polynomial, because of the presence of powers proportional to p (which can take positive non-integer values). I have also expanded ir at infinity and kept terms up to first order (given by irInf), since using ir directly would make the ODE a mess later on.

ir[y_] := Sqrt[-5 + y^2 + (3 2^(1/3))/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) - (6 2^(1/3) y^2)/(2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3) + (3 (2 + 10 y^2 - y^4 + Sqrt[64 y^2 + 48 y^4 + 12 y^6 + y^8])^(1/3))/2^(1/3)]
dir[y_] := D[ir[x], x] /. x -> y
irInf[y_] = Series[ir[y], {y, Infinity, 1}] // Normal

p = 1/10; (*p >= 0*)
odey = (2 irInf[y] - p irInf[y]^(1 - p)) D[irInf[y], y] g'[y] + irInf[y]^2 g''[y] - l (l + 1) g[y] // Simplify

What steps can I take to solve this? Thanks
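One concrete thing that may be worth trying (a sketch; l is given an illustrative value, and I'm assuming AsymptoticDSolveValue, available in recent versions, accepts an expansion point at Infinity):

l = 2; (* illustrative *)
AsymptoticDSolveValue[odey == 0, g[y], {y, Infinity, 4}]

If that fails because of the non-integer powers, substituting $y = 1/x$ first and expanding at $x = 0$ is the usual fallback.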
