Differential Equations – Complex Solutions for ODEs

How do I solve the following IVP problem in Mathematica, so I get real solutions?

$Q'(t) = b - \dfrac{Q(t)}{100-t}; \quad Q(0) = 250$

I tried the following:

$Assumptions = b > 0; $Assumptions = t > 0;

f = DSolve[{Q'[t] == b - Q[t]/(100 - t), Q[0] == 250}, Q, t][[1, 1, 2]]

f[t]

which leads to the following:

$\frac{1}{2} \left(-2 i \pi b t - 2 b t \log (100) - 200 b \log (t-100) + 2 b t \log (t-100) + 200 i \pi b + 200 b \log (100) - 5 t + 500\right)$

Any help would be appreciated. Many Thanks!!!
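
One way to get a real-valued result is to tell Simplify that $0 < t < 100$ (so $100 - t > 0$). A minimal sketch of this approach (FullSimplify can be tried if Simplify leaves imaginary terms behind):

sol = DSolveValue[{Q'[t] == b - Q[t]/(100 - t), Q[0] == 250}, Q[t], t];
realSol = Simplify[sol, b > 0 && 0 < t < 100]
(* for 0 < t < 100, Log[t - 100] equals Log[100 - t] + I Pi, which is what should cancel the explicit I Pi b terms *)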

classical analysis and odes – Sturm-Liouville equation with finite number of eigenvalues?

Consider the following Sturm–Liouville (SL) eigenvalue problem on $x \in (-\infty, 0)$: $$ (p y')' - q y = -\lambda^2 w y, $$ where $p(x) = x^2$, $w(x) = 1$, and $q(x) = (x/2 + a)^2 + a$ with parameter $a > 0$. It has a regular singularity at $x = 0$. We basically hope for something like a homogeneous Dirichlet b.c. there.

It is solved by the substitution $y(x) = e^{x/2} x^{-\frac{1}{2} + \sqrt{(a + \frac{1}{2})^2 - \lambda^2}}\, u(x)$. This leads directly to a standard confluent hypergeometric equation $$ x u''(x) + (\gamma - x) u'(x) - \alpha u(x) = 0, $$ where $\alpha = \sqrt{(a + \frac{1}{2})^2 - \lambda^2} - a + \frac{1}{2}$ and $\gamma = 1 + 2\sqrt{(a + \frac{1}{2})^2 - \lambda^2}$. There are two independent solutions (of the first and second kind).
Let's apply the usual arguments for solving eigensystems related to confluent hypergeometric equations. Requiring non-divergence at $x = 0$, the second-kind solution is dropped. Requiring non-divergence at the infinite endpoint, the first-kind solution must reduce to a polynomial, which happens when $-\alpha$ is a non-negative integer, and an eigenvalue $\lambda^2$ is obtained.
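
Spelling out the quantization condition implied above: setting $-\alpha = n$ with $n$ a non-negative integer gives $$ \sqrt{(a+\tfrac{1}{2})^2-\lambda^2} = a-\tfrac{1}{2}-n \;\ge 0 \quad\Longrightarrow\quad \lambda_n^2 = (a+\tfrac{1}{2})^2-(a-\tfrac{1}{2}-n)^2, $$ so only the integers $0 \le n \le a-\tfrac{1}{2}$ are admissible, i.e. finitely many.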

However, from this condition on $\alpha$ we obviously have only a finite set of eigenvalues, which differs from the infinite eigenvalue spectrum that SL theory claims.

What is wrong here? Am I missing some solutions?

Differential Equations – Solving a Nonlinear System of 2nd Order ODEs with NDSolve

I'm trying to solve a second-order ODE system with five dependent variables.

The equations are:

eqn1 = D[z]*C''[z] - u*C'[z] == a*r1[z]
eqn2 = k[z]*T''[z] - q*T'[z] == a*(r1[z]*b + r2[z]*c + r3[z]*d)

where a, b, c, d are constants; k[z], D[z], and ri[z] (where i = 1, 2, 3) depend on T[z] and C[z]; C[z] and T[z] depend on each other; and where

r1[z] = (kc1[z]*keb[z]*(y1[z]*p - (y3[z]*p + y4[z]*p)/keq[z]*p))/(1 + keb[z]*y1[z]*p + kh2[z]*y4[z]*p + kst[z]*y3[z]*p)^2

r2[z] and r3[z] have the same form as r1[z].

I have the following constraints:

initConds = {T[0] == 893.15, T'[L] == 0, C[0] == 0.992, C'[L] == 0};

where L = 9.

I tried to use NDSolve as shown below, but the code did not solve the system, and the following error message was displayed.

s = NDSolve[{eqns, initConds}, {T[z], C[z]}, {z, 0, 9, 0.01}]

NDSolve::ntdv: Cannot solve to find an explicit formula for the derivatives. Consider using the option Method -> {"EquationSimplification" -> "Residual"}.

Assuming Mathematica can solve my system, how should I write the code? With NDSolve, or some other method?
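
For what it's worth, a minimal runnable sketch of the call shape for a two-point boundary value problem of this type (conditions at both z = 0 and z = L). The coefficients and the rate rr1 below are made-up placeholders, not the actual kinetics, and c/temp are used instead of C/T because C and D are built-in Mathematica symbols:

Lz = 9; u = 0.2; q = 0.1; a = 0.1; b = 1.;
rr1[cc_, tt_] := (1 - cc) (1 + 0.001 tt);   (* made-up placeholder rate *)
eqn1 = 0.5 c''[z] - u c'[z] == a rr1[c[z], temp[z]];
eqn2 = 2.0 temp''[z] - q temp'[z] == a b rr1[c[z], temp[z]];
bcs = {temp[0] == 893.15, temp'[Lz] == 0, c[0] == 0.992, c'[Lz] == 0};
sol = NDSolve[{eqn1, eqn2, bcs}, {c, temp}, {z, 0, Lz}];
Plot[Evaluate[{c[z], temp[z]/900.} /. sol], {z, 0, Lz}, PlotRange -> All]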

Differential Equations – Draw Gradient and Newton Directions for the Parametric System of Nonlinear ODEs

I have a system of chemical kinetics equations, a nonlinear dynamical system, given by:
equations

kf >= 0 and kr >= 0 are the parameters. The initial conditions are A(0) = B(0) = 1 and C(0) = 0. I generated data according to y1 = C(0.5) + noise and y2 = C(2) + noise, where the noise is normally distributed with mu = 0 and sigma = 0.1, using kf = 0.1 and kr = 2.

odes = {A'[t] == ...,
   B'[t] == ...,
   C1'[t] == ...,
   A[0] == 1, B[0] == 1, C1[0] == 0};
odesData = odes /. {kf -> 0.1, kr -> 2};
solution = NDSolve[odesData, {A, B, C1}, {t, 0, 5}][[1]];
SeedRandom[10]
y1 = C1[0.5] + RandomVariate[NormalDistribution[0, 0.1]] /. solution
SeedRandom[30]
y2 = C1[2] + RandomVariate[NormalDistribution[0, 0.1]] /. solution
data = {{0.5, y1}, {2, y2}};

I have to visualize the data with an input/output plot, a parameter-space plot, and a data-space plot. For the parameter-space and data-space plots, I also have to plot the gradient and Newton directions for several randomly selected parameter values.

With ParametricNDSolve I can get the input/output plot.

kfmax = 1;
krmax = 5;
numsteps = 100;
kfrange = Range[0, kfmax, kfmax/numsteps];
krrange = Range[0, krmax, krmax/numsteps];

soln = ParametricNDSolve[odes, {A, B, C1}, {t, 0, 5}, {kf, kr}];

eqns = Evaluate[Table[C1[kf, kr] ...
feweqns = Flatten[eqns][[ ;; ;; 1000]];
eqnsplot = Plot[feweqns, {t, 0, 2.5}, PlotRange -> All]
bestfitplot = Plot[model[kf, kr] ...
 AxesLabel -> {"t", "C..."}]

Input-Output

Using ParametricNDSolveValue and FindFit, I can find the best-fit parameters for the model (the line uses the fitted parameters and the points are the generated data).

model = ParametricNDSolveValue[odes, C1, {t, 0, 5}, {kf, kr}]
fit = FindFit[data, {model[kf, kr] ...
 C1' ...
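
The FindFit line above is cut off; a guess at the intended call shape (the starting values here are just assumptions) would be something like:

fit = FindFit[data, model[kf, kr][t], {{kf, 1}, {kr, 1}}, t]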

bestfitplot = Plot[model[kf, kr] ...
validptplot = ListPlot[{{0.5, y1}, {2, y2}}]; (* the data we are trying to match *)
Show[bestfitplot, validptplot]

Best fit line and generated data

I can also visualize the parameter space using the log of the cost function:

list1 = eqns /. t -> 0.5;
list2 = eqns /. t -> 2;
cost = (list1 - y1)^2 + (list2 - y2)^2; cost // MatrixForm; (* kf-th row and kr-th column *)
parspaceplot = ListContourPlot[Log[cost], PlotLegends -> Automatic, DataRange -> {{0, krmax}, {0, kfmax}}, FrameLabel -> {"kr", "kf"}, Contours -> 50];
bestfitptplot = ListPlot[{{kr, kf}} /. fit];
parspace = Show[parspaceplot, bestfitptplot]

Parameter space image

as well as the data space image

max = 2;
dy1 = 0.2; dy2 = dy1;
modely1 = MapThread[model[#1, #2][0.5] &, Table[{i, j}, {i, 0, max, dy1}, {j, 0, max, dy2}][[#]]\[Transpose]] & /@ Range[max/dy1];
modely2 = MapThread[model[#1, #2][2] &, Table[{i, j}, {i, 0, max, dy1}, {j, 0, max, dy2}][[#]]\[Transpose]] & /@ Range[max/dy1];
plot1 = ListPlot[{modely1[[#]], modely2[[#]]}\[Transpose] & /@ Range[max/dy1], Joined -> True, PlotStyle -> Blue, AxesLabel -> {"y1", "y2"}];
plot2 = ListPlot[{modely1\[Transpose][[#]], modely2\[Transpose][[#]]}\[Transpose] & /@ Range[max/dy1], Joined -> True, PlotStyle -> Red];
plot3 = ListPlot[{{{y1, y2}}, {{model[kf, kr][0.5] /. fit, model[kf, kr][2] /. fit}}}, PlotStyle -> {{Black, PointSize[0.025]}, {Green}}];
Show[plot1, plot2, plot3]

Data Space

However, when I try to compute the gradient direction (j'.r) and the Newton direction (solving (j'.j).x == -grad for x), where j is the Jacobian, j'.j is the Fisher information matrix, r is the residual vector, and j' denotes the transpose of j, I do not get what I think I should get. That is, the gradient direction is not perpendicular to the contour lines.
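
For reference, the standard least-squares relations being used here, with $m(\theta)$ the model predictions, $r = y - m(\theta)$ the residuals, and $J = \partial m/\partial\theta$ the Jacobian: $$ S(\theta) = \tfrac{1}{2}\, r^\top r, \qquad \nabla S = -J^\top r, \qquad (J^\top J)\,\delta_{\mathrm{Newton}} = J^\top r = -\nabla S, $$ so the (Gauss–)Newton step is $\delta_{\mathrm{Newton}} = (J^\top J)^{-1} J^\top r$.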

SeedRandom[]
randmax = 10;
randoms = Table[{RandomReal[{0, krmax}], RandomReal[{0, kfmax}]}, {i, randmax}];
r = {y1, y2} - (model[kf, kr][#] & /@ {0.5, 2});
j = -(Grad[model[kf, kr][#], {kf, kr}] & /@ {0.5, 2})/0.1;
grad = j\[Transpose].r;
fish = j\[Transpose].j /. {kr -> randoms[[#, 1]], kf -> randoms[[#, 2]]} & /@ Range[randmax];
grads = grad /. {kr -> randoms[[#, 1]], kf -> randoms[[#, 2]]} & /@ Range[randmax];
newts = LinearSolve[fish[[#]], -grads[[#]]] & /@ Range[randmax];
gradarrows = Graphics[{Black, Arrow[{randoms[[#]], randoms[[#]] + Normalize[grads[[#]]]/2.5}] & /@ Range[randmax]}];
newtarrows = Graphics[{Blue, Arrow[{randoms[[#]], randoms[[#]] + Normalize[newts[[#]]]/10}] & /@ Range[randmax]}];
Show[parspace, gradarrows, newtarrows]

Parameter space with (attempted) gradient and Newton directions

Things still do not look right, even if I use cost instead of Log[cost].

This is a homework assignment for a course in predictive modeling. A Jupyter notebook is provided for the class, and I know how to numerically calculate the Jacobian there (derive the sensitivity equations and solve the 9 equations simultaneously with odeint), but I've invested enough in this code that I want to see it through to the end. I could do the same in Mathematica with NDSolve and the 9 equations, but it seems like I should be able to get the Jacobian from the parametric function generated by ParametricNDSolve (similar to the parameter sensitivity section of the ParametricNDSolve reference page).
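
A sketch of that idea, under the assumption (suggested by the parameter-sensitivity examples in the ParametricNDSolve documentation) that derivatives of the ParametricFunction with respect to kf and kr evaluate numerically once the parameters are numeric. The helper names jac, res, gradDir, and newtonDir are mine, not from the post, and no 1/sigma weighting is applied:

jac[kfv_?NumericQ, krv_?NumericQ] :=
  (Grad[model[kf, kr][#], {kf, kr}] & /@ {0.5, 2}) /. {kf -> kfv, kr -> krv};  (* 2x2 Jacobian of the two predictions *)
res[kfv_?NumericQ, krv_?NumericQ] := {y1, y2} - (model[kfv, krv][#] & /@ {0.5, 2});
gradDir[kfv_, krv_] := -Transpose[jac[kfv, krv]].res[kfv, krv];                (* gradient of the least-squares cost *)
newtonDir[kfv_, krv_] := LinearSolve[Transpose[jac[kfv, krv]].jac[kfv, krv], -gradDir[kfv, krv]];

The components come out in (kf, kr) order; since the parameter-space plot above has kr on the horizontal axis and kf on the vertical one, the pairs may need to be reversed before drawing the arrows.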

Any suggestions on how I can get the gradient and Newton directions? (Both math and coding recommendations are welcome.)

P.S. This is my first post, and the first time I could not solve things with what I could find online and in this forum, which has been very helpful!

Classical Analysis and Odes – Is it possible to construct a trigonometric series that converges on $(0,1)$ while diverging on $(2,3)$?

By a trigonometric series I mean

$$ f(x) = \sum_{n=1}^\infty a_n e^{i b_n x}, $$

where $a_n$, $b_n$ can be any complex numbers.

As a related question: is it possible to make $f$ so defined differentiable on $(0,1)$ but not differentiable on $(2,3)$?

Plotting – How to draw phase portraits and the ODEs' solutions

Dear ladies and gentlemen, I can draw phase portraits for a system of nonlinear ODEs,

but when I show the parametric solutions together with the phase portrait, the result does not look good.

I wrote the code as follows:

Sol[{N0_, I0_}?NumericQ] :=
 First@NDSolve[{N1'[t] == r N1[t] (1 - \[Beta] N1[t]) - \[Eta] N1[t] I1[t],
    I1'[t] == \[Sigma] + (\[Rho] N1[t] I1[t])/(m + N1[t]) - \[Delta] I1[t] - \[Mu] N1[t] I1[t],
    N1[0] == N0, I1[0] == I0}, {N1, I1}, {t, 0, 365}];
P1 = ParametricPlot[
  Evaluate[{N1[t], I1[t]} /. Sol[{...}]], {t, 0, 30},
  PlotRange -> All, AspectRatio -> Full, PlotRange -> Full,
  Frame -> True, MaxRecursion -> 8]

where

r = 0.431201; \[Beta] = 2.99*10^-6;
\[Eta] = 0.2; \[Sigma] = 0.7; \[Rho] = 0.003; m = 0.427; \[Delta] = 0.57; \[Mu] = 0.82;

and I set up the StreamPlot as follows:

f[N1_, I1_] = r N1 (1 - \[Beta] N1) - \[Eta] N1 I1;
G[N1_, I1_] = \[Sigma] + (\[Rho] N1 I1)/(m + N1) - \[Delta] I1 - \[Mu] N1 I1;
G[{N1_, I1_}] = {f[N1, I1], G[N1, I1]};

then

StreamPlot[{f[N1, I1], G[N1, I1]}, {N1, 0, 30}, {I1, 0, 30},
 StreamStyle -> Blue, AspectRatio -> Automatic, Frame -> True,
 Axes -> False, AxesLabel -> {"N1", "I1"}]
Show[StreamPlot[{f[N1, I1], G[N1, I1]}, {N1, 0, 30}, {I1, 0, 30},
  StreamPoints -> Fine, StreamStyle -> Blue, AspectRatio -> 1/2,
  Frame -> True, AxesLabel -> {"N1", "I1"}, StreamPoints -> Fine,
  PlotRange -> All], P1]

the final result was

Enter the image description here

Can someone help me improve my result?
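
For reference, a minimal sketch of overlaying a single trajectory on the stream plot, using the f and G defined above; the initial condition {N1(0), I1(0)} = {10, 5}, the time window, and the plot range are assumptions, since the post does not show them:

traj = First@NDSolve[{N1'[t] == f[N1[t], I1[t]], I1'[t] == G[N1[t], I1[t]],
    N1[0] == 10, I1[0] == 5}, {N1, I1}, {t, 0, 30}];
trajPlot = ParametricPlot[Evaluate[{N1[t], I1[t]} /. traj], {t, 0, 30},
   PlotStyle -> Red];
stream = StreamPlot[{f[N1, I1], G[N1, I1]}, {N1, 0, 30}, {I1, 0, 30},
   StreamStyle -> Blue, Frame -> True, FrameLabel -> {"N1", "I1"}];
Show[stream, trajPlot, PlotRange -> {{0, 30}, {0, 30}}]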

mathematical modeling – finding the nullclines of two coupled ODEs

So I’ve been given the following problem:

$dn_1/dt=N(t)n_1-n_1$

$dn_2/dt=N(t)n_2-4n_2$

$N(t)=25-6n_1-3n_2$

so I’ve found the fixed points of the 2 equations and got:
[(0,0),(0,7),(4,0)]

now I want to find the nullclines of them:

so for $dn_1/dt=0$ we get $n_1(24-6n_1-3n_2)=0$, which means:

  1. $n_1=0$ for all $n_2$

  2. $n_1=4-0.5n_2$

and for $dn_2/dt=0$ we get $n_2(21-6n_1-3n_2)=0$, which means:

  1. $n_2=0$ for all $n_1$

  2. $n_2=7-2n_1$
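
As a quick consistency check (just combining the nullclines above), their pairwise intersections reproduce exactly the fixed points found earlier: $$ n_1=0 \;\cap\; n_2=0 \Rightarrow (0,0), \qquad n_1=0 \;\cap\; n_2=7-2n_1 \Rightarrow (0,7), \qquad n_2=0 \;\cap\; n_1=4-\tfrac{1}{2}n_2 \Rightarrow (4,0), $$ while the two non-trivial nullclines, $n_2=8-2n_1$ and $n_2=7-2n_1$, are parallel and never intersect.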

Plotting this using MATLAB in order to see the phase plane and basin of attraction gives peculiar results which I can't manage to understand (figure attached below).
I thought the nullclines would need to pass through the fixed points, which would be more intuitive to me, but instead I get the following.

enter image description here

the Matlab code generating this plot is as follows:

clear, close all; clc
%Const
N0=25;
G1=1;G2=1;
a1=6;a2=3;
k1=1;k2=4;
%Params
t0   =   0;      % starting time
dt   =   0.5;    % step size
tEnd = 50;       % end time
timeVec=t0:dt:tEnd;
NumSteps=floor(tEnd./dt);
%Function Handlers
N_t=@(n1,n2) N0-a1.*n1-a2.*n2;
n1_dot=@(n1,n2) G1.*N_t(n1,n2).*n1-k1.*n1;
n2_dot=@(n1,n2) G2.*N_t(n1,n2).*n2-k2.*n2;

%%finding the fixed point's
%%%%%%%%%
system=@(n1,n2) [n1_dot(n1,n2);n2_dot(n1,n2)];
s = solve(system);
%%%%%%%%%

%TODO RECHECK!!!!
%%nulclines
%%%%%%%%%
%n1_dot=0
%case 1: n1  = 0
%case 2: n1 != 0

%n2_dot=0
%case 1: n2  = 0
%case 2: n2 != 0

%%%%%%%%%
ptsMesh=-2:0.2:10;
figure
[X,Y]=meshgrid(ptsMesh);
U=n1_dot(X,Y);
V=n2_dot(X,Y);
normalizedFactor=sqrt(U.^2+V.^2);
quiver(X,Y,U./normalizedFactor,V./normalizedFactor,0.8,'b')
hold on
plot(s.n1,s.n2,'k*','lineWidth',8)
%plot the trivial nullclines on the axis
[x,y]=size(ptsMesh);
nullclineN1_zero=[0,min(ptsMesh);0,max(ptsMesh)];
nullclineN2_zero=[min(ptsMesh),0;max(ptsMesh),0];
plot(nullclineN2_zero(:,1),nullclineN2_zero(:,2),'g','lineWidth',2);
plot(nullclineN1_zero(:,1),nullclineN1_zero(:,2),'r','lineWidth',2);

t1=@(x2) 4-(0.5).*x2;
res1=t1(ptsMesh./2);
plot(res1,ptsMesh,'c','lineWidth',2);


t2=@(x1) 7-(2).*x1;
res2=t2(ptsMesh./2);
plot(ptsMesh,res2,'k','lineWidth',2);

will appreciate some help here
thanks