I have a system of chemical kinetic ODEs, a nonlinear dynamical system, for a reversible reaction A + B ⇌ C with mass-action rates:

    A'(t) = -kf A(t) B(t) + kr C(t)
    B'(t) = -kf A(t) B(t) + kr C(t)
    C'(t) =  kf A(t) B(t) - kr C(t)

kf ≥ 0 and kr ≥ 0 are the parameters. The initial conditions are A(0) = B(0) = 1 and C(0) = 0. I generated data according to y1 = C(0.5) + noise and y2 = C(2) + noise, where the noise is normally distributed with μ = 0 and σ = 0.1, using kf = 0.1 and kr = 2.

```
odes = {A'[t] == -kf A[t] B[t] + kr C1[t],
   B'[t] == -kf A[t] B[t] + kr C1[t],
   C1'[t] == kf A[t] B[t] - kr C1[t],
   A[0] == 1, B[0] == 1, C1[0] == 0};  (* C is Protected, hence C1 *)
odesData = odes /. {kf -> 0.1, kr -> 2};
solution = NDSolve[odesData, {A, B, C1}, {t, 0, 5}][[1]];
SeedRandom[10]
y1 = C1[0.5] + RandomVariate[NormalDistribution[0, 0.1]] /. solution
SeedRandom[30]
y2 = C1[2] + RandomVariate[NormalDistribution[0, 0.1]] /. solution
data = {{0.5, y1}, {2, y2}};
```
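For comparison with the class notebook, the same data-generation step can be sketched in Python with SciPy. The mass-action form of the ODEs is my reading of the kinetics above, and the seed and noise draws will of course not match Mathematica's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, kf, kr):
    # A + B <-> C with mass-action kinetics
    A, B, C = y
    f = -kf*A*B + kr*C
    return [f, f, -f]

kf_true, kr_true = 0.1, 2.0
sol = solve_ivp(rhs, (0, 5), [1.0, 1.0, 0.0], args=(kf_true, kr_true),
                t_eval=[0.5, 2.0], rtol=1e-8, atol=1e-10)

rng = np.random.default_rng(10)                    # seed chosen arbitrarily
y1, y2 = sol.y[2] + rng.normal(0.0, 0.1, size=2)   # noisy C(0.5), C(2)
data = [(0.5, y1), (2.0, y2)]
```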

I need to visualize the data with an input/output plot, a parameter-space plot, and a data-space plot. For the parameter-space and data-space plots I also need to show the gradient and Newton directions at several randomly selected parameter values.

With ParametricNDSolve I can get the input/output plot:

```
kfmax = 1;
krmax = 5;
numsteps = 100;
kfrange = Range[0, kfmax, kfmax/numsteps];
krrange = Range[0, krmax, krmax/numsteps];
soln = ParametricNDSolve[odes, {A, B, C1}, {t, 0, 5}, {kf, kr}][[1]];
eqns = Table[C1[kf, kr][t] /. soln, {kf, kfrange}, {kr, krrange}];
feweqns = Flatten[eqns][[;; ;; 1000]];  (* every 1000th curve *)
eqnsplot = Plot[feweqns, {t, 0, 2.5}, PlotRange -> All,
  AxesLabel -> {"t", "C"}]
```

Using ParametricNDSolveValue and FindFit, I can find the best-fit parameters for the model (the line uses the fitted parameters; the points are the generated data).

```
model = ParametricNDSolveValue[odes, C1, {t, 0, 5}, {kf, kr}];
fit = FindFit[data, {model[kf, kr][t], kf > 0, kr > 0}, {kf, kr}, t]
bestfitplot = Plot[model[kf, kr][t] /. fit, {t, 0, 2.5},
  AxesLabel -> {"t", "C"}];
validptplot = ListPlot[{{0.5, y1}, {2, y2}}]; (* the data being matched *)
Show[bestfitplot, validptplot]
```
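The same least-squares fit can be cross-checked in Python. Here `scipy.optimize.least_squares` plays the role of FindFit; the helper names (`model_C`, `residuals`) are mine, and `y_obs` holds stand-in values for the noisy y1, y2:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, kf, kr):
    A, B, C = y
    f = -kf*A*B + kr*C
    return [f, f, -f]

def model_C(kf, kr, times):
    # C(t) at the requested times for parameters (kf, kr)
    sol = solve_ivp(rhs, (0, 5), [1.0, 1.0, 0.0], args=(kf, kr),
                    t_eval=times, rtol=1e-8, atol=1e-10)
    return sol.y[2]

y_obs = np.array([0.03, 0.05])             # stand-ins for the noisy y1, y2

def residuals(p):
    return model_C(p[0], p[1], [0.5, 2.0]) - y_obs

# nonnegativity bounds mirror the kf > 0, kr > 0 constraints in FindFit
fit = least_squares(residuals, x0=[0.5, 1.0], bounds=([0, 0], [np.inf, np.inf]))
kf_fit, kr_fit = fit.x
```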

I can also visualize the parameter space using the log of the cost function:

```
list1 = eqns /. t -> 0.5;
list2 = eqns /. t -> 2;
cost = (list1 - y1)^2 + (list2 - y2)^2;
cost // MatrixForm; (* kf-th row, kr-th column *)
parspaceplot = ListContourPlot[Log[cost], PlotLegends -> Automatic,
  DataRange -> {{0, krmax}, {0, kfmax}},
  FrameLabel -> {"kr", "kf"}, Contours -> 50];
bestfitptplot = ListPlot[{{kr, kf}} /. fit];
parspace = Show[parspaceplot, bestfitptplot]
```
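The cost surface above can be reproduced on a coarse grid in Python (the grid is deliberately much coarser than the 100 steps used in the question, just to keep the sketch fast; `y1`, `y2` are again stand-in values):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, kf, kr):
    A, B, C = y
    f = -kf*A*B + kr*C
    return [f, f, -f]

def model_C(kf, kr, times):
    sol = solve_ivp(rhs, (0, 5), [1.0, 1.0, 0.0], args=(kf, kr),
                    t_eval=times, rtol=1e-8, atol=1e-10)
    return sol.y[2]

y1, y2 = 0.03, 0.05                        # stand-ins for the noisy data
kfs = np.linspace(0.0, 1.0, 11)            # kf in [0, kfmax]
krs = np.linspace(0.0, 5.0, 11)            # kr in [0, krmax]
cost = np.empty((len(kfs), len(krs)))      # kf-th row, kr-th column
for i, kf in enumerate(kfs):
    for j, kr in enumerate(krs):
        c05, c2 = model_C(kf, kr, [0.5, 2.0])
        cost[i, j] = (c05 - y1)**2 + (c2 - y2)**2
log_cost = np.log(cost + 1e-300)           # guard against log(0) at an exact fit
```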

as well as the data-space plot:

```
max = 2;
dy1 = 0.2; dy2 = dy1;
modely1 = Table[model[i, j][0.5], {i, 0, max, dy1}, {j, 0, max, dy2}];
modely2 = Table[model[i, j][2], {i, 0, max, dy1}, {j, 0, max, dy2}];
(* rows give curves of constant kf (blue); columns, constant kr (red) *)
plot1 = ListPlot[
   Transpose[{modely1[[#]], modely2[[#]]}] & /@ Range[Length[modely1]],
   Joined -> True, PlotStyle -> Blue, AxesLabel -> {"y1", "y2"}];
plot2 = ListPlot[
   Transpose[{modely1[[All, #]], modely2[[All, #]]}] & /@
     Range[Length[First[modely1]]],
   Joined -> True, PlotStyle -> Red];
plot3 = ListPlot[{{{y1, y2}},
    {{model[kf, kr][0.5] /. fit, model[kf, kr][2] /. fit}}},
   PlotStyle -> {{Black, PointSize[0.025]}, {Green}}];
Show[plot1, plot2, plot3]
```

However, when I try to compute the gradient direction (j′.r) and the Newton direction (solving (j′.j).x == -grad for x), where j is the Jacobian, j′.j is the Fisher information matrix, r is the residual vector, and j′ denotes the transpose of j, I do not get what I expect. That is, the gradient direction is not perpendicular to the contour lines.
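For reference, here is the setup being computed, written out. Note that the "Newton" direction formed from j′.j drops the second-derivative term of the cost, so strictly speaking it is the Gauss–Newton direction:

```latex
C(\theta) = \tfrac{1}{2}\, r(\theta)^{\top} r(\theta),
\qquad r_i(\theta) = \frac{y_i - m_i(\theta)}{\sigma}

\nabla C = J^{\top} r,
\qquad J_{i\alpha} = \frac{\partial r_i}{\partial \theta_\alpha}
  = -\frac{1}{\sigma}\,\frac{\partial m_i}{\partial \theta_\alpha}

(J^{\top} J)\,\delta\theta = -J^{\top} r
\quad \text{(Gauss--Newton step; } J^{\top} J \text{ is the Fisher information)}
```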

```
SeedRandom[]
randmax = 10;
randoms = Table[{RandomReal[{0, krmax}], RandomReal[{0, kfmax}]}, {i, randmax}];
r = {y1, y2} - (model[kf, kr][#] & /@ {0.5, 2});
j = -(D[model[kf, kr][#], {{kf, kr}}] & /@ {0.5, 2})/0.1;
grad = Transpose[j].r;
fishers = (Transpose[j].j /. {kr -> randoms[[#, 1]], kf -> randoms[[#, 2]]}) & /@
   Range[randmax];
grads = (grad /. {kr -> randoms[[#, 1]], kf -> randoms[[#, 2]]}) & /@
   Range[randmax];
newts = LinearSolve[fishers[[#]], -grads[[#]]] & /@ Range[randmax];
gradarrows = Graphics[{Black,
    Arrow[{randoms[[#]], randoms[[#]] + Normalize[grads[[#]]]/2.5}] & /@
     Range[randmax]}];
newtarrows = Graphics[{Blue,
    Arrow[{randoms[[#]], randoms[[#]] + Normalize[newts[[#]]]/10}] & /@
     Range[randmax]}];
Show[parspace, gradarrows, newtarrows]
```

Things still do not look right, even if I use the plain cost instead of Log[cost].

This is a homework assignment for a course in predictive modeling. A Jupyter notebook is provided for the class, and I know how to numerically calculate the Jacobian there (derive the sensitivity equations and solve the 9 equations simultaneously with odeint), but I've already put enough work into that approach. I could do the same with Mathematica's NDSolve on the 9 equations, but it seems like I should be able to get the Jacobian from the parametric function generated by ParametricNDSolve (similar to the parameter-sensitivity section of the ParametricNDSolve reference page).
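The 9-equation sensitivity approach mentioned above can be sketched with odeint as follows. This is only a sketch under my assumptions: mass-action kinetics for A + B ⇌ C, scaled residuals, and an arbitrary evaluation point with stand-in `y_obs` values; the helper names are hypothetical.

```python
import numpy as np
from scipy.integrate import odeint

SIGMA = 0.1  # noise scale used to weight the residuals

def aug_rhs(y, t, kf, kr):
    # states (A, B, C) plus their sensitivities w.r.t. kf and kr: 9 ODEs
    A, B, C, Af, Bf, Cf, Ar, Br, Cr = y
    f = -kf*A*B + kr*C                     # A' = B' = f, C' = -f
    ff = -A*B - kf*(Af*B + A*Bf) + kr*Cf   # d/dkf of f along the flow
    fr = C - kf*(Ar*B + A*Br) + kr*Cr      # d/dkr of f along the flow
    return [f, f, -f, ff, ff, -ff, fr, fr, -fr]

def jac_and_resid(kf, kr, y_obs, times=(0.5, 2.0)):
    y0 = [1.0, 1.0, 0.0] + [0.0]*6         # sensitivities start at zero
    sol = odeint(aug_rhs, y0, [0.0, *times], args=(kf, kr),
                 rtol=1e-10, atol=1e-12)
    C, Cf, Cr = sol[1:, 2], sol[1:, 5], sol[1:, 8]
    r = (np.asarray(y_obs) - C)/SIGMA      # scaled residuals
    J = -np.column_stack([Cf, Cr])/SIGMA   # J = d r / d (kf, kr)
    return J, r

J, r = jac_and_resid(0.3, 1.5, y_obs=[0.03, 0.05])
grad = J.T @ r                             # gradient of (1/2) r.r
newton = np.linalg.solve(J.T @ J, -grad)   # (Gauss-)Newton direction
```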

Any suggestions on how I can get the gradient and Newton directions? (Both math and coding suggestions are welcome.)

P.S. This is my first post; it is the first time I could not solve a problem with what I could find online and in this forum, which has been very helpful!