computational geometry – Boundary of points of a split bezier curve

Splitting a Bézier curve is easy, but I want to know whether splitting a Bézier curve always bounds the control points of the two new curves within the shape defined by the original control points.

For example, a quadratic Bézier curve has three control points forming a triangle. The image below shows a red Bézier curve with the green triangle formed from its points. If I split the Bézier in two at t = 0.5, I get two new blue triangles which are enclosed in/on the bounds of the green triangle. I can also split at t = 0.25 and get two yellow triangles forming the bounding shapes of the two new Bézier curves; again, these are enclosed in/on the bounds of the green triangle.

[figure: red Bézier curve with its green control triangle and the blue/yellow sub-curve triangles]

Back to the question: does this hold for Bézier curves of all degrees and the shapes formed by their control points? (Cubics would be bounded by quadrilaterals, etc.)
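The reason one expects "yes" here is de Casteljau subdivision: every control point of the two halves is produced by repeated convex combinations of the original control points, so for any degree and any split parameter in [0, 1] both halves lie inside the convex hull of the original control polygon. A small Python sketch of this (my own illustration, not code from the question):

```python
import numpy as np

def de_casteljau_split(pts, t):
    """Split a Bezier curve of any degree at parameter t (0 <= t <= 1).
    Every new control point is built by repeated convex combinations of
    the original points, so both halves stay inside the convex hull of
    the original control polygon."""
    pts = np.asarray(pts, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one de Casteljau step
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right[::-1])

# Quadratic example: split the curve of triangle (0,0), (1,2), (2,0) at t = 0.25
left, right = de_casteljau_split([(0, 0), (1, 2), (2, 0)], 0.25)
print(left)   # control points of the first sub-curve
print(right)  # control points of the second sub-curve
```

One caveat: the guarantee is containment in the *convex hull* of the original points. For cubics, the four control points may form a non-convex quadrilateral; the sub-curve points are still inside the convex hull, but not necessarily inside every quadrilateral you could draw through the four points.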

differential equations – Piecewise PDE Over 3 Regions with Continuity Boundary Conditions

I am using Wolfram Language for the first time for my research and could really use some help! I am trying to find the analytical solution of a PDE which has piecewise constants and a piecewise input.

The PDE models the lithium-ion concentration in the electrolyte solution of a battery (in one dimension, x), and it has three regions: the Cathode, Separator, and Anode. This plot is an example showing these regions and the concentration gradient that the PDE is predicting:

[figure: example concentration profile across the Cathode, Separator, and Anode regions]

(Ignore that the plot and axis above the plot don’t line up perfectly, it’s what I had available)

So with that background, this is the PDE:

$$-\frac{y_i \, I(s)}{A \, L_i \, a_i} + s\, c_e(x,s) - D_e \, \frac{\partial^2 c_e(x,s)}{\partial x^2} = 0$$

Which in code I have as:

pde = {-(yi J[s]/(A Li ai)) + s ce[x, s] - De*D[ce[x, s], {x, 2}] == 0}

With the general analytical solution found by:


But Li and ai have different constant values in the Anode vs. the Cathode region. Also important: I(s) = 0 in the separator region. So I tried to tackle this as 3 separate PDEs rather than a single piecewise PDE, with the intention of stitching the PDEs together at the end based on their boundary conditions (so 1 PDE for the anode region, 1 for the separator, and 1 for the cathode). The boundary conditions of the PDEs are:

[figure: the boundary-condition equations]

Essentially there’s 1 Neumann and 1 Dirichlet B.C. at each interior boundary, and 1 Neumann B.C. at each end of the battery cell. The problem I’m really running into is that the interior B.C.’s are all based off the solutions of the other PDEs, and I don’t know how to stitch them together.

I think I’ve imposed the exterior Neumann B.C.’s correctly…


pdeanode = {-(yn J/(A Ln an)) + s ceanode[x, s] - De*D[ceanode[x, s], {x, 2}] == 0}
icanode = {Derivative[1, 0][ceanode][0, s] == 0};
pdeanode /. pdeanodesoln[[1]] // Simplify


pdecathode = {-(yp J/(A Lp ap)) + s cecathode[x, s] - De*D[cecathode[x, s], {x, 2}] == 0}
iccathode = {Derivative[1, 0][cecathode][Lc, s] == 0};
pdecathode /. pdecathodesoln[[1]] // Simplify

Note: J is used in place of I(s) because I is interpreted by Mathematica as the imaginary unit.

And I defined the Separator region PDE as this, given I(s)=0 in the separator:

pdesep = {s cesep[x, s] - De*D[cesep[x, s], {x, 2}] == 0}
pdesep /. pdesepsoln[[1]] // Simplify

But I am not able to impose any boundary conditions on this at the moment. Or, more accurately, I don't know how to stitch these 3 PDEs together. Can somebody help me out? It'd be greatly appreciated!
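In case it helps to see the stitching spelled out: in the Laplace domain each region's equation is a linear constant-coefficient ODE, so each regional solution has the form $A_i \cosh(\lambda x) + B_i \sinh(\lambda x) + p_i$ with $\lambda = \sqrt{s/D_e}$ and $p_i$ the constant particular solution. The six exterior/interface conditions then form a 6×6 linear system for the six constants. A numeric Python sketch for one value of s (region layout and all parameter values here are made up for illustration; the actual signs and scalings of the question's BCs may differ):

```python
import numpy as np

# Illustrative values (not the question's): Laplace variable s, diffusivity De,
# region boundaries 0 < L1 < L2 < L3 (anode | separator | cathode).
s, De = 2.0, 0.5
L1, L2, L3 = 1.0, 1.5, 2.5
lam = np.sqrt(s / De)

# Constant particular solutions p_i = yi*J/(A*Li*ai*s) in anode/cathode, 0 in
# the separator; these are made-up numbers standing in for the real constants.
p1, p2, p3 = 0.8, 0.0, -0.6

# Unknowns (A1,B1,A2,B2,A3,B3) with c_i(x) = A_i cosh(lam x) + B_i sinh(lam x) + p_i
ch, sh = np.cosh, np.sinh
M = np.zeros((6, 6)); b = np.zeros(6)

M[0, 1] = lam                                          # c1'(0)  = 0 (exterior Neumann)
M[1, 4] = lam*sh(lam*L3); M[1, 5] = lam*ch(lam*L3)     # c3'(L3) = 0 (exterior Neumann)

M[2, 0:4] = [ch(lam*L1), sh(lam*L1), -ch(lam*L1), -sh(lam*L1)]                   # c1(L1) = c2(L1)
b[2] = p2 - p1
M[3, 0:4] = [lam*sh(lam*L1), lam*ch(lam*L1), -lam*sh(lam*L1), -lam*ch(lam*L1)]   # c1'(L1) = c2'(L1)

M[4, 2:6] = [ch(lam*L2), sh(lam*L2), -ch(lam*L2), -sh(lam*L2)]                   # c2(L2) = c3(L2)
b[4] = p3 - p2
M[5, 2:6] = [lam*sh(lam*L2), lam*ch(lam*L2), -lam*sh(lam*L2), -lam*ch(lam*L2)]   # c2'(L2) = c3'(L2)

A1, B1, A2, B2, A3, B3 = np.linalg.solve(M, b)

def c1(x): return A1*ch(lam*x) + B1*sh(lam*x) + p1
def c2(x): return A2*ch(lam*x) + B2*sh(lam*x) + p2
def c3(x): return A3*ch(lam*x) + B3*sh(lam*x) + p3
```

The same idea works symbolically in the Wolfram Language: DSolve each region for its general solution, then Solve the six interface/exterior equations for the six integration constants.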

compiler – Where is the boundary between things which can be statically typechecked, and those which must be typechecked dynamically?

I am brainstorming how to create a type system for a programming language, and what the compiler will do with the typing information. Here is what I have found, followed by the main question, which is whether I can represent more complicated "validations" as types somehow.

First, it seems there is a difference between the physical record structure of an object stored in memory, and a “type”. An object for example can be “typed” such that it is either an odd fibonacci number, or an email formatted string, let’s say. That is its “type” in some theoretical type system. But the physical record is either an integer or a string, which have actual ramifications on the physical memory. It’s like there is a “data type”, and a “validation type”.

Given that, a traditional “type” (“validation type”) is really a set of constraints over the underlying data type(s). Since it can be over many data types (integer or string in the last example), it is really over an abstract data type which encompasses all data types, the “base” data type. Then on top of this base data type, which is some sort of in-memory record, you have a set of constraints. These constraints can be over individual record fields, or over several fields in the record. The constraints are boolean-resolving functions.

But this poses a problem for "inspecting the type". If we want to see the type of our "odd fibonacci number or email string" object, and it is stored as a set of constraints (isOdd, isFibonacci, or isEmail), then you have to inspect these constraint functions to determine the type. Still, these functions would be useful for a type-checker, because the type-checker could check all the constraints of the type and use them to infer other types or do typechecking (I haven't gotten far enough to know how that's implemented yet). Ideally, too, they would be useful for a compile-time (static) type checker.

You could take the boolean constraint functions further so they are extremely complex async processes which run to determine whether the data structure is the correct type. For example, say we had a type "if it's a full moon then set the time to midnight, otherwise set the time to regular time". Something crazy. Then it would make an HTTP request (a non-boolean function) to determine the full-moon status, and then check the value based on that status. There can be so many async non-boolean functions embedded in the ultimate "boolean" constraint function that it is hard to tease them apart and build a DSL that is "only boolean logic functions". In practice, the line between a pure boolean logic function and an imperative function or set of assembly instructions starts to blur.

So a “type” is really a set of constraints on some underlying data type in memory so to speak.

With this in mind, can all of these sorts of things be done at compile time? I don’t think the moon example could be done, it seems some types will have to be checked at runtime dynamically. So then, what are the types of type-checking that can be done statically?

Are languages like Haskell or Coq able to somehow solve this problem of resolving these constraints at compile time? Or is there a fundamental limit to how far type systems can take you? I would like to do as much static compile-time type checking in a custom programming language as possible. But I keep thinking of these complex async function examples of constraints that might pop up in a “type”, and it throws the whole system off. I feel like types must be severely restricted and limited in what they can do, and everything else must be done at runtime. But I’m not sure.
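A rough way to draw the line: a constraint can be discharged statically exactly when it is a pure, terminating predicate over information available at compile time (literal values, or facts the type system tracks); anything that depends on runtime input or external effects, like the full-moon HTTP call, must become a runtime check. Here is a minimal Python sketch of the "type as a set of boolean constraints" idea from the text (names and predicates are my own illustrations):

```python
# A "validation type" = a base data type plus pure boolean constraints.
# Pure, total predicates over values known at compile time can be checked
# statically; predicates needing runtime data or I/O cannot.

def is_odd(n):
    return isinstance(n, int) and n % 2 == 1

def is_fibonacci(n):
    if not isinstance(n, int) or n < 0:
        return False
    a, b = 0, 1
    while b < n:
        a, b = b, a + b
    return n == a or n == b

def is_email(s):
    return isinstance(s, str) and "@" in s   # toy check, not RFC 5322

# The union type from the text: "odd Fibonacci number or email string".
def odd_fib_or_email(v):
    return (is_odd(v) and is_fibonacci(v)) or is_email(v)

print(odd_fib_or_email(13))        # True: 13 is an odd Fibonacci number
print(odd_fib_or_email("a@b.io"))  # True: email-shaped string
print(odd_fib_or_email(8))         # False: Fibonacci but even
```

Of these, is_odd applied to an integer literal is trivially static; is_fibonacci is static in principle (pure and terminating) but needs a checker that can evaluate or reason about the loop, which is where dependently-typed languages like Coq come in; the full-moon constraint is irreducibly dynamic. That is essentially the boundary: decidable, effect-free propositions versus everything else.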

differential equations – Diffusion System with Boundary Conditions

My original question was here. Thank you to those who took the time to answer. Essentially I was trying to replicate the concentration profiles for a simple cyclic voltammetry experiment. My code seems to match the boundary conditions but the answer doesn’t match the one in the book and is obviously incorrect at places. I’ll provide my code at the end but first I’d like to provide the complete system (from Compton’s Understanding Voltammetry, chapter 4):

Diffusion for species A and B

The boundary conditions are:

[figure: the boundary conditions]

The electrochemical rate constants and the potential equations are:
[figure: the electrochemical rate-constant equations]

[figure: the potential equations]

The concentration profile should come as follows for various values of potential; and the rest of the parameters are given in the caption:

[figure: concentration profiles at various potentials; parameters given in the original caption]

Now for my code. The parameter large is to model infinity:


(*Experimental Parameters*)
cAbulk := 1*10^-3;
k0 := 10^-5;
rtbyf := 25.7 10^-3 (*volt*);
dA := 10^-5; dB := 10^-5;
\[Alpha] := 0.5; \[Beta] := 0.5; T = 298; ef0 = 0;
ts := 1; \[Nu] := -1; e1 := 0.5;

(*Equations for Reaction and other governing equations*)
eqn1 = D[cA[x, t], t] - Div[dA Grad[cA[x, t], {x}], {x}];
eqn2 = D[cB[x, t], t] - Div[dB Grad[cB[x, t], {x}], {x}];

e[t_] := Piecewise[{{e1 + \[Nu] t, 0 <= t <= ts}, {e1 + 2 \[Nu] ts - \[Nu] t, ts <= t <= 2 ts}}]
kc := k0 Exp[-\[Alpha]/rtbyf (e[t] - ef0)]
ka := k0 Exp[\[Beta]/rtbyf (e[t] - ef0)]

{ncA, ncB} = NDSolveValue[{eqn1 == NeumannValue[kc/dA cA[x, t] - ka/dA cB[x, t], x == 0 && (t >= 0)],

   eqn2 == NeumannValue[-kc/dB cA[x, t] + ka/dB cB[x, t], x == 0 && (t >= 0)],

(*Presentation of Results*)
Plot3D[{ncA[x, t]/cAbulk, ncB[x, t]/cAbulk}, {t, 0, 2 ts}, {x, 0, 0.1 large}, PlotRange -> All, Boxed -> False, Axes -> True, AxesLabel -> Automatic,
 MeshFunctions -> {#1 &}, MeshStyle -> Gray, LabelStyle -> Directive[Bold, Blue, 12], PlotStyle -> Opacity[0.5], WorkingPrecision -> 20, PlotLegends -> {cA, cB}],
Plot3D[{ncA[x, t]/cAbulk}, {t, dt, dt + 0.03}, {x, 0, 0.1 large}, PlotStyle -> Red, PlotRange -> All, Boxed -> False, Axes -> True, AxesLabel -> Automatic, MeshFunctions -> {#1 &}, LabelStyle -> Directive[Bold, Blue, 12], WorkingPrecision -> 20]

Plot[{ncA[x, dt]/cAbulk, ncB[x, dt]/cAbulk}, {x, 0, 0.1 large}, PlotRange -> {0, 2}, PlotLegends -> {cA, cB}]},
{dt, 0, 2 ts}, ControlPlacement -> Top]

The output below, however, has the bulk-normalized concentration of species A exceeding 1 in places, so there is obviously something wrong. Moreover, the 2D plot shows the concentration dropping below 0 at intervals of 0.2 mm. The finite element solution provided by Tim Laska doesn't do either of those things.

[figure: the resulting plots]

So where did I go wrong?

Thanks in advance.
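One sanity check that may help localize the bug: with dA = dB and flux boundary conditions where whatever leaves species A enters species B, the sum cA + cB satisfies a plain diffusion equation with zero net flux, so cA(x,t) + cB(x,t) should stay exactly at cAbulk. A normalized cA above 1 therefore forces cB below 0. A small explicit finite-difference sketch (my own toy parameters and frozen rate constants, not the question's FEM setup) demonstrating the conservation property:

```python
import numpy as np

# Toy 1D model: A <-> B at the electrode (x = 0), bulk at x = L.
# Illustrative parameters, not the question's values.
D, cbulk = 1e-5, 1.0
L, N = 0.01, 101
dx = L / (N - 1)
dt = 0.2 * dx**2 / D            # explicit-scheme stability limit is 0.5*dx^2/D
kc, ka = 2e-4, 1e-4             # rate constants frozen in time for this sketch

cA = np.full(N, cbulk)          # species A starts at bulk concentration
cB = np.zeros(N)                # species B starts absent

for _ in range(2000):
    flux = kc * cA[0] - ka * cB[0]          # A consumed, B produced at x = 0
    cA[1:-1] += dt * D * (cA[2:] - 2*cA[1:-1] + cA[:-2]) / dx**2
    cB[1:-1] += dt * D * (cB[2:] - 2*cB[1:-1] + cB[:-2]) / dx**2
    # electrode boundary via ghost-node form: D dcA/dx = +flux, D dcB/dx = -flux
    cA[0] = cA[1] - dx * flux / D
    cB[0] = cB[1] + dx * flux / D
    # far boundary stays pinned at bulk: cA[-1] = cbulk, cB[-1] = 0

# Conservation: cA + cB stays at cbulk everywhere, so cA/cbulk can never exceed 1.
print(np.max(np.abs(cA + cB - cbulk)))
```

If your FEM solution violates cA + cB = cAbulk, the first thing I would check is the sign and 1/d scaling of the two NeumannValue terms.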

co.combinatorics – Okada-Schur functions and the Martin boundary of the Young-Fibonacci lattice

This question is related to three earlier posts addressing properties of the Young-Fibonacci lattice $\Bbb{YF}$, namely:

Differential posets, the Plancherel state $\varphi_\mathrm{P}$, and minimality

Young-Fibonacci tableaux, content, and the Okada algebra

Infinite tridiagonal matrices and a special class of totally positive sequences

According to the results of Goodman and Kerov, the parameter space $\Omega$ (as a set) of the Martin boundary $E$ of the Young-Fibonacci lattice $\Bbb{YF}$
consists of: (1) an outlier point $\mathrm{P}$ together with (2) all pairs $(w,\beta)$ where $0 < \beta \leq 1$
is a real parameter and $w = \cdots a_4 a_3 a_2 a_1$ is an infinite Fibonacci word with
$2$'s occurring at positions $\dots, d_4, d_3, d_2, d_1$ when read from right to left
such that $\sum_{i \geq 1} 1/d_i < \infty$. Note that the position
of the $2$ in a Fibonacci word of the form $w = u2v$ is $1 + |v|$, where $|v|$
denotes the sum of the digits of the suffix $v$, otherwise called
the length of $v$. The reader should consult Goodman and Kerov's paper for a description of $\Omega$'s topology.

To each $\omega \in \Omega$ the corresponding point $\varphi_\omega \in E$
is a non-negative, normalised harmonic function on $\Bbb{YF}$. Under this correspondence $\varphi_\mathrm{P}$
is the Plancherel state, i.e. for $u \in \Bbb{YF}$

$$\varphi_\mathrm{P}(u) := \frac{1}{|u|!} \, \mathrm{dim}\big( \emptyset, u \big)$$

where $\dim(u,v)$ denotes the number of saturated chains $(u_0 \lhd \cdots \lhd u_n)$ in $\Bbb{YF}$ starting at $u_0 = u$ and ending at $u_n = v$.
Recently Vsevolod Evtushevsky (see arXiv:2012.07447 and arXiv:2012.08107)
has announced a proof showing that the Martin boundary $E$ coincides with its minimal boundary. If this is true, then any positive, normalised harmonic function
$\varphi: \Bbb{YF} \longrightarrow \Bbb{R}$ should be expressed as

$$\varphi(u) = \int_\Omega dM_\varphi(\omega) \, \varphi_\omega(u)$$

where $dM_\varphi$ is
a measure (morally a boundary condition) uniquely determined by $\varphi$.

There is an alternative supply of normalised harmonic functions on the Young-Fibonacci lattice: I'll call them Okada-Schur functions,
but strictly speaking they are commutative versions of the polynomials defined in two non-commutative variables as introduced by Okada (Goodman and Kerov call them clone symmetric functions):

Let ${\bf y} = (y_1, y_2, y_3, \dots)$ be a sequence of real numbers.
The Okada-Schur function $\sigma_{\bf y}: \Bbb{YF} \longrightarrow \Bbb{R}$
associated to the sequence ${\bf y}$ is defined recursively (with respect to length) by

$$\sigma_{\bf y}(u) =
\begin{cases}
T_k({\bf y})
& \text{if $u = 1^k$ for some $k \geq 0$} \\[1.5ex]
S_k \big({{\bf y} + |v|} \big) \cdot \sigma_{\bf y}(v)
& \text{if $u = 1^k 2 v$ for some $k \geq 0$}
\end{cases}$$

$$T_\ell ({\bf y}) =
\underbrace{\det \begin{pmatrix}
1 & y_1 & 0 & \cdots\\
1 & 1 & y_2 &\\
0 & 1 & 1 & \\
\vdots & & & \ddots
\end{pmatrix}}_{\text{$\ell \times \ell$ tridiagonal matrix}}
\qquad
S_{\ell - 1} ({\bf y}) =
\underbrace{\det \begin{pmatrix}
y_1 & y_2 & 0 & \cdots\\
1 & 1 & y_3 &\\
0 & 1 & 1 & \\
\vdots & & & \ddots
\end{pmatrix}}_{\text{$\ell \times \ell$ tridiagonal matrix}}$$

for integers $\ell \geq 1$, and
where we employ the notation ${\bf y} + r := (y_{1+r}, \, y_{2+r}, \, y_{3+r}, \, \dots)$ for any integer $r \geq 0$.
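Assuming $T_\ell$ and $S_{\ell-1}$ denote the determinants of the displayed $\ell \times \ell$ tridiagonal matrices (as the underbrace labels suggest), $T_\ell$ obeys the standard three-term recurrence for tridiagonal determinants, which makes $\sigma_{\bf y}$ cheap to evaluate. A quick numerical sketch (my own code) checking the recurrence against an explicit determinant, for the sequence in Remark 1 below:

```python
import numpy as np

def T_matrix(y, ell):
    """The ell x ell tridiagonal matrix defining T_ell: ones on the main
    and sub-diagonal, y_1, y_2, ... on the super-diagonal (y is 0-indexed)."""
    M = np.eye(ell)
    for i in range(ell - 1):
        M[i, i + 1] = y[i]
        M[i + 1, i] = 1.0
    return M

def T_det(y, ell):
    """Three-term recurrence for the tridiagonal determinant:
    T_l = T_{l-1} - y_{l-1} * T_{l-2}, with T_0 = T_1 = 1."""
    t_prev, t = 1.0, 1.0
    for l in range(2, ell + 1):
        t_prev, t = t, t - y[l - 2] * t_prev
    return t

y = [1.0 / (i + 2) for i in range(12)]   # y_i = 1/(i+1), i.e. (1/2, 1/3, 1/4, ...)
for ell in range(2, 8):
    print(ell, T_det(y, ell), np.linalg.det(T_matrix(y, ell)))
```

For this particular sequence the recurrence gives $T_\ell = 1/\ell!$ (e.g. $T_4 = 1/24$), numerically at least, which is the factorial normalisation one would hope for given Remark 1.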
Okada proved that $\sigma_{\bf y}$ is a normalised harmonic function for any infinite sequence ${\bf y}$. It is not
clear to me what the necessary and sufficient conditions are for $\sigma_{\bf y}$ to be positive, let alone non-negative for that matter.
There is, however, at least one case in which $\sigma_{\bf y}$ is
positive, namely:

Remark 1:
Let ${\bf y} = \big(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \big)$;
then $\sigma_{\bf y} = \varphi_\mathrm{P}$ (i.e. the Plancherel state).


Remark 2: If ${\bf y} = (y_1, y_2, y_3, \dots)$ is any sequence of real numbers for which $\sigma_{\bf y}$ is positive,
then $\sigma_{{\bf y} + r}$ will be positive for any integer $r \geq 0$. (This basically follows from some of the observations made in Infinite tridiagonal matrices and a special class of totally positive sequences.)

Question 1: Suppose ${\bf y}$ is a sequence such that $\sigma_{\bf y}$
is positive. Under which circumstances will $\sigma_{\bf y} \in \Omega$?
My guess is only when ${\bf y} = \big(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \big)$, but what about ${\bf y} =
\big(\frac{1}{r+2}, \frac{1}{r+3}, \frac{1}{r+4}, \dots \big)$
for $r \geq 1$?

Question 2: Suppose instead that $\sigma_{\bf y}$ is positive but $\sigma_{\bf y} \notin \Omega$.
What is the unique measure $dM_{\bf y}$ on the Martin boundary $\Omega$ for which

$$\sigma_{\bf y}(u) = \int_\Omega dM_{\bf y}(\omega) \, \varphi_\omega(u) \ ?$$

Again, what about ${\bf y} = \big(\frac{1}{r+2}, \frac{1}{r+3}, \frac{1}{r+4}, \dots \big)$ for $r \geq 1$?

thanks, ines.

eigenvalues – Enforcing Vanishing Dirichlet Boundary Conditions While Computing Spectrum via Arnoldi Method

I want to compute an estimate of the spectrum for a free particle inside a tetrahedral box. Thus I computed the discretized Laplacian for the interior of a tetrahedron and its boundary. Using this discretized Laplacian, which for a given level of refinement of my simplicial approximation to a tetrahedron is a matrix, I used the Arnoldi method to compute its eigenvalues and eigenvectors. However I notice that the eigenvectors I obtain don’t vanish for the vertices on the boundary of my simplicial complex. How do I force the Arnoldi method to compute eigenvectors that obey vanishing Dirichlet boundary conditions for the vertices which make up the boundary of my simplicial approximation to a tetrahedron? Is there a special package I can download?
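The standard way to get vanishing Dirichlet values out of an Arnoldi/Lanczos eigensolver is not to constrain the iteration but to restrict the matrix: delete the rows and columns belonging to boundary vertices, solve the interior eigenproblem, then pad the eigenvectors with zeros at the boundary vertices. A 1D sketch with SciPy's ARPACK wrapper (an interval instead of a tetrahedron, but the bookkeeping is identical):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Discretized 1D Laplacian on [0, 1] with n nodes, including both boundary nodes.
n = 50
h = 1.0 / (n - 1)
lap = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
               [-1, 0, 1]).tocsr() / h**2

# Dirichlet u = 0 on the boundary: keep only interior rows/columns.
boundary = np.array([0, n - 1])
interior = np.setdiff1d(np.arange(n), boundary)
lap_int = lap[interior, :][:, interior]

# Arnoldi/Lanczos (ARPACK) on the interior operator only; shift-invert
# about 0 targets the smallest eigenvalues.
vals, vecs_int = eigsh(-lap_int, k=4, sigma=0, which="LM")

# Re-embed: the eigenvectors vanish on the boundary by construction.
vecs = np.zeros((n, 4))
vecs[interior, :] = vecs_int

print(np.sort(vals))   # approximately (k*pi)^2 for k = 1..4
```

For the tetrahedral mesh, `boundary` becomes the list of boundary-vertex indices of the simplicial complex; no special package is needed, since the restriction is plain index slicing before the Arnoldi call.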


A sufficient condition for being the boundary of one’s convex hull?

Let $A \subset \mathbb{R}^n$ be such that:

  1. every non-zero linear functional is maximized by a unique point of $A$;

  2. every point of $A$ is a point where some linear functional achieves its maximum over $A$ (i.e., every point of $A$ is an exposed point, though normally exposed points are only defined for convex sets);

  3. the boundary of $A$'s convex hull is contained in the closure of $A$.

Does it follow that $A$ is closed, and hence equal to the boundary of its convex hull?

It doesn't follow in the absence of (3). Let $A = \{ e^{i\theta} : 0 \le \theta < \pi \} \cup \{ e^{i\theta} - i : \pi \le \theta < 2\pi \} \subseteq \mathbb{C} = \mathbb{R}^2$.

differential equations – NDSolve to automate shooting method (or others) for free boundary value problem?

I am trying to solve the following system of differential equations using NDSolve:

$$\begin{align}
f'(s) &= \frac{f(s)}{g(s)-s}\\
g'(s) &= \frac{g(s)}{f(s)-s}\\
f(0) &= 0\\
g(0) &= 0\\
f(\overline{s}) &= 1\\
g(\overline{s}) &= 2
\end{align}$$

The solution is a Bayes-Nash equilibrium bidding strategy in a first-price auction, as described in a paper by Hubbard and Paarsch.

The right boundary $\overline{s}$ is not necessarily known a priori. There is also a singularity at the left boundary, and the system is overdetermined, but solutions do exist.

Actually, this represents a special case with an analytic solution: $\overline{s} = 2/3$, and then $f(s) = \frac{2s}{1+\frac{3}{4}s^2}$, $g(s) = \frac{2s}{1-\frac{3}{4}s^2}$. But I want to try to solve it numerically.

One technique Hubbard and Paarsch advocate is a shooting technique: don't specify the left boundary; take a guess for $\overline{s}$, specify the right boundary, and solve the reverse initial value problem. Then decrease $\overline{s}$ if the solutions diverge, and increase it if they end up too far from $f(0)=0, g(0)=0$.

I can do this trial and error by hand and it works. It would be easy to wrap up NDSolve in a root-finding algorithm to do this.

maxBid = 2/3;
soln = NDSolve[{
   {f'[s] == f[s]/(g[s] - s),
    g'[s] == g[s]/(f[s] - s)},
   {f[maxBid] == 1, g[maxBid] == 2}},
  {f, g}, {s, 0, maxBid}] (* gives nearly correct solution *)

However, I’m wondering if there is a way to get NDSolve to handle all this automatically for me. Can the built-in "Shooting" method be used for this? Do any differential equation wizards know of a better way to attack this problem?
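For what it's worth, the shooting structure is easy to see as a reverse-IVP residual, sketched here in Python/SciPy rather than the Wolfram Language (illustrative only; `shoot` and `s_end` are my own names):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(s, y):
    f, g = y
    return [f / (g - s), g / (f - s)]

def shoot(sbar, s_end=1e-3):
    """Integrate the reverse IVP from the guessed right boundary sbar,
    where f = 1 and g = 2, down toward s = 0; return f + g near 0.
    The correct sbar drives this residual to 0 (since f(0) = g(0) = 0)."""
    sol = solve_ivp(rhs, (sbar, s_end), [1.0, 2.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] + sol.y[1, -1]

# At the known analytic value sbar = 2/3 the residual is already small
# (both f and g behave like 2s near 0, so it tends to about 4*s_end):
print(shoot(2/3))

# The trajectory also matches the closed form f(s) = 2s/(1 + 3 s^2/4):
sol = solve_ivp(rhs, (2/3, 1e-3), [1.0, 2.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
print(sol.sol(0.3)[0], 0.6 / (1 + 0.75 * 0.09))
```

Wrapping `shoot` in a root-finder (brentq, or FindRoot on the Mathematica side) recovers $\overline{s}$. Alternatively, I believe you can rescale $s = \overline{s}\,\tau$ onto $\tau \in [0,1]$ and treat $\overline{s}$ as an extra unknown parameter, so that NDSolve sees a fixed domain with one more boundary condition than the ODE order.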

finite element method – How to refine a boundary mesh with MeshRefinementFunction?

MeshRefinementFunction works well to refine elements in a defined spatial domain (here x>0):

f2d = Function[{vertices, area},
   Block[{x, y},
    {x, y} = Mean[vertices];
    If[x > 0 && area > 0.001, True, False]]];
m = ToElementMesh[Disk[], MeshRefinementFunction -> f2d];

resulting in:

It also works for 3d meshes:

f3d3 = Function[{vertices, volume},
   Block[{x, y, z},
    {x, y, z} = Mean[vertices];
    If[x > 0 && volume > 0.0001, True, False]]];
m = ToElementMesh[Sphere[], MeshRefinementFunction -> f3d3];
m["Wireframe"["MeshElement" -> "MeshElements"]]

However, when I try that for a 2d surface mesh embedded in 3 dimensions, I do not get any refinement:

f3d2 = Function[{vertices, area},
   Block[{x, y, z},
    {x, y, z} = Mean[vertices];
    If[x > 0 && area > 0.0001, True, False]]];
bm = ToBoundaryMesh[Sphere[], MeshRefinementFunction -> f3d2];

What am I doing wrong, or does Mathematica not support spatial refinement for boundary meshes (I am using version 12.1)? Is there any workaround? I have also tried