Integral variable substitution

I have a problem, and I do not understand why my solution doesn't work.

Here is the setup: I integrate the function

func = Integrate[z*p, {z, 0, 1}, {p, 0, 1}]
(* 1/4 *)
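
As a quick sanity check, the integral factors:

$$\int_0^1 \int_0^1 z\,p\,dz\,dp = \left(\int_0^1 z\,dz\right)\left(\int_0^1 p\,dp\right) = \frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}.$$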

However, now I want to replace

func = (z*p) /. z -> dx*Exp[x*z1 + y*p1] /. p -> dy*Exp[x*z1 + y*p1]

where dx and dy symbolise the derivatives with respect to x and y, respectively. I rename z -> z1 and p -> p1 because Mathematica cannot handle the replacement correctly otherwise, but keep in mind that they are the same variables. Now, by taking the derivatives and the limits x -> 0 and y -> 0, you should get the original function back.

But before that, I want to integrate:

eval = Integrate[func, {z1, 0, 1}, {p1, 0, 1}]
Limit[D[D[Coefficient[eval, dx*dy], y], x], {y -> 0, x -> 0}]
(* 1 *)

So, as you can see, I get the wrong result (1 instead of the expected 1/4), and I do not know where the error is. It is very important that I take the derivative and the limit after the integration step.

3-variable constrained optimization with the substitution method

While studying Implicit Functions I stumbled upon an exercise that stated:

Find all the extrema of the function $f(x,y,z)=x+2y+3z$ under the constraints $x>0,\ y>0,\ z>0$ and $xyz-1=0$ using:

  1. Lagrange multipliers
  2. Substitution method

Using Lagrange multipliers I found the local extremum at $(x,y,z)=\left(\sqrt[3]{6},\ \frac{\sqrt[3]{3}}{2^{2/3}},\ \frac{\sqrt[3]{2}}{3^{2/3}}\right)$, with value $\approx 5.45$.

When I started approaching the problem with the substitution method, I quickly realized that I had never used this method for a function of more than two variables. I searched the web extensively for examples applying this method to three or more variables, to no avail, as most examples only employ Lagrange multipliers; even Wikipedia says that this method is mostly used “for simple functions of two variables”.

The major problem I face is that, with two variables, I simply solve the constraint equation for one variable, and after substituting, the problem reduces to a simple one-variable calculus extremum exercise. After trying to do the same with three variables, the problem was only reduced to two variables. I thought about applying the method once more, which led me to no result.
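
Concretely, the reduction I reach is the following: solving the constraint for $z$ gives $z = \frac{1}{xy}$, so the problem becomes finding the extrema of

$$g(x,y) = x + 2y + \frac{3}{xy}, \qquad x > 0,\ y > 0,$$

and it is this two-variable problem that I do not know how to push further by substitution alone.
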
Any help on how to approach the problem, or even specific literature covering this case, would be appreciated.

recurrence relation – Clarifying statements involving asymptotic notations in the solution of $T(n) = 3T(\lfloor n/4 \rfloor) + \Theta(n^2)$ using recursion tree and substitution

Below is a problem worked out in Introduction to Algorithms by Cormen et al.

(I am not having a problem with the proof itself; I only want to clarify the meaning conveyed by a few statements in the text while solving the recurrence. The statements are given as an ordered list at the end, simply because I want to master the text.)

$$T(n) = 3T(\lfloor n/4 \rfloor) + \Theta(n^2)$$

Now the authors first attempt to find a good guess for the solution of the recurrence using the recursion-tree method, and for that they allow some sloppiness and assume $T(n)=3T(n/4) + cn^2$.

Recursion Tree

Though the above recursion tree is not strictly required for my question, I felt like including it to make the background a bit clearer.

The guessed candidate is $T(n)=O(n^2)$. The authors then prove it using the substitution method.

In fact, if $O(n^2)$ is indeed an upper bound for the recurrence (as we shall verify in a moment), then it must be a tight bound. Why? The first recursive call contributes a cost of $\Theta(n^2)$, and so $\Omega(n^2)$ must be a lower bound for the recurrence. Now we can use the substitution method to verify that our guess was correct, that is, $T(n)=O(n^2)$ is an upper bound for the recurrence $T(n) = 3T(\lfloor n/4 \rfloor) + cn^2$. We want to show that $T(n)\leq d n^2$ for some constant $d > 0$.

Now there are a few things which I want to get clarified…

(1) “if $O(n^2)$ is indeed an upper bound for the recurrence”. Here the sentence (probably) means: $\exists$ a function $f(n) \in O(n^2)$ such that $T(n)\in O(f(n))$.

(2) “$\Omega(n^2)$ must be a lower bound for the recurrence”. Here the sentence (probably) means: $\exists$ a function $f(n) \in \Omega(n^2)$ such that $T(n)\in \Omega(f(n))$.

(3) “$T(n)=O(n^2)$ is an upper bound for the recurrence $T(n) = 3T(\lfloor n/4 \rfloor) + cn^2$”. This sentence can be interpreted as follows: assume that $T'(n) = 3T'(\lfloor n/4 \rfloor) + cn^2$; then $\exists$ a function $T(n) \in O(n^2)$ such that $T'(n)\in O(T(n))$.

(4) “$T(n)\leq d n^2$ for some constant $d > 0$”. We are using induction to verify the definition of Big-Oh; I sketch the inductive step below.
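
For concreteness, here is how I understand the inductive step (my own reconstruction of the standard argument, not quoted from the book): assume $T(m) \leq dm^2$ for all $m < n$; then

$$T(n) = 3T(\lfloor n/4 \rfloor) + cn^2 \leq 3d\lfloor n/4 \rfloor^2 + cn^2 \leq \frac{3}{16}dn^2 + cn^2 \leq dn^2,$$

where the last inequality holds whenever $d \geq \frac{16}{13}c$.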

I feel that the authors could simply have written that $T(n)$ is upper-bounded by $n^2$ and lower-bounded by $n^2$, or could simply have written $T(n) = O(n^2)$ and $T(n)=\Omega(n^2)$. Did the authors use the style of statements pointed out in $(1),(2),(3)$ just for a clearer explanation, or is there some extra meaning conveyed that I am missing?

Substitution of monomorphic type variables in generalized Hindley–Milner

I am trying to understand the constraint-based Hindley–Milner type inference algorithm described in the Generalizing Hindley-Milner paper. The function $\text{S}\small{\text{OLVE}}$ is defined as follows:

$$
\begin{array}{l}
\text{S}\small{\text{OLVE}} :: Constraints \to Substitution \\
\text{S}\small{\text{OLVE}}(\emptyset) = (\,) \\
\text{S}\small{\text{OLVE}}(\{ \tau_1 \equiv \tau_2 \} \cup C) = \text{S}\small{\text{OLVE}}(\mathcal{S} C) \circ \mathcal{S} \\
\quad \quad \quad \text{where } \mathcal{S} = \text{mgu}(\tau_1, \tau_2) \\
\text{S}\small{\text{OLVE}}(\{ \tau_1 \leq_M \tau_2 \} \cup C) = \text{S}\small{\text{OLVE}}(\{ \tau_1 \preceq \text{generalize}(M, \tau_2) \} \cup C) \\
\quad \quad \quad \text{if } (\text{freevars}(\tau_2) - M) \cap \text{activevars}(C) = \emptyset \\
\text{S}\small{\text{OLVE}}(\{ \tau \preceq \sigma \} \cup C) = \text{S}\small{\text{OLVE}}(\{\tau \equiv \text{instantiate}(\sigma)\} \cup C) \\
\end{array}
$$

Most of this is clear, but I am confused about how substitution is defined for the monomorphic set $M$. The paper explains that

For implicit instance constraints, we make note of the fact that the substitution also has to be applied to the sets of monomorphic type variables.

$$
\mathcal{S}(\tau_1 \leq_M \tau_2) =_{def} \mathcal{S} \tau_1 \leq_{\mathcal{S} M} \mathcal{S} \tau_2
$$

but I don’t find any details of how $\mathcal{S} M$ is defined. Based on Example 3, I think we should get something like:

$$
\text{S}\small{\text{OLVE}}(\{\tau_4 \leq_{\{ \tau_1 \}} \tau_3,\ \text{Bool} \rightarrow \tau_3 \equiv \tau_1 \}) \\
= \text{S}\small{\text{OLVE}}(\{\tau_4 \leq_{\{ \tau_3 \}} \tau_3 \}) \circ (\tau_1 := \text{Bool} \rightarrow \tau_3)
$$

In this step, unifying $\text{Bool} \rightarrow \tau_3$ and $\tau_1$ gives a substitution $\mathcal{S} = (\tau_1 := \text{Bool} \rightarrow \tau_3)$, and $M = \{ \tau_1 \}$, and so apparently $\mathcal{S} \{ \tau_1 \} = \{ \tau_3 \}$, but how do we arrive at that? Maybe there is something obvious I have overlooked here.
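
For what it is worth, the reading I am currently guessing at is: apply the substitution to every variable in $M$ and keep the free type variables of the results, i.e. $\mathcal{S} M = \bigcup_{v \in M} \text{freevars}(\mathcal{S} v)$. A small Python sketch of that guess (entirely my own encoding, not from the paper):

def free_type_vars(ty):
    # Type variables are strings like 't1'; arrows are ('->', dom, cod);
    # constants are ('con', name) and have no free variables.
    if isinstance(ty, str):
        return {ty}
    if ty[0] == '->':
        return free_type_vars(ty[1]) | free_type_vars(ty[2])
    return set()

def subst_mono_set(subst, M):
    # The guessed rule: S M = union of freevars(S v) over all v in M.
    out = set()
    for v in M:
        out |= free_type_vars(subst.get(v, v))
    return out

S = {'t1': ('->', ('con', 'Bool'), 't3')}  # S = (tau1 := Bool -> tau3)
print(subst_mono_set(S, {'t1'}))           # {'t3'}

This does reproduce $\mathcal{S} \{ \tau_1 \} = \{ \tau_3 \}$ in the example above, but I cannot find this spelled out in the paper.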

ct.category theory – Finitary monads on $Set$ are substitution monoids. Finitary monads on $Set_*$ are…?

It is well known that the category of functors $F : Fin \to Set$ is equivalent to the category of finitary endofunctors $Set \to Set$; in this equivalence, finitary monads correspond to what are called substitution monoids on $(Fin,Set)$, i.e. to monoids with respect to the monoidal structure
$$
F \diamond G = m \mapsto \int^n Fn \times G^{\ast n}m \tag{$\star$}
$$
where $G^{\ast n}$ is the functor
$$
m \mapsto \int^{p_1,\dots, p_n} Gp_1 \times \dots \times Gp_n \times Fin\Big(\sum p_i,\ m\Big).
$$

More precisely, the equivalence $(Set,Set)_{\omega} \cong (Fin,Set)$ can be promoted to a monoidal equivalence, and composition of endofunctors corresponds to substitution of presheaves in the following sense: let $J : Fin \to Set$ be the inclusion functor; then
$$
Lan_J(F\diamond G) \cong Lan_J F \circ Lan_J G \tag{$\heartsuit$}
$$

and
$$
(S\circ T) J \cong SJ \diamond TJ \tag{$\clubsuit$}
$$
for two finitary endofunctors $S,T : Set \to Set$. (Kan extending along $J$, and precomposing an endofunctor of $Set$ with $J$, is what defines the equivalence.)

I would like to prove exactly the same theorem, replacing the cartesian category of sets everywhere with the monoidal category of pointed sets and the smash product, but I keep failing.

The equivalence of categories
$$
(Fin_*,Set_*)\cong (Set_*,Set_*)_\omega
$$
remains true, and this equivalence must induce an equivalence between the category of finitary monads on pointed sets and the category of suitable “pointed substitution” monoids, obtained from the iterated convolution on $(Fin_*,Set_*)$ as
$$
F\diamond' G = m \mapsto \int^n Fn \land G^{\ast n}m
$$
where $\land$ is the smash product, and $G^{\ast n}$ iterates the convolution on $(Fin_*, Set_*)$ induced by the coproduct on the domain and the smash product on the codomain:
$$
G^{\ast n}m = \int^{p_1,\dots,p_n} Gp_1 \land \dots \land Gp_n \land Fin_*\Big(\bigvee p_i,\ m\Big)
$$
where $\bigvee p_i$ is the coproduct of pointed sets, joining all the sets along their basepoint.

This should be the perfect analogue of $(\star)$.

However, when I try to prove the isomorphisms $\heartsuit$ and $\clubsuit$, I find that it is not true that $Lan_J(F\diamond G) \cong Lan_J F \circ Lan_J G$. I am starting to suspect that the generalisation is false as I have stated it, or that it is true in a more fine-tuned sense.

So, I kindly ask for your help:

To what kind of monoids on $(Fin_*,Set_*)$ do finitary monads on pointed sets correspond?

Integration – How do I convert $I = \int_0^\pi \int_0^\pi |\cos(x+y)|\,dx\,dy$ using the substitution $x = u-v$, $y = v$?

How do I convert $I = \int_0^\pi \int_0^\pi |\cos(x+y)|\,dx\,dy$ to $\int_0^{2\pi} \int_0^\pi |\cos u|\,du\,dv$ using the substitution $x = u-v$, $y = v$?

Please explain with a figure. I have already seen the value of $\int_0^\pi \int_0^\pi \vert\cos(x+y)\vert\,dx\,dy$ worked out.

Please help me understand the limits. I understand the following:
$(0,0)$ in the $xy$-plane corresponds to $(0,0)$ in the $uv$-plane.

The $x$-axis in the $xy$-plane corresponds to the $u$-axis in the $uv$-plane.

The $y$-axis in the $xy$-plane corresponds to the line $u = v$ in the $uv$-plane.

The line $x = \pi$ corresponds to $u = \pi + v$, $v \in (0, \pi)$.
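
Putting these correspondences together (my own summary, using $u = x + y$, $v = y$): the Jacobian of the substitution is

$$\left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \left|\det\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}\right| = 1,$$

and the square $0 \le x, y \le \pi$ maps to the parallelogram $\{(u,v) : 0 \le v \le \pi,\ v \le u \le v+\pi\}$, so that

$$I = \int_0^\pi \int_v^{v+\pi} |\cos u|\,du\,dv.$$

Since $|\cos u|$ has period $\pi$, the inner integral equals $\int_0^\pi |\cos u|\,du = 2$ for every $v$, which would give $I = 2\pi$; what I am missing is how to reshape this region into the limits quoted above.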

object-oriented design – Liskov substitution principle – an example of the history rule

I'm looking for a good explanation of the LSP history rule. I read the Wikipedia entry that states the rule and provides an example:

History constraint (the "history rule"). Objects are regarded as modifiable only through their methods (encapsulation). Because subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this. It was the novel element introduced by Liskov and Wing. A violation of this constraint can be exemplified by defining a mutable point as a subtype of an immutable point. This is a violation of the history constraint, because in the history of the immutable point the state is always the same after creation, so it cannot include the history of a mutable point in general. Fields added to the subtype may, however, be safely modified, because they are not observable through the supertype methods. Thus, a circle with a fixed center but a mutable radius can be derived from the immutable point without violating the LSP.
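
To make the quoted example concrete, here is a minimal Python sketch (class and method names are my own, not taken from Wikipedia):

class ImmutablePoint:
    def __init__(self, x, y):
        self._x, self._y = x, y

    def get_x(self):
        return self._x

    def get_y(self):
        return self._y

class MutablePoint(ImmutablePoint):
    def move(self, dx, dy):
        # Violates the history rule: this new method changes state that is
        # observable through the supertype's interface, producing histories
        # that are impossible for an ImmutablePoint.
        self._x += dx
        self._y += dy

class Circle(ImmutablePoint):
    def __init__(self, x, y, radius):
        super().__init__(x, y)
        self._radius = radius  # new field, invisible to supertype methods

    def set_radius(self, radius):
        # Allowed: the radius cannot be observed via ImmutablePoint's
        # methods, so histories seen through the supertype stay constant.
        self._radius = radius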

In my opinion there is a problem with this example. Introducing the subtype MutablePoint under the supertype ImmutablePoint already breaks the invariant requirement:

Supertype invariants must be preserved in the subtype.

Can you provide an example of a poor OO design where the history requirement is broken while the invariants are met?

Alternatively, can you provide another explanation of why this requirement is needed?

Regex – multi-pattern substitution along with spaces in Python

Input: "good && Toast && guest & fast & slow || wind || old || new || very good"
Requirement: replace "&&" with "and" (and similarly "||" with "or"), so my output for the above should be as follows.
Expected output: "good and toast && guest & fast & slow || wind || old || new or very good"

What I tried:

import re

new_ = {
    '&&': 'and',
    '||': 'or',
}

inpStr = "good && toast&&guest &fast& slow||wind ||old|| new || very good"
# Build a pattern like (\s\&\&\s|\s\|\|\s) and look the whole match
# (including the surrounding spaces) up in the dictionary.
replDictRe = re.compile(r'(\s%s\s)' % r'\s|\s'.join(map(re.escape, new_.keys())))
oidDesStr = replDictRe.sub(lambda mo: new_.get(mo.group(), mo.group()), inpStr)
print(oidDesStr)
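
For comparison, here is a variant that does produce the expected output (my own sketch, not the original attempt): it captures the whitespace separately, so the dictionary lookup sees the bare "&&" or "||" token.

import re

repl = {'&&': 'and', '||': 'or'}
inpStr = "good && toast&&guest &fast& slow||wind ||old|| new || very good"

# Match && or || only when surrounded by whitespace, keeping the spaces
# in their own groups so they can be stitched back around the replacement.
pattern = re.compile(r'(\s)(%s)(\s)' % '|'.join(map(re.escape, repl)))
result = pattern.sub(lambda m: m.group(1) + repl[m.group(2)] + m.group(3), inpStr)
print(result)
# good and toast&&guest &fast& slow||wind ||old|| new or very good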

Runtime analysis – The iterative substitution method gives a different solution for $T(n) = 3T(n/8) + n$ than expected from the master theorem

I would like to estimate the running time of the recurrence $T(n) = 3T(n/8) + n$ using the iterative substitution method. With the master theorem I can check that the answer is $O(n)$. With the substitution method, however, I arrive at a different answer than expected.

$T(n) = 3T(n/8) + n \\
= 3(3T(n/8^2) + n) + n = 3^2 T(n/8^2) + 3n + n \\
= 3^2(3T(n/8^3) + n) + 3n + n = 3^3 T(n/8^3) + 3^2 n + 3n + n$

We can see the pattern: $3^i T(n/8^i) + n \cdot \frac{3^i - 1}{2}$.

The recursion ends when $i = \log_8 n$.

Inserting $i$ into the discovered pattern,
$3^{\log_8 n} T(1) + n \cdot \frac{3^{\log_8 n} - 1}{2} = n^{\log_8 3} \cdot c + 0.5n \cdot n^{\log_8 3} - 0.5n = n^{\log_8 3} \cdot c + 0.5 n^{\log_8 3 + 1} - 0.5n \in O(n^{1.52})$.

What am I doing wrong? Why is my answer not $O(n)$?
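
As a quick numeric sanity check (my own addition, with the assumed base case $T(1) = 1$), evaluating the recurrence directly shows $T(n)/n$ settling near a constant, consistent with the master-theorem answer $O(n)$:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 3*T(n/8) + n with T(1) = 1, using integer division.
    if n <= 1:
        return 1
    return 3 * T(n // 8) + n

for k in range(1, 8):
    n = 8 ** k
    print(n, T(n) / n)  # the ratio approaches 8/5 = 1.6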