functional programming – Proving injectivity for an algorithm computing a function between sets of different types of partitions

I am attempting to solve the following problem:

Let $A$ be the set of partitions of $n$ with elements $(a_1, \dots, a_s)$ such that $a_i > a_{i+1}+a_{i+2}$ for all $i < s,$ taking $a_{s+1} = 0.$ Define $g_k = F_{k+2}-1$ and let $B$ be the set of partitions of $n$ as $b_1 \ge \dots \ge b_s$ such that $b_i \in \{g_1, g_2, \dots\}$ for all $i,$ and if $b_1 = g_k$ for some $k,$ then $g_1, \dots, g_k$ all appear as some $b_i.$ Prove $|A|=|B|.$

Attempt: Let $e_i$ be the vector with $1$ at position $i$ and $0$ elsewhere. If $b_1 = g_k,$ let $c=(c_k, \dots, c_1),$ where $c_i$ counts how many times $g_i$ appears in $b.$ We compute $f: B \to A$ as follows:

Let $c=(c_k,\dots,c_1), a=(0,\dots,0).$ While $c \ne 0,$ let $d_1 > \dots > d_j$ be the indices such that $c_{d_i} \ne 0.$ Replace $c, a$ with $c-(e_{d_1}+\dots+e_{d_j}), a+(g_{d_1} e_1 + \dots + g_{d_j} e_j)$ respectively. After the while loop ends, let $f(b)=a.$

Let $\sum a, \sum b, \sum c$ be the sums of the components of $a, b, c$ respectively. Since $\sum c$ decreases after every loop iteration, the algorithm terminates and $f(b)$ is well-defined. Since $c_k g_k + \dots + c_1 g_1 + \sum a$ does not change across iterations, and it equals $\sum b$ at the start and $\sum a$ at the end, we have $\sum f(b) = \sum b = n,$ so $f(b)$ is also a partition of $n.$ Now $a = (g_k, \dots, g_1)$ after the first iteration, which satisfies the condition $g_i > g_{i-1}+g_{i-2}$ since $g_i = F_{i+2}-1 = (F_{i+1}-1)+(F_i-1)+1 > g_{i-1}+g_{i-2}.$ Furthermore, after every iteration of the loop, the difference $a_i - (a_{i+1}+a_{i+2})$ changes by $0,$ by $g_{d_i}-g_{d_{i+1}} > 0,$ or by $g_{d_i}-(g_{d_{i+1}}+g_{d_{i+2}}) > 0,$ so we have $a_i > a_{i+1} + a_{i+2}$ at the end and hence $f(b) \in A.$ Thus, $f: B \to A$ is well-defined.

In order to prove the injectivity of $f,$ it suffices to prove that each loop iteration, viewed as a mapping $(c,a) \to (c',a'),$ is injective, which would imply that the mapping $(c,0) \to (0,a)$ that the while loop creates is injective. Indeed, if $f(b_1) = f(b_2) = a$ with $(c_1, 0), (c_2, 0)$ being sent to $(0, f(b_1)) = (0,a), (0, f(b_2)) = (0,a)$ respectively, then we have $(c_1, 0) = (c_2, 0) \Rightarrow c_1 = c_2 \Rightarrow b_1 = b_2.$

Suppose $d_1 > \dots > d_i$ and $f_1 > \dots > f_j$ are the nonzero indices of $c_1, c_2$ respectively, and that $c_1 - (e_{d_1}+\dots+e_{d_i}) = c_2 - (e_{f_1}+\dots+e_{f_j})$ and $a_1+g_{d_1}e_1 + \dots+ g_{d_i} e_i = a_2 + g_{f_1} e_1 + \dots + g_{f_j} e_j.$ If $x \ge 2$ is an entry of $c_1,$ it decreases by $1,$ so the corresponding entry in $c_2$ after $c_2$ is modified is also $x-1,$ which means it must have been $(x-1)+1 = x$ before, since $x-1>0.$ Thus, if the values of two positions of $c_1, c_2$ differ, one is $1$ and the other is $0.$ However, if $c_1 = (1,0), a_1 = (3,1), c_2 = (0,1), a_2 = (4,1),$ then $(a_1, c_1), (a_2, c_2)$ both get sent to $((5,1), (0,0)).$ I can rule out this specific example by arguing that one of the pairs is illegal and could not have come from any choice of initial $c,$ but I have no idea how to do this in general.

What should I do next in order to show $f$ is injective? Furthermore, since the problem I’m trying to prove is correct, injectivity would imply $f$ is secretly a bijection. But I have no clue on how to even start on the surjectivity of $f,$ so I just constructed a similar algorithm for $g: A to B$ in the hopes of proving $g$ is injective too. If I can show $f$ is injective I will probably know how to show $g$ is.

Here is an example of $f, g$ in practice:

Let $n = 41, b = (12, 7, 7, 4, 4, 2, 2, 2, 1) \Rightarrow c = (1, 2, 2, 3, 1)$ (here $g_1, \dots, g_5 = 1, 2, 4, 7, 12$).

$$((1, 2, 2, 3, 1), (0,0,0,0,0)) \to ((0, 1, 1, 2, 0), (12, 7, 4, 2, 1)) \to ((0, 0, 0, 1, 0), (19,11,6,2,1)) \to ((0,0,0,0,0),(21,11,6,2,1)),$$ so $f(b) = (21,11,6,2,1).$

Let $a = (21, 11, 6, 2, 1).$

$$((21,11,6,2,1),(0,0,0,0,0)) \to ((9,4,2,0,0), (1,1,1,1,1)) \to ((2,0,0,0,0),(1,2,2,2,1)) \to ((0,0,0,0,0),(1,2,2,3,1)),$$ so $g(a) = (12, 7, 7, 4, 4, 2, 2, 2, 1).$
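The map $f$ described above is mechanical enough to sanity-check in code. Below is a short Python sketch (the helper names `g_values` and `f` are mine, not from the problem); it reproduces the worked example, with $g_1, g_2, g_3, \dots = 1, 2, 4, 7, 12, \dots$

```python
# Sketch of the map f : B -> A described above (names are mine, not from the problem).

def g_values(k):
    """Return [g_1, ..., g_k] where g_i = F_{i+2} - 1, i.e. 1, 2, 4, 7, 12, ..."""
    fib = [1, 1]                            # fib[i] = F_{i+1}
    while len(fib) < k + 2:
        fib.append(fib[-1] + fib[-2])
    return [fib[i + 1] - 1 for i in range(1, k + 1)]

def f(b):
    """Apply the while-loop algorithm to a partition b in B; returns f(b) in A."""
    k = 1
    while g_values(k)[-1] < b[0]:           # find k with b_1 = g_k
        k += 1
    g = g_values(k)                         # g[i-1] = g_i
    c = [b.count(g[i]) for i in range(k)]   # c[i] = multiplicity of g_{i+1} in b
    a = [0] * k
    while any(c):
        # d_1 > d_2 > ... : the indices (0-based here) with nonzero count
        d = [i for i in range(k - 1, -1, -1) if c[i]]
        for pos, i in enumerate(d):
            c[i] -= 1                       # c <- c - (e_{d_1} + ... + e_{d_j})
            a[pos] += g[i]                  # a <- a + (g_{d_1} e_1 + ... + g_{d_j} e_j)
    return tuple(a)
```

On the example above, `f((12, 7, 7, 4, 4, 2, 2, 2, 1))` returns `(21, 11, 6, 2, 1)`, matching the hand computation.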

Help proving one direction of an iff for a self learner in differential equations

I am self-learning differential equations from the book “Differential Equations with Applications and Historical Notes” by George Simmons. The following problem is the one I am having an issue with:

Given the homogeneous equation $y'' + P(x)y' + Q(x)y = 0$, change the independent variable from $x$ to $z=z(x)$. Show that the homogeneous equation can be transformed through this change of variables into an equation with constant coefficients iff $\frac{Q' + 2PQ}{Q^{3/2}}$ is constant, in which case $z = \int \sqrt{Q(x)}\,dx$ will effect the desired result.

As of now I have solved the “only if” direction with the following math:

Let $ z = z(x)$. We have the following:

$$\frac{df}{dx} = \frac{df}{dz}\frac{dz}{dx} = z'(x) \frac{df}{dz} \Rightarrow \frac{d}{dx} \rightarrow z'(x)\frac{d}{dz}$$

for the first derivative, and for the second derivative we have:

$$\frac{d^{2}f}{dx^{2}} = \frac{d}{dx}\left(z'(x)\frac{df}{dz}\right) = \frac{d}{dz}\left(z'(x)\frac{df}{dz}\right)z'(x) = (z'(x))^{2}\frac{d^{2}f}{dz^{2}} + z'(x)\frac{d}{dz}\left(z'(x)\right) \frac{df}{dz} = \ldots$$
$$\ldots = (z'(x))^{2}\frac{d^{2}f}{dz^{2}} + z'(x)\frac{d}{dx}\left(z'(x)\right)\frac{dx}{dz} \frac{df}{dz}$$

$$\frac{dx}{dz} = \frac{d}{dz}\left(z^{-1}(z)\right) = \frac{1}{z'(x)}$$
$$\frac{d^{2}f}{dx^{2}} = (z'(x))^{2}\frac{d^{2}f}{dz^{2}} + z''(x)\frac{df}{dz} \Rightarrow \frac{d^{2}}{dx^{2}} \rightarrow (z'(x))^{2}\frac{d^{2}}{dz^{2}} + z''(x)\frac{d}{dz}$$

and all together we have the following three equations, which combined gives us the transformed differential equation.

$$y'' + P(x)y' + Q(x)y = 0$$

$$\frac{d}{dx} \rightarrow z'(x)\frac{d}{dz}$$

$$\frac{d^{2}}{dx^{2}} \rightarrow (z'(x))^{2}\frac{d^{2}}{dz^{2}} + z''(x)\frac{d}{dz}$$

$$y'' + \left(\frac{P(x)}{z'(x)} + \frac{z''(x)}{(z'(x))^{2}}\right)y' + \frac{Q(x)}{(z'(x))^{2}}y = 0$$

Now suppose that $\frac{Q(x)}{(z'(x))^{2}} = c_{2}$ and that $\frac{P(x)}{z'(x)} + \frac{z''(x)}{(z'(x))^{2}} = c_{1}$. Plug $z'(x) = \sqrt{\frac{Q(x)}{c_{2}}}$ and $z''(x) = \frac{Q'(x)}{2\sqrt{c_{2}Q(x)}}$ into the equation for $c_{1}$, and we get that:

$$\frac{2PQ + Q'}{Q^{3/2}} = \frac{2c_{1}}{\sqrt{c_{2}}} = \text{constant}$$
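As a numeric sanity check of this computation, here is a small Python sketch on an example equation of my own choosing (not from Simmons): for $y'' + \tan x \, y' + \cos^2 x \, y = 0$ on $(-\pi/2, \pi/2)$, the criterion is identically $0$, and the substitution $z = \int \sqrt{Q}\,dx = \sin x$ makes both transformed coefficients constant.

```python
import math

# Example coefficients (my choice): y'' + tan(x) y' + cos(x)^2 y = 0 on (-pi/2, pi/2).
def P(x): return math.tan(x)
def Q(x): return math.cos(x) ** 2
def dQ(x): return -2.0 * math.sin(x) * math.cos(x)      # Q'(x)

def criterion(x):
    """(Q' + 2PQ) / Q^(3/2); should be constant (here identically 0)."""
    return (dQ(x) + 2.0 * P(x) * Q(x)) / Q(x) ** 1.5

# Substitution z = sin(x), so z'(x) = sqrt(Q) = cos(x) and z''(x) = -sin(x).
def c1(x):
    """Coefficient of y' in the transformed equation: P/z' + z''/(z')^2."""
    zp, zpp = math.cos(x), -math.sin(x)
    return P(x) / zp + zpp / zp ** 2

def c2(x):
    """Coefficient of y in the transformed equation: Q/(z')^2."""
    return Q(x) / math.cos(x) ** 2
```

Evaluating at several points of the interval shows `criterion` and `c1` stay at (numerically) $0$ and `c2` at $1$, i.e. the transformed equation is $\frac{d^2y}{dz^2} + y = 0$.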

I have tried various things for the other direction, but I can’t seem to make any progress.

expression simplification – Proving Properties of Boolean Algebras $(x+y) + (y\cdot x') = x+y$

I’m trying to justify the following simplification:
$(x + y) + (y \cdot x') = x + y$

The solution that was provided to me is as follows:

$$\begin{aligned}
& (x + y) + (y \cdot x') \quad \text{(LHS)} \\
&= x + y \cdot (1 + x') \\
&= x + y \cdot (1) \\
&= x + y
\end{aligned}$$

I’m a little confused by the jump from step 1 to step 2: what rule was applied, and why the switch from a disjunction to a conjunction?

Is anyone able to provide some insight into the reasoning?
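Not an explanation of the rule itself, but one quick way to convince yourself that the identity (and the disputed intermediate line) is at least true is an exhaustive truth-table check, reading $+$ as OR, $\cdot$ as AND, and $x'$ as NOT $x$ (this sketch is mine, not part of the provided solution):

```python
from itertools import product

def check_identity():
    """Exhaustively check (x + y) + (y.x') = x + y over all 0/1 assignments."""
    for x, y in product((0, 1), repeat=2):
        x_not = 1 - x
        lhs = (x | y) | (y & x_not)    # (x + y) + (y . x')
        mid = x | (y & (1 | x_not))    # x + y . (1 + x'), the disputed step
        rhs = x | y
        if not (lhs == mid == rhs):
            return False
    return True
```

`check_identity()` returns `True`, so every line of the provided solution has the same truth table.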

digital – Proving you were at a certain location on a certain day

If I wanted to show that I’ve visited a place (say Central Park) I can show a photo of myself in Central Park. Let’s say I wanted to show that I’ve visited Central Park for each day of the year in 2019. Then I would need 365 photos of myself in Central Park, one on each day of 2019. A quick Google search reveals a 2006 article saying that timestamps can easily be altered. Is there now a way to take photographs whose timestamps can’t easily be altered?

turing machines – proving $E_{TM}$ is undecidable using the halting language

You are right: assuming $E_{TM} \in R$, you have a Turing machine $T$ which decides $E_{TM}$, and with it you can construct a Turing machine which decides $H_{halt}$:

Suppose we have $T$ which decides $E_{TM}$ and we want to decide whether $M$ halts on $x$.
Construct a Turing machine $T_{M,x}$ which, irrespective of its input $y$, simulates $M$ on input $x$: if the simulation halts (whether $M$ accepts or rejects $x$), then $T_{M,x}$ accepts its input; otherwise, it never halts.

You can convince yourself that if $M$ halts on $x$, then $L(T_{M,x}) = \Sigma^*$, and if it doesn't, then $L(T_{M,x}) = \emptyset$.

Now you can figure out why this acts as a decider for the Halting problem.

real analysis – sequence characterization of adherent points: am I proving this result correctly?

Let $X$ be a subset of $\textbf{R}$, and let $x \in \textbf{R}$. Then $x$ is an adherent point of $X$ exactly when there exists a sequence $(a_{n})_{n = m}^{\infty}$, consisting exclusively of elements of $X$, that converges to $x$.

Let us prove the implication $(\Leftarrow)$ first.

Since $a_{n} \to x$, for each $\varepsilon > 0$ there is $N \geq m$ such that
\begin{align*}
n \geq N \Longrightarrow |a_{n} - x| \leq \varepsilon.
\end{align*}

Since $a_{n} \in X$, this means that no matter which $\varepsilon > 0$ you choose, there is an $a_{n} \in X$ such that $|a_{n} - x| \leq \varepsilon$.

Consequently, $x$ is an adherent point of $X$.

Conversely, let us prove $(\Rightarrow)$.

If $x$ is an adherent point of $X$, then for each $\varepsilon > 0$ there is an $a \in X$ such that $|x - a| \leq \varepsilon$.

In particular, for each $\varepsilon = 1/n$ there is an element $a_{n} \in X$ such that $|x - a_{n}| \leq 1/n$.

Taking the limit, we conclude that $\lim a_{n} = x$, as required.
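For what it's worth, here is a concrete numeric instance of the $(\Rightarrow)$ construction (the example is mine): take $X = (0, 1)$ and the adherent point $x = 0$, and pick $a_n = 1/(n+1) \in X$, which satisfies $|x - a_n| \leq 1/n$ for every $n$.

```python
# Concrete instance of the construction: X = (0, 1), adherent point x = 0,
# with the choice a_n = 1/(n + 1) for n = 1, 2, ..., 100.
x = 0.0
a = {n: 1.0 / (n + 1) for n in range(1, 101)}

all_in_X = all(0.0 < a[n] < 1.0 for n in a)            # each a_n is an element of X
all_close = all(abs(x - a[n]) <= 1.0 / n for n in a)   # the epsilon = 1/n bound
```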

Probability – How do you generally go about proving a statement?

I struggle to get started when it comes to proving statements. I am reading a book about stochastic processes, and at the beginning it offers some basic properties of probabilities, one of which is

$P(A^c) = 1 - P(A)$

and next it asks me to prove this as an exercise. As simple as that may seem, I just don't know how to go about proving such a statement myself.

Any guidance?
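One standard starting point, assuming your book's axioms include $P(\Omega) = 1$ and additivity over disjoint events: since $A$ and $A^c$ are disjoint with $A \cup A^c = \Omega$,

$$1 = P(\Omega) = P(A \cup A^c) = P(A) + P(A^c),$$

and rearranging gives $P(A^c) = 1 - P(A)$. More generally, the first move in such proofs is often to rewrite the event of interest as a disjoint union of events the axioms can handle.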

Elementary number theory – congruence: proving a set takes all these values?

I came across this question in an olympiad math book:

Let $\gcd(m, n) = 1$, $A = \{x \mid 0 \le x \le m-1, \gcd(x, m) = 1\}$ and $B = \{x \mid 0 \le x \le n-1, \gcd(x, n) = 1\}$. If $C = \{na + mb \mid a \in A, b \in B\}$, prove that, modulo $mn$, $C$ takes all the values $0 \le x \le mn-1$ with $\gcd(x, mn) = 1$.

I have successfully proven that $\gcd(mn, na + mb) = 1$ for all $a, b$. That implies that every $c \in C$, taken modulo $mn$, lies among these values.

However, I have difficulty continuing. My guess is that it has something to do with Euler's phi function, given that the numbers of elements in $A$ and $B$ are $\phi(m)$ and $\phi(n)$ respectively, and $C$ must take $\phi(mn) = \phi(m)\phi(n)$ values.

How do I prove that $C$ attains all such values?
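While working toward the proof, a brute-force check of the statement for small coprime pairs can be reassuring (code and helper names are mine); it also exhibits the count $\phi(m)\phi(n) = \phi(mn)$ directly:

```python
from math import gcd

def residues_hit(m, n):
    """The set {na + mb mod mn : a in A, b in B}, as defined in the problem."""
    A = [x for x in range(m) if gcd(x, m) == 1]
    B = [x for x in range(n) if gcd(x, n) == 1]
    return {(n * a + m * b) % (m * n) for a in A for b in B}

def units(mn):
    """The residues 0 <= x <= mn - 1 with gcd(x, mn) = 1."""
    return {x for x in range(mn) if gcd(x, mn) == 1}
```

For instance, `residues_hit(4, 9) == units(36)`: the $\phi(4)\phi(9) = 12$ sums hit exactly the $\phi(36) = 12$ residues coprime to $36$.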

Complexity Theory – Help understanding the question and how to start proving that a language is unrecognizable

I looked at this problem and I don't know how to start. Could someone please point me in the right direction?

Some help clarifying what the question is asking would also be appreciated. I am not 100% clear on what $L(M)$ means. Is it the language of the machine $M$? I have only ever seen questions phrased like "M is a machine that accepts all strings that end with 010", never before as $L(M) = \{\dots\}$.

Proving techniques – How to prove a recursive function is Big-Oh without using repeated substitution, the master theorem, or a closed form?

I have a function $V(j, k)$ defined with two base cases in terms of $j, k$, and the recursive part has an additional variable $q$ which it also uses, with $1 \leq q \leq j - 1$. The recursive part has the form: $$jk + V(q, k/2) + V(j - q, k/2)$$ I am not allowed to use repeated substitution, and I want to prove this by induction. I cannot use the master theorem because the recursive part is not in that form. Any ideas how I can solve it with the given limitations?
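If it helps to see the shape such an induction can take (a sketch under the assumption, which you would need to check, that the base cases are bounded by $2jk$): guess $V(j,k) \leq 2jk$ and push the guess through the recurrence. For any $1 \leq q \leq j-1$,

$$jk + V(q, k/2) + V(j-q, k/2) \leq jk + 2q \cdot \frac{k}{2} + 2(j-q) \cdot \frac{k}{2} = jk + jk = 2jk,$$

so the bound survives one level of recursion, and strong induction on $j$ (with the inductive hypothesis holding for all $k$) gives $V(j,k) = O(jk)$ without repeated substitution or the master theorem.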