## Proofing techniques – using a comparison tree to find the minimum number of comparisons needed to merge sorted lists

Suppose `A` is a sorted list of length `n` and `B` is a sorted list of length 2. I am asked to find the minimum number of comparisons an algorithm must perform to merge these lists. I was also given an analysis that does not use a comparison tree but argues combinatorially that the number of leaves must be $$\binom{n+1}{2}$$ and that the height $$h$$ must satisfy $$2^h \geq \binom{n+1}{2}$$.

However, I get a different answer when I actually draw the comparison tree. As I see it, the root has two branches, one corresponding to `a1 < b1` and one to `a1 > b1`. If you follow the right branch and then go right again, this corresponds to `a1 > b1` and `a1 > b2`, and at that point the tree ends in a leaf. No further comparisons are required, since the algorithm can simply place both `b`s at the front of the merged list and append the rest of the `a`s after them.

That is not the interesting part, but hopefully this simple case makes clear how I am reasoning. The leftmost branch, where you get `a1 < b1`, then `a2 < b1`, and so on down to `an < b1`, corresponds to going left, then left, … then left. That branch contains `n` comparisons. In fact, I think you can get one more: instead of ending with `an < b1`, go right at the end with `an > b1` and then compare against `b2`, giving `an < b2` (or `an > b2`; it doesn't matter at this point, since both children are leaves). So the tree has height exactly `n+1`, right? Or am I wrong?
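As a sanity check on the count above, here is a small sketch (my own illustration, not part of the original analysis) of a standard two-pointer merge that counts element comparisons; for `len(a) = n` and `len(b) = 2` its worst case is `n + 1` comparisons, matching the height argued above.

```python
# Standard two-pointer merge that also counts element comparisons.
def merge_count(a, b):
    i = j = comparisons = 0
    out = []
    while i < len(a) and j < len(b):
        comparisons += 1              # exactly one comparison per iteration
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])                 # one list is exhausted; no more comparisons
    out.extend(b[j:])
    return out, comparisons
```

For example, merging `[1, 3, 5]` with `[4, 6]` takes 4 = 3 + 1 comparisons, because both elements of `b` are only resolved near the end.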

## Grammar proofing and proofreading service with 1000-word grammar check for \$8

#### Grammar proofing and proofreading service [1000 words]

Grammar check & proofreading service.
Have your articles, essays, letters, blog posts, etc. checked, corrected, and edited.

We will review and correct all grammatical errors, misspellings, and other writing problems using a premium grammar service.

100% SATISFACTION GUARANTEED OR MONEY BACK

## Proofing Procedure – Proving a first-order logic theorem in equational logic using a term rewriting system

I am trying to translate a theorem originally stated in first-order logic (FOL) into a combination of equational logic (EL) and Boolean logic (BL) (more precisely, a model of Boolean algebra), and then to prove it there. The target language also allows Skolemization (Sk), so the translation task is from FOL to EL + BL + Sk. My motivation is that if the translation and the subsequent proof in EL + BL + Sk are correct, I should be able to carry out the proof with a term rewriting system (TRS), since a TRS can be used to prove equational theories. Because EL + BL is a sub-logic of FOL and Skolemization yields an equisatisfiable system, the hope is that a valid proof in EL + BL + Sk is a valid proof of the original FOL theorem. Below is an example FOL theorem and my attempt to prove it with natural deduction, followed by my attempt to translate and prove it in EL + BL + Sk. See the notes on the translation / proof below.

My questions are:

Is the preliminary translation from FOL to EL + BL + Sk correct?

Is the preliminary proof in EL + BL + Sk correct?

Does the proof in EL + BL + Sk count as a proof of the original FOL theorem? I am not sure how syntactic consequence ($$\vdash$$) in FOL relates to semantic entailment ($$\models$$) in EL + BL + Sk. Does $$\Gamma \models_{EL+BL+Sk} \varphi \iff \Gamma \vdash_{FOL} \varphi$$ hold?

Example-FOL formulas

There is at least one person whom every person likes: $$\exists y\, \forall x: Likes(x, y)$$

Every person likes at least one person: $$\forall x\, \exists y: Likes(x, y)$$

I want to prove: $$(\exists y\, \forall x: Likes(x, y)) \vdash (\forall x\, \exists y: Likes(x, y))$$

Natural Deduction (ND) proof

The ND proof uses syntactic consequence $$\Gamma \vdash \varphi$$, which means the sentence $$\varphi$$ can be derived from the assumptions $$\Gamma$$.
\begin{align*}
& \textbf{FOL theorem:} ~~ (\exists y\, \forall x: Likes(x, y)) \vdash (\forall x\, \exists y: Likes(x, y)) \\
& \\
& \textbf{Notation for EL + BL + Sk:} \\
& x && \text{universally quantified variable} \\
& c && \text{Skolem constant} \\
& d && \text{constant for the term to be proved} \\
& \mathtt{skFun} && \text{Skolem function} \\
& \mathtt{Likes} && \text{Boolean operation} \\
& \mathtt{true} && \text{Boolean constant} \\
& \\
& \textbf{Translation of the theorem into EL + BL + Sk:} \\
& (\forall x: (\mathtt{skFun}(x) = c,\ \mathtt{Likes}(x, c))) \models (\forall x: \mathtt{Likes}(x, \mathtt{skFun}(x))) \\
& \\
& \textbf{Proof in EL + BL + Sk:} \\
& \textbf{1} ~~ \forall x: \mathtt{Likes}(x, c) = \mathtt{true} && \text{assumption with Skolem constant } c \\
& \textbf{2} ~~ \forall x: \mathtt{skFun}(x) = c && \text{interpretation of the Skolem function via the Skolem constant } c \\
& \textbf{3} ~~ \mathtt{Likes}(d, \mathtt{skFun}(d)) && \text{universal elimination and Skolemization of the term to be proved} \\
& \textbf{4} = \mathtt{Likes}(d, c) && \text{apply 2 to the second argument of 3} \\
& \textbf{5} = \mathtt{true} && \text{apply 1 to 4}
\end{align*}
Notes on the translation / proof:

• The EL + BL + Sk proof relies on an interpretation, so the translation requires semantic entailment $$\models$$. In general this is written $$\Gamma \models_{EL+BL+Sk} \varphi$$, which means that the sentence $$\varphi$$ is true in all EL + BL + Sk models of $$\Gamma$$.

• In the EL, all variables are considered to be universally quantified.

• Existential variables in FOL that are not in the scope of universals are translated into Skolem constants.

• Existential variables in FOL that are in the scope of universals are translated into Skolem functions, e.g. $$\mathtt{skFun}(x)$$ with universal argument $$x$$, when the original existential was within the scope of $$x$$.

• Each predicate in FOL becomes a Boolean operation in EL + BL + Sk, e.g. the predicate $$Likes$$ becomes the Boolean operation $$\mathtt{Likes}$$.

• In EL, terms are considered distinct unless they are made equal by equations.

Below is the listing in CafeOBJ with a TRS. The command `red` reduces a given term by treating the declared equations as left-to-right rewrite rules.

```
mod! LIKES {
  [Person]
  pred Likes : Person Person
  op c : -> Person .
  op d : -> Person .
  op skFun : Person -> Person .
  -- Hypotheses
  eq [assumption] : Likes(x:Person, c) = true .
  eq [skolem] : skFun(x:Person) = c .
}
open LIKES .
red Likes(d, skFun(d)) .
--> gives true
```
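To illustrate what `red` does, here is a toy sketch of left-to-right rewriting in Python (my own illustration: it matches terms textually with regular expressions, whereas a real TRS like CafeOBJ matches term structure):

```python
import re

# Toy left-to-right rewrite rules mirroring the two CafeOBJ equations.
# Patterns are textual stand-ins for structural term matching.
rules = [
    (re.compile(r"skFun\([a-z]\)"), "c"),        # eq [skolem] : skFun(x) = c
    (re.compile(r"Likes\([a-z],c\)"), "true"),   # eq [assumption] : Likes(x,c) = true
]

def reduce_term(term):
    # Apply rules repeatedly until no rule changes the term (a normal form).
    changed = True
    while changed:
        changed = False
        for pattern, rhs in rules:
            new = pattern.sub(rhs, term, count=1)
            if new != term:
                term, changed = new, True
    return term
```

Reducing `Likes(d,skFun(d))` first rewrites the inner `skFun(d)` to `c`, then the whole term to `true`, just as `red` reports.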

## Which algorithm adjusts the proof-of-work difficulty in a proof-of-work-based cryptocurrency?

For example, Bitcoin has a proof-of-work-based consensus algorithm. This is how Bitcoin selects a node to create the next block in the blockchain, and how the network agrees on that block's creation. Part of the block creation process is issuing the next proof-of-work puzzle to the network, and the difficulty of that puzzle can change over time. Bitcoin tries to keep the proof of work hard enough, but not too hard, so that block creation times stay around 10 minutes.

Which algorithm sets this difficulty?
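For reference, Bitcoin's retargeting rule rescales the target every 2016 blocks by the ratio of the actual to the expected timespan, clamped to a factor of 4 in either direction. A simplified sketch (`next_target` is my own illustrative name; real implementations work on the compact `nBits` encoding of the target):

```python
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds
RETARGET_INTERVAL = 2016              # blocks between difficulty adjustments

def next_target(old_target, actual_timespan):
    # Clamp so difficulty changes by at most a factor of 4 per retarget.
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    # A larger target means easier blocks: if blocks came too fast,
    # the target shrinks (difficulty rises), and vice versa.
    return old_target * actual_timespan // TARGET_TIMESPAN
```

If the last 2016 blocks arrived in half the expected time, the target halves, doubling the difficulty.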

## Logic – Automatic proof checking

Sorry for the noob question: is there an automatic proof checker?

I mean, a kind of programming language that validates the steps of a proof.

I'm not talking about an automatic theorem prover, just a way to computationally validate a proof using axioms and previously validated proofs.

What I am looking for is a way of producing proofs that resembles programming a computer.
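Tools of exactly this kind exist: proof assistants such as Lean, Coq, Isabelle, and Agda mechanically check every step of a proof against the axioms and previously proved lemmas. A tiny illustrative example in Lean 4 syntax:

```lean
-- A machine-checked proof: the checker validates each step.
theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```

Writing such a proof feels much like programming: the theorem is a type, the proof is a program of that type, and the checker rejects any ill-formed step.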

## Proofing – How to prove a recursive function is big-theta without using repeated substitution, the Master Theorem, or the closed form?

I have defined a function $$V(j, k)$$ where $$j, k \in \mathbb{N}$$, $$t \in \mathbb{N}$$ with $$t > 0$$, and $$1 \leq q \leq j - 1$$. Note that $$\mathbb{N}$$ includes $$0$$.

$$V(j, k) = \begin{cases} tj & k \leq 2 \\ tk & j \leq 2 \\ tjk + V(q, k/2) + V(j - q, k/2) & j, k > 2 \end{cases}$$

I am not allowed to use repeated substitution, and I want to prove the bound by induction. I cannot use the Master Theorem because the recursive part is not in the required form. Any ideas how I can solve it within these limitations?

When I start the induction, I fix $$j, q$$ and induct on $$k$$. The base case is $$k = 0$$, where $$V(j, 0) = tj$$. The question hinted that the function may be $$\Theta(jk)$$ or maybe $$\Theta(j^2k^2)$$ (but it does not necessarily have to be either).

I chose $$\Theta(jk)$$. In the base case, this would mean I have to prove that $$tj = \Theta(jk)$$ when $$k = 0$$. But if I start with the big-O part, I have to show $$tj \leq m \cdot jk = m \cdot j \cdot 0 = 0$$, which I do not think is possible.

I am not sure if I did the basic case wrong or if there is another approach.
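One way to build intuition before attempting the induction is to evaluate $$V$$ numerically. This sketch is my own illustration, under the assumptions $$t = 1$$, the fixed split $$q = \lfloor j/2 \rfloor$$, and integer halving for $$k/2$$ (the original problem leaves these choices open):

```python
# Evaluate V numerically to see which of Theta(jk) or Theta(j^2 k^2)
# is plausible, under the illustrative assumptions t = 1, q = j // 2.
def V(j, k, t=1):
    if k <= 2:
        return t * j
    if j <= 2:
        return t * k
    q = j // 2                  # any 1 <= q <= j - 1 is allowed; we fix one
    return t * j * k + V(q, k // 2, t) + V(j - q, k // 2, t)
```

For instance, `V(64, 64)` evaluates to 8000, roughly `2 * 64 * 64`, which under these assumptions is consistent with the $$\Theta(jk)$$ guess rather than $$\Theta(j^2k^2)$$.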

## Proofing – How to prove by contradiction that every non-empty hereditary language contains the empty string?

A language L is called hereditary if it has the following property:

For every non-empty string x in L, there is a character in x that can be deleted from x to get another string in L.

Prove by contradiction that every non-empty hereditary language contains the empty string.

Here is my attempt:

To derive a contradiction, suppose that for every non-empty string x in L, there is no character in x that can be deleted from x to yield another string in L.

This would mean that deleting a character from x leaves the empty string. Since the empty string is also a string, every non-empty hereditary language contains the empty string.

I am not quite sure how to prove by contradiction. Can someone help to verify this?
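To make the definition concrete, the hereditary property can be checked mechanically on finite examples. This is my own illustration (`is_hereditary` is a hypothetical helper, not from the problem statement):

```python
# Check the hereditary property for a finite language over strings:
# every non-empty word must have some single-character deletion in the language.
def is_hereditary(lang):
    lang = set(lang)
    for x in lang:
        if x == "":
            continue
        if not any(x[:i] + x[i + 1:] in lang for i in range(len(x))):
            return False
    return True
```

For example, `{"", "a", "ab"}` is hereditary, while `{"ab"}` is not, which hints at why the empty string must eventually appear.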

## Proofing – How do we prove the time complexity of this simple probabilistic problem in a Bayesian network?

Maybe a rather trivial question, but I am trying to refresh my proof skills in CS …

Suppose we have a simple Bayesian network with two rows of nodes: $$x_1, x_2, \ldots, x_n$$ and $$y_1, y_2, \ldots, y_n$$. Each node $$x_k$$ takes state 0 or 1 with equal probability. Each node $$y_k$$ takes state 1 with probability $$p_k$$ if $$x_k$$ is in state 1, and with probability $$q_k$$ if $$x_k$$ is in state 0.

Is exponential time required to compute the probability that all $$y_k$$ are 1, and if so, what is a suitable CS proof of that?
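For context, the naive computation sums over all $$2^n$$ joint states of the $$x_k$$, which is where the exponential cost in the question comes from. A sketch (my own illustration; `prob_all_y_one` is a hypothetical name):

```python
from itertools import product

# Naive computation: sum the joint probability over all 2^n assignments of x.
def prob_all_y_one(p, q):
    n = len(p)
    total = 0.0
    for x in product([0, 1], repeat=n):        # 2^n assignments
        joint = 0.5 ** n                       # each x_k is 0 or 1 with prob 1/2
        for k, xk in enumerate(x):
            joint *= p[k] if xk == 1 else q[k]  # P(y_k = 1 | x_k)
        total += joint
    return total
```

Whether this exponential enumeration is actually necessary, given the very simple pairwise structure of the network, is exactly what the question asks.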

## Proofing Techniques – Prove that every complete prefix-free language is maximal

I am practicing on a problem where I have to prove that every complete prefix-free language is maximal.

I know that a prefix-free language A is maximal if it is not a proper subset of another prefix-free language, and that a prefix-free language A is complete if

$$\sum_{x \in A} 2^{-|x|} = 1$$

Also, I know that a language A ⊆ {0, 1}* is prefix-free if no element of A is a proper prefix of another element of A, and that Kraft's inequality says that for every prefix-free language A,

$$\sum_{x \in A} 2^{-|x|} \leqslant 1$$

I'm pretty sure that a complete prefix-free language is maximal, but I do not know how to prove it formally. Should I work with Kraft's inequality and what I know about the relationship between maximal and prefix-free sets?

Any help would be appreciated!
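The intuition behind the intended argument can be seen numerically: if the Kraft sum of A is already 1, adding any new word w would push the sum above 1, contradicting Kraft's inequality for the enlarged prefix-free language. A small sketch (my own; `kraft_sum` is an illustrative helper):

```python
# Kraft sum of a finite language over the binary alphabet {0, 1}.
def kraft_sum(lang):
    return sum(2 ** -len(x) for x in lang)
```

For example, `{"0", "10", "11"}` has Kraft sum 1 (complete), while `{"0", "10"}` has sum 0.75 and can still be extended with `"11"` while staying prefix-free, so it is not maximal.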