Prove Correctness of Algorithm?

Given $n$ images placed at positions $x_1 < x_2 < \dots < x_n$ and an
unlimited number of guards, where a guard placed at position $y$
protects $[y-0.5,\, y+1]$, I want to protect all the images with the
minimal number of guards.

My suggestion for an algorithm:

Place a guard at $x_1+0.5$, then loop from $i=2$ to $i=n$: if image $x_i$ is already protected by the previously placed guard, do nothing; otherwise place a new guard at the point $x_i+0.5$.


I proved that my algorithm returns a valid solution, but I am stuck on proving that it returns a minimal one.

I am trying to prove this claim:

Let $s_i$ be the point where my algorithm placed guard $i$; then for each $i$ there is a minimal solution that places guards at the points $s_1, \dots , s_i$.

I went for induction and proved the base case $i=1$, but I am stuck on the inductive step. Any help?
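
For reference, here is a short sketch of the greedy rule described above (my own transcription in Python; it assumes the positions are given in sorted order):

def place_guards(xs):
    # xs: sorted image positions x_1 < x_2 < ... < x_n
    guards = []
    for x in xs:
        # a guard at g covers [g - 0.5, g + 1]; only the most recently
        # placed guard can still reach the current image
        if not guards or x > guards[-1] + 1:
            guards.append(x + 0.5)   # a new guard here covers [x, x + 1.5]
    return guards

print(place_guards([0, 1, 1.4, 3, 3.2, 5]))   # [0.5, 3.5, 5.5]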

operating systems – Prove correctness of a solution to the critical section problem in general?

I was wondering if there is any formal, general way to prove the correctness of a candidate solution to the critical section problem in synchronisation. For example, in the image enclosed, I have considered Peterson's solution to the critical section problem (reference: Operating System Concepts by Gagne).
The example is worked out for all possible scenarios of concurrent execution, and in every scenario the observation is that at most one process is inside the critical section, which proves the mutual exclusion property of the solution.
Doubt
Is there a better way to formally prove the above condition (and the other conditions, such as bounded waiting and progress)? I ask because the method I used above is really a brute-force method, and things can get pretty messy as the number of instructions increases. Thanks in advance.
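
For reference, here is a rough Python transcription of the two-process Peterson protocol referred to above (my own sketch, not the book's code). CPython's global interpreter lock stands in for the sequential-consistency assumption the textbook version makes, so this only illustrates the structure being proved; it is not how one would synchronise real programs.

import threading
import time

flag = [False, False]   # flag[i] is True while process i wants the critical section
turn = 0                # index of the process that must yield when both want in
counter = 0             # shared variable touched only inside the critical section

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True
    turn = other                          # politely give priority to the other process
    while flag[other] and turn == other:  # wait until it is safe to proceed
        time.sleep(0)                     # busy-wait, yielding the interpreter

def leave(i):
    flag[i] = False

def worker(i, iterations=1000):
    global counter
    for _ in range(iterations):
        enter(i)
        counter += 1                      # critical section
        leave(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                            # 2000 if mutual exclusion held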

algorithms – Is correctness implied by an optimality proof?

I'm new to proofs (in the context of analysis of algorithms).

I’m wondering: if I were to prove that a greedy algorithm produces an optimal solution, does this imply its correctness as well (partial correctness plus termination)?

I’m trying to understand how the greedy algorithm for the Job Sequencing Problem is correct, but I can only find proofs of its optimality. How can we be sure this algorithm is correct?

EDIT: I have found an excellent resource that uses a proof of optimality to show (I think) partial correctness as well. It is here but for the life of me I cannot understand it. A layman’s explanation would be amazing!

Proof of Correctness Request for Greedy Algorithm that solves “The Weighted Job Scheduling” problem

Today, in my self-led studies, I found out about greedy algorithms, more specifically a greedy approach to solving The Weighted Job Scheduling Problem.

I understand how the solution is implemented, but I’d love to see a proof of correctness for this solution (i.e. partial correctness and termination). If anyone can help me understand why this solution is mathematically correct in a general form, that’d be really great!

Proof of Correctness Request for Greedy Algorithm that solves “The Job Sequencing” problem

Today, in my self-led studies, I found out about greedy algorithms, more specifically a greedy approach to solving The Job Sequencing Problem.

I understand how the solution is implemented, but I’d love to see a proof of correctness for this solution (i.e. partial correctness and termination). If anyone can help me understand why this solution is mathematically correct in a general form, that’d be really great!
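
For concreteness, here is a short sketch of the standard greedy for Job Sequencing with deadlines (my own transcription, assuming each job is a (profit, deadline) pair and every job takes one unit of time): sort the jobs by decreasing profit and drop each one into the latest free time slot that is not after its deadline.

def job_sequencing(jobs):
    # jobs: list of (profit, deadline) pairs, one time unit per job
    jobs = sorted(jobs, reverse=True)        # most profitable first
    latest = max(d for _, d in jobs)
    slot = [None] * (latest + 1)             # slot[t] holds the job run at time t (1-based)
    total = 0
    for profit, deadline in jobs:
        t = deadline
        while t > 0 and slot[t] is not None: # latest free slot not after the deadline
            t -= 1
        if t > 0:
            slot[t] = (profit, deadline)
            total += profit
    return total

print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))   # 142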

algorithms – Proof of Correctness: Arranging the Sheep

I’ve come across a question from Codeforces Round 719 (Div. 3).

The problem goes like this:

[Problem statement image: a string of '.' (empty cells) and '*' (sheep) is given; in one move a sheep shifts one cell to the left or right, and the task is the minimum number of moves needed to make all the sheep occupy consecutive cells.]

I was able to solve the problem using another approach, but it required $4n$ auxiliary space, where $n$ is the length of the input string. The solution given in the editorial is far more efficient.

The editorial goes like this:


It basically says to choose the sheep whose number is $\lceil k/2 \rceil$ as the pivot. (In the editorial this is written as $n/2$, which is wrong; take it to be $k/2$, where $k$ is the number of sheep in the given string.)

Here’s my doubt: why should the $\lceil k/2 \rceil$-th sheep make 0 moves in an optimal solution? I’ve searched the internet but couldn’t find a proof. Can someone give me a generalized proof of this? Thanks in advance 🙂

Problem Link :- https://codeforces.com/contest/1520/problem/E

Editorial Link :- https://codeforces.com/blog/entry/90342

Note :- The editorial link has the solutions for all the problems; scroll down to find the editorial for the Arranging the Sheep problem.
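
For what it’s worth, here is a short sketch of the median-pivot idea being discussed (my own Python transcription, not the editorial's code): keep the median sheep fixed, line the others up around it, and sum the distances.

def min_moves(s: str) -> int:
    pos = [i for i, c in enumerate(s) if c == '*']   # sheep positions
    if not pos:
        return 0
    m = len(pos) // 2                 # zero-based index of the median sheep
    anchor = pos[m] - m               # left end of the final block if the median stays put
    return sum(abs(p - (anchor + i)) for i, p in enumerate(pos))

print(min_moves('**.*..'))   # 1: move the last sheep one step to the left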

Proof of correctness for the DP word wrap problem?

I was reading another post about the time complexity of the word wrap problem ("the word wrap problem states that, given a sequence of words as input, we need to find the number of words that can be fitted in a single line at a time"), and the code below is what they used to define their algorithm. I’ve defined the recurrence for the problem as $DP(i) = \min_{i < j \le n}\big(DP(j) + \mathrm{badness}(i, j)\big)$, with $DP(n) = 0$.

import math

class Text(object):
    def __init__(self, words, width):
        self.words = words
        self.page_width = width
        self.str_arr = words
        self.memo = {}

    def total_length(self, words):
        # characters in the words plus one space per word
        total = 0
        for w in words:
            total += len(w)
        total += len(words)  # spaces
        return total

    def badness(self, words):
        line_len = self.total_length(words)
        if line_len > self.page_width:
            return float('inf')   # an over-full line is never chosen
        return math.pow(self.page_width - line_len, 3)

    def dp(self):
        n = len(self.str_arr)
        self.memo[n] = 0          # base case: no words left costs nothing
        return self.judge(0)

    def judge(self, i):
        # DP(i): minimal total badness of wrapping words i..n-1
        if i in self.memo:
            return self.memo[i]

        best = float('inf')
        for j in range(i + 1, len(self.str_arr) + 1):
            # words[i:j] form the next line; the rest is solved recursively
            bad = self.judge(j) + self.badness(self.str_arr[i:j])
            if bad < best:
                best = bad

        self.memo[i] = best
        return best
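
As a quick sanity check, a tiny made-up instance (the words and the page width here are arbitrary); with the cost model above the best split is "aaa" / "bb cc" / "ddddd" and the call prints 8.0:

text = Text(["aaa", "bb", "cc", "ddddd"], 6)
print(text.dp())   # 8.0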

How to prove correctness of the binary tree inversion algorithm?

Define the inversion of a binary tree as the tree whose left sub-tree is a mirror reflection of the original tree’s right sub-tree around the center, and whose right sub-tree is a mirror reflection of the original tree’s left sub-tree.

Consider the following binary tree inversion algorithm (source: LeetCode):

/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode() {}
 *     TreeNode(int val) { this.val = val; }
 *     TreeNode(int val, TreeNode left, TreeNode right) {
 *         this.val = val;
 *         this.left = left;
 *         this.right = right;
 *     }
 * }
 */
public TreeNode invertTree(TreeNode root) {
    if (root == null) return null;        // an empty tree is its own inversion
    TreeNode tmp = root.left;             // save the left subtree before overwriting it
    root.left = invertTree(root.right);   // the inverted right subtree becomes the left
    root.right = invertTree(tmp);         // the inverted (saved) left subtree becomes the right
    return root;
}

I sought to prove it inductively on the depth of the tree but am stuck at the inductive step. How can I show that the mirror reflection of the left (or right) subtree around the center of the whole tree is the same as moving that subtree from root.right to root.left and inverting it around its own center?
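
One way to see what the inductive step has to establish is to put the specification and the algorithm side by side. The sketch below (my own, in Python with hypothetical names) defines mirror directly from the definition in the first paragraph; the inductive step then amounts to checking that the node returned by invert is Node(root.val, mirror(R), mirror(L)), which is the mirror of the original tree by definition.

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def mirror(t):
    # the specification: swap the sub-trees and mirror each one recursively
    if t is None:
        return None
    return Node(t.val, mirror(t.right), mirror(t.left))

def invert(t):
    # the algorithm from the question, transcribed from the Java version
    if t is None:
        return None
    tmp = t.left
    t.left = invert(t.right)
    t.right = invert(tmp)
    return t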

sequences and series – Changing the order of summation – check for correctness

Assume $|r|<1$; I’m working with

$$
\begin{align*}
A &= \sum_{i=1}^{\infty} \sum_{j=i+1}^{\infty} r^{j-i} \sum_{u=1}^{i}\frac{r^{i-u}}{u} \sum_{v=1}^{j}\frac{r^{j-v}}{v} \\
&= \sum_{u=1}^{\infty} \frac{1}{u} \sum_{v=1}^{\infty} \frac{1}{v} \sum_{i=u}^{\infty} \sum_{j:\, j \geq i+1 \,\&\, j \geq v}^{\infty} r^{j-i}\, r^{i-u}\, r^{j-v} \\
&= \sum_{u=1}^{\infty} \frac{1}{u} \sum_{v=1}^{\infty} \frac{1}{v} \sum_{i=u}^{\infty} \sum_{j=\min\{i,\,v\}}^{\infty} r^{2j-u-v} \\
&= \sum_{u=1}^{\infty} \frac{1}{u} \sum_{v=1}^{\infty} \frac{1}{v} \left( \sum_{i:\, i \geq u \,\&\, i \leq v}^{\infty} \sum_{j=i}^{\infty} r^{2j-u-v} + \sum_{i:\, i \geq u \,\&\, i > v}^{\infty} \sum_{j=v}^{\infty} r^{2j-u-v} \right)
\end{align*}
$$

The initial sum seems to converge according to numerical simulations; however, when I expand further by changing the order of the summations, I keep arriving at divergent or complex values.

Am I missing something during the change of the summation order?
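
For reference, a small truncation check of the original quadruple sum (my own sketch; the value of $r$ and the cut-off $N$ are arbitrary), which can be compared against any rearranged form:

r, N = 0.5, 60
A = sum(r**(j - i)
        * sum(r**(i - u) / u for u in range(1, i + 1))
        * sum(r**(j - v) / v for v in range(1, j + 1))
        for i in range(1, N + 1)
        for j in range(i + 1, N + 1))
print(A)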

selection problem – proving correctness of an algorithm to find the minimum of $f(b) = \sum_i |x_i - b|$

Given a set of $n$ points $\{(x_1, y_1), \dots, (x_n, y_n)\}$, I need to find an algorithm with linear running time that finds the line $x=b$ for which $f(b) = \sum_{i=1}^{n}|x_i - b|$ is minimal.

I wrote the following algorithm:

MINIMAL-SUM-DISTANCE(A)   // A is an array of the points' x values
    return SELECT(A, 0, length(A), floor(length(A) / 2))

where SELECT is a function that finds the $k$-th smallest element of an array in $O(n)$ time (using median of medians).

My problem is that I don’t know how to prove the algorithm’s correctness. Previously I’ve used either a loop invariant or induction to prove correctness, but here a loop invariant is useless, and I don’t see how I can prove it by induction: the assumption that the algorithm is correct for $n$ doesn’t help me prove it is correct for $n + 1$ (at least as far as I can see). There is also the question of whether I need to prove why I chose $\lfloor n / 2 \rfloor$, and if so, how exactly to prove it. Originally I arrived at this solution by writing a simple script that, for groups of 3-20 points with x values from -100 to 100, found the minimum by trying all integers between the minimum and maximum x values.
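
As a quick empirical check (my own sketch, with sorting standing in for the linear-time SELECT from the pseudocode above), the choice $k = \lfloor n/2 \rfloor$ can be compared against a brute-force scan over candidate values of $b$:

def f(xs, b):
    return sum(abs(x - b) for x in xs)

def minimal_sum_distance(xs):
    # the k-th smallest element with k = floor(n / 2); a sort stands in
    # for the O(n) SELECT routine used in the question
    return sorted(xs)[len(xs) // 2]

xs = [-3, 0, 2, 7, 11]
b = minimal_sum_distance(xs)
assert all(f(xs, b) <= f(xs, c) for c in range(min(xs), max(xs) + 1))
print(b, f(xs, b))   # 2 21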