## communication complexity – Is unary machine code a concept?

Yes, you could express machine code in unary. How do I know? Well, anything that can be represented in binary can be represented in unary. Machine code can be represented in binary (as a binary string of bits); therefore it can be represented in unary (as a unary string).

It wouldn’t be useful or practical to do so. It’s not just less comfortable; it’s much worse than that. Conversion to unary involves an exponential increase in size, so it would be totally infeasible in practice. But in theory, you can imagine it; there’s no barrier in principle.
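To make the blow-up concrete, here is a minimal sketch, assuming the simplest possible encoding: read the binary string as a number $$N$$ and write $$N$$ ones.

```python
def to_unary(bits):
    """Read a binary string as a number N and encode it as a string of N ones."""
    n = int(bits, 2)
    return "1" * n

# An 8-bit word can already need up to 2^8 - 1 = 255 symbols in unary:
print(len(to_unary("11111111")))  # prints 255
```

A $$b$$-bit binary string can thus require up to $$2^b - 1$$ unary symbols, which is exactly the exponential increase mentioned above.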

## image processing – how can I calculate the complexity of my algorithm?

I have developed a system for fruit grading whose steps are segmentation, feature extraction, and classification.

- The dimension of each image is $$n \times n$$.
- The number of images is $$m$$.
- Segmentation then has complexity $$O(m \cdot n^2)$$.
- If $$n \cdot k$$ is the size of the segmented part from which features are extracted, then feature extraction has complexity $$O(m \cdot n \cdot k)$$.
- If $$p$$ is the number of features extracted, then classification has complexity $$O(m \cdot p)$$.

Total complexity: $$O(m(n^2 + n \cdot k + p))$$

I want to ask: for a different segmentation technique, a different number of features, or a different classifier, how does the total complexity change?
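The factor structure of $$O(m(n^2 + n \cdot k + p))$$ can be sketched as nested loops. The function below is a hypothetical stand-in for the asker's pipeline, not their actual code; the stage bodies are trivial placeholders and only the cost of each stage matters.

```python
def grade_fruits(images, k, p):
    """Schematic pipeline over m images of size n x n."""
    results = []
    for img in images:                 # m images
        n = len(img)
        total = 0
        for row in img:                # segmentation: O(n^2) pixel visits
            for pix in row:
                total += pix
        for _ in range(n * k):         # feature extraction: O(n * k) work
            pass
        features = [total] * p         # stand-in for the p extracted features
        results.append(sum(features))  # classification: O(p)
    return results
```

Swapping the segmentation technique, the number of features, or the classifier replaces the corresponding per-image term ($$n^2$$, $$n \cdot k$$, or $$p$$) with the cost of the new stage.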

## algorithms – what will be space complexity for snippet for(i=1 to n) int x=10;?

The space complexity of the code snippet given below:

```c
int *a = malloc(n * sizeof(int));
int sum = 0;
for (i = 0; i < n; i++)
{
    scanf("%d", &a[i]);
    for (j = 0; j < n; j++)
    {
        int x = a[i];
        sum = sum + x;
    }
}
free(a);
```

This is a question from a test.
I answered $$O(n)$$, counting space only for the array and constant space for the rest of the code.
But the given answer is $$O(n^2)$$, and their explanation is: “The array is of size $$n$$, and in the innermost loop we declare a variable $$x$$ every time the loop is executed; this loop is executed $$O(n^2)$$ times, hence the overall space complexity is $$O(n^2)$$.”
Now my doubt is what should be space complexity of

```c
for (i = 1; i <= n; i++)
{
    int x = 10;
}
```

What I thought is that it should be $$O(1)$$, because in each iteration the variable $$x$$ gets destroyed.

## complexity theory – Inhabitation of STLC is in PSPACE

Urzyczyn’s “Inhabitation in Typed Lambda-Calculi (A syntactic approach)” gives a proof that the STLC inhabitation problem is in PSPACE (Section 2, Lemma 1). I don’t understand certain aspects of the proof:

Lemma: There is an alternating polynomial-time algorithm to determine whether a given type $$A$$ is inhabited in a given basis $$\Gamma$$ in the STLC.

Proof. If a type is inhabited, it is inhabited by a term in long normal form.

Question 1: What is a long normal form?

To determine whether there exists a term $$M$$ in long normal form satisfying $$\Gamma \vdash M : A$$, we proceed as follows:

• If $$A = A_1 \to A_2$$, then $$M$$ must be an abstraction $$M = \lambda x : A_1.\ M'$$. Thus, we look for an $$M'$$ satisfying $$\Gamma, x : A_1 \vdash M' : A_2$$.

• If $$A$$ is a type variable, then $$M$$ is an application of a variable to a sequence of terms.

Question 2: I thought there weren’t type variables in the STLC.

We nondeterministically choose a variable $$z$$ declared in $$\Gamma$$ to be of type $$A_1 \to \ldots \to A_n \to A$$. If there is no such variable, we reject. If $$n = 0$$, we accept. If $$n > 0$$, we ask in parallel whether each $$A_i$$ is inhabited in $$\Gamma$$.

Question 3: The actual typing of $$z$$ in $$\Gamma$$ doesn’t matter, right? As long as we consume it and don’t use it again in this step.

This alternating procedure is repeated as long as there are new questions of the form $$\Gamma \vdash\ ? : A$$. We can assume that all types in $$\Gamma$$ are different. At each step of the procedure, the basis $$\Gamma$$ either stays the same or expands. Thus the number of steps does not exceed the squared number of subformulas of the types in $$\Gamma, A$$.

Question 4: why? could someone spell out some steps of the reasoning here?
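For intuition, the alternating search can be simulated by plain recursion. The following is my own sketch, not Urzyczyn’s pseudocode: types are encoded as nested tuples, with `('var', 'a')` for a type variable and `('arrow', A, B)` for $$A \to B$$; since only types matter for inhabitation, the basis is kept as a frozenset of types, and a goal repeated along a branch is cut off (sound, because a derivation that repeats a goal can be shortened).

```python
def inhabited(gamma, a, seen=frozenset()):
    """Is type `a` inhabited in basis `gamma` (a frozenset of types)?"""
    if a[0] == 'arrow':
        # The inhabitant must be an abstraction: add the argument type to the basis.
        return inhabited(gamma | {a[1]}, a[2], seen)
    key = (gamma, a)
    if key in seen:              # repeated goal on this branch: cut off
        return False
    seen = seen | {key}
    for b in gamma:
        # Peel b = B1 -> ... -> Bn -> head.
        args, head = [], b
        while head[0] == 'arrow':
            args.append(head[1])
            head = head[2]
        if head == a and all(inhabited(gamma, bi, seen) for bi in args):
            return True
    return False

A, B = ('var', 'a'), ('var', 'b')
arrow = lambda s, t: ('arrow', s, t)
print(inhabited(frozenset(), arrow(A, A)))                      # identity a -> a: True
print(inhabited(frozenset(), arrow(arrow(arrow(A, B), A), A)))  # Peirce's law: False
```

The recursion mirrors the two cases of the proof: the arrow case extends the basis, and the variable case guesses a $$z$$ whose head type matches the goal and recurses on its argument types.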

## Complexity of finding histogram bins vs convex hull

For a list of $$n$$ 2D points, finding the convex hull vertices takes $$O(n \log n)$$ time, and $$O(n)$$ time if the points are sorted in lexicographic order.

Meanwhile, what is the complexity of finding the histogram bin edges of $$k$$ bins on both axes?

Which one is faster?

## complexity theory – Provide a polynomial time algorithm that decides whether or not the language recognized by some input DFA consists entirely of palindromes

The most straightforward way is the following:

Let $$p(u,v)$$ (“p” for palindrome) be a predicate meaning “every path from $$u$$ to $$v$$ is a palindrome”. We are interested in $$p(S, F)$$ for each starting state $$S$$ and each finishing state $$F$$. To compute it, we need an auxiliary predicate $$c(u,v)$$ (“c” for connected): “there exists a path from state $$u$$ to state $$v$$”. $$c$$ can be computed in $$O(n^3)$$ time using transitive closure.

Let $$E$$ be the set of transitions. Let $$ell(u,v)$$ be the label (symbol) on edge $$u to v$$. Then:

$$p(u,v) = \text{false} \quad \text{if } \exists u', v' :\ (u,u'), (v',v) \in E,\ c(u',v') = \text{true},\ \ell(u,u') \ne \ell(v',v)$$
Simply put, if there is a path $$u \to u' \leadsto v' \to v$$ such that the first and last symbols don’t match, then $$p(u,v)$$ is false.

If such a path doesn’t exist, we can define $$p(u,v)$$ recursively:
$$p(u,v) = \bigwedge_{u', v' :\ (u,u'), (v',v) \in E,\ c(u',v') = \text{true}} p(u', v')$$
I.e. if there is a path $$u \to u' \leadsto v' \to v$$ such that the inner part $$u' \leadsto v'$$ is not a palindrome, then $$p(u,v)$$ is false.

Now, we can write a DFS-like solution. Let $$G$$ be a graph where vertices are pairs of states and edges are as defined by the second equation:
$$(u,v) \to (u',v') \iff (u,u'), (v',v) \in E \text{ and } c(u',v') = \text{true}$$
Intuitively, an edge leads from a problem to a “subproblem”.

Our starting vertices for DFS are $$(S,F)$$ for each starting state $$S$$ and each finishing state $$F$$. We need to check that none of these vertices reaches a “bad” vertex, where $$(u,v)$$ is bad if it fails the condition from the first equation, i.e.:
$$\exists u', v' :\ (u,u'), (v',v) \in E,\ c(u',v') = \text{true},\ \ell(u,u') \ne \ell(v',v)$$
This is a standard use-case for DFS or BFS.
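Here is a direct, unoptimized rendering of the procedure. The representation is my own choice, not from any standard library: the DFA is a set of labeled transitions `(u, symbol, v)` plus sets of start and final states.

```python
from collections import defaultdict

def all_palindromes(states, trans, starts, finals):
    """trans: set of (u, symbol, v). True iff every accepted word is a palindrome."""
    # c(u, v): reflexive-transitive reachability via Floyd-Warshall, O(n^3).
    reach = {(u, v): u == v for u in states for v in states}
    for (u, _, v) in trans:
        reach[(u, v)] = True
    for k in states:
        for i in states:
            for j in states:
                if reach[(i, k)] and reach[(k, j)]:
                    reach[(i, j)] = True

    out_edges = defaultdict(list)  # u -> [(symbol, u')]
    in_edges = defaultdict(list)   # v -> [(symbol, v')] for edges v' -> v
    for (u, s, v) in trans:
        out_edges[u].append((s, v))
        in_edges[v].append((s, u))

    # DFS over pairs (u, v); reject on a "bad" vertex (mismatched first/last symbol).
    stack = [(s, f) for s in starts for f in finals]
    seen = set(stack)
    while stack:
        u, v = stack.pop()
        for (a, u2) in out_edges[u]:
            for (b, v2) in in_edges[v]:
                if reach[(u2, v2)]:
                    if a != b:
                        return False        # bad vertex reached
                    if (u2, v2) not in seen:
                        seen.add((u2, v2))
                        stack.append((u2, v2))
    return True
```

For example, a DFA accepting only “aba” passes, while one accepting “ab” is rejected at the very first pair $$(S, F)$$.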

## time complexity – Transforming multi-tape Turing machines to equivalent single-tape Turing machines

We know that multi-tape Turing machines have the same computational power as single-tape ones. So every $$k$$-tape Turing machine has an equivalent single-tape Turing machine.

Regarding the computability and complexity of such a transformation:

Is there a computable function that receives as input an arbitrary multi-tape Turing machine and returns an equivalent single-tape Turing machine in polynomial time and polynomial space?

## complexity theory – How hard would it be to state P vs. NP in a proof assistant?

Using proof assistants for this purpose is certainly possible in principle, but it would be impractical and not cost-effective. It would require an enormous amount of effort, way more than is warranted.

For example, a case study reporting on an effort to formalize the proof of the Prime Number Theorem (which is much easier to prove than P vs NP) stated that translating a proof written for humans into a format that a proof assistant could verify was tedious and time-consuming. They reported that it took them about a day of effort per page of human-written proof. That’s for a fairly basic theorem where a lot of the basic lemmas and definitions have already been formalized. When we look at recent attempts at proving P vs NP, they typically use a lot of advanced machinery and sophisticated pre-existing results from prior papers. You would first need to formalize and verify all of that machinery (which might come to tens of thousands of pages of research papers). I’ve seen another estimate of one week of effort per page. Regardless of the exact ratio, the effort required to formalize a P vs NP proof would be truly Herculean — I expect it’s more than someone could complete in a lifetime, if there was even anyone who wanted to pay their salary to do that.

You can look at existing libraries of theorems in mathematics and computer science that have been formalized and formally verified: see http://us.metamath.org/ and http://formalmath.org/ and https://www.isa-afp.org/topics.html and http://mizar.org/library/. You might notice that a lot of what is formalized there concerns basic undergraduate material. We’re a far way from formalizing all theorems taught at an undergraduate level, let alone those taught at a graduate level, let alone new research results.

For more background, see https://math.stackexchange.com/q/792010/14578 and https://math.stackexchange.com/q/113316/14578 and https://math.stackexchange.com/q/1767070/14578 and https://math.stackexchange.com/q/2747661/14578 and http://www.ams.org/notices/200811/tx081101370p.pdf.

## np complete – Complexity of Subset Sum where the size of the subset is specified

There are $$\binom{n}{k}$$ $$k$$-subsets of an $$n$$-set, and $$\binom{n}{k} = n (n-1) \dotsm (n-k+1) / k!$$, which is $$O(n^k)$$, as you observe. The brute-force complexity is a bit more (you also need to add up the numbers and check the sum), but that is the ballpark.

This is polynomial for any fixed $$k$$, but not polynomial in $$n$$ if, e.g., $$k = n/2$$ (that is the Partition problem, which is known to be NP-complete).
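The brute force in question is a one-liner over `itertools.combinations`: for fixed $$k$$ it checks $$\binom{n}{k} = O(n^k)$$ subsets, summing each in $$O(k)$$ time.

```python
from itertools import combinations

def k_subset_sum(nums, k, target):
    """Check all C(n, k) size-k subsets: O(n^k * k) for fixed k."""
    return any(sum(c) == target for c in combinations(nums, k))

print(k_subset_sum([1, 2, 3, 4], 2, 7))  # True: 3 + 4
print(k_subset_sum([1, 2, 3, 4], 2, 8))  # False
```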

## Time complexity analysis for factorial sum

For a rough estimate:

Let `t = 0! + 1! + 2! + ... + (n-1)! + n!`;
can we assume $$t = O(n!)$$?

Let `t = n + n(n-1) + n(n-1)(n-2) + ... + n!`;
can we assume $$t = O(n \cdot n!)$$?
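Both bounds hold. For the first sum the last term dominates: $$\sum_{i=0}^{n-1} i! \le n \cdot (n-1)! = n!$$, so $$t \le 2 \cdot n!$$ and $$t = O(n!)$$. For the second, each of the $$n$$ terms is at most $$n!$$, giving $$O(n \cdot n!)$$; in fact $$t = n! \sum_{m=0}^{n-1} 1/m! \le e \cdot n!$$, so it is even $$O(n!)$$. A quick numeric check of both bounds (my own sketch):

```python
from math import factorial

def factorial_sum(n):
    """t = 0! + 1! + ... + n!"""
    return sum(factorial(i) for i in range(n + 1))

def falling_sum(n):
    """t = n + n(n-1) + ... + n!"""
    return sum(factorial(n) // factorial(n - j) for j in range(1, n + 1))

for n in range(2, 12):
    assert factorial_sum(n) <= 2 * factorial(n)  # t = O(n!)
    assert falling_sum(n) <= 3 * factorial(n)    # t <= e * n!, so O(n!) too
```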