data structures – Time complexity of algorithms

The first and the last statements are correct, while the second one is incorrect.

Statement 1

Denote by $T_{min}$ the actual running time of algorithm $A$ in the best case, and by $T_{max}$ its actual running time in the worst case. By the way we chose $T_{min}$ and $T_{max}$, it follows that $T_{min} \le T_{max}$.

From our assumptions, $T_{min} = \Omega(g(n))$, which implies $T_{min} \ge c_1 g(n)$ for some constant $c_1 > 0$ (and all sufficiently large $n$). Also from our assumptions, $T_{max} = O(f(n))$, which implies $T_{max} \le c_2 f(n)$ for some constant $c_2 > 0$.

Combining them together we get:

$c_1 g(n) \le T_{min} \le T_{max} \le c_2 f(n)$, which means that $g(n) \le \frac{c_2}{c_1} f(n)$, and hence $g(n) = O(f(n))$.
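For instance, insertion sort has best-case running time $\Omega(n)$ and worst-case running time $O(n^2)$, and indeed $n = O(n^2)$.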

Statement 2

Consider the following algorithm:

if lst[0] != 0:        # constant-time check
    for x in lst:      # linear scan, reached only when lst[0] != 0
        print(x)

And consider the inputs $I_1 := (0, 1, 2, 3, \dots, n)$ and $I_2 := (1, 2, 3, \dots, n+1)$.
Clearly, the algorithm takes $O(1)$ time on input $I_1$, but $\Omega(n)$ time on input $I_2$. Obviously, $n \neq O(1)$, and thus the statement is incorrect.
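For concreteness, here is a runnable version of the counterexample (the function name and the driver lines are mine):

def algorithm(lst):
    # O(1) when lst[0] == 0, Omega(n) otherwise.
    if lst[0] != 0:
        for x in lst:
            print(x)

n = 5
I1 = list(range(0, n + 1))   # (0, 1, 2, ..., n): first element is 0
I2 = list(range(1, n + 2))   # (1, 2, ..., n+1): first element is nonzero

algorithm(I1)  # prints nothing: the loop is skipped, constant time
algorithm(I2)  # prints n+1 values: the loop runs, linear time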

Statement 3

Repeat the proof of statement 1. Note that also $T_{avg}le T_{max}$ and thus the proof still holds.

time complexity – O(n^2) and O(nlogn) exercise

There is an exercise which says: Al and Bob are arguing about their algorithms. Al claims his O(n log n)-time method is always faster than Bob's O(n^2)-time method. To settle the issue, they perform a set of experiments. To Al's dismay, they find that if n < 100, the O(n^2)-time algorithm runs faster, and only when n >= 100 is the O(n log n)-time one better. Explain how this is possible.

Can someone help me?
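One way to see how this can happen (a sketch with made-up constants, not Al's or Bob's actual code): big-O notation hides constant factors, so an O(n log n) algorithm with a large constant can lose to an O(n^2) algorithm with a small constant for small n. Here the factor 15 is chosen purely so that the crossover lands near n = 100:

import math

def cost_al(n):   # hypothetical O(n log n) cost with a large constant
    return 15 * n * math.log2(n)

def cost_bob(n):  # hypothetical O(n^2) cost with a small constant
    return n * n

for n in [10, 50, 99, 100, 200]:
    faster = "Al" if cost_al(n) < cost_bob(n) else "Bob"
    print(n, round(cost_al(n)), cost_bob(n), faster)

For n = 10, 50, and 99 the quadratic cost is smaller; from n = 100 on, the n log n cost wins, exactly as in the experiment.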

complexity theory – Classes of Functions Closed Under Polynomial Composition – Papadimitriou Exercise 7.4.4

I am studying computational complexity using Papadimitriou's book, “Computational Complexity”.

I am trying to do Problem 7.4.4:

“Let $C$ be a class of functions from nonnegative integers to nonnegative integers. We say that $C$ is closed under left polynomial composition if $f(n) \in C$ implies $p(f(n)) = O(g(n))$ for some $g(n) \in C$, for all polynomials $p(n)$. We say that $C$ is closed under right polynomial composition if $f(n) \in C$ implies $f(p(n)) = O(g(n))$ for some $g(n) \in C$, for all polynomials $p(n)$.

Intuitively, the first closure property implies that the corresponding complexity class is “computational model-independent”, that is, it is robust under reasonable changes in the underlying model of computation (from RAM’s to Turing machines, to multistring Turing machines, etc.) while closure under right polynomial composition suggests closure under reductions (see the next chapter).”

Which of the following classes of functions are closed under left polynomial composition, and which under right polynomial composition?

(a) – $\{n^k : k > 0\}$

(b) – $\{k \cdot n : k > 0\}$

(c) – $\{k^n : k > 0\}$

(d) – $\{2^{n^k} : k > 0\}$

(e) – $\{\log^k n : k > 0\}$

(f) – $\{\log n\}$

After understanding the definition of closed under left/right polynomial composition, I think I was able to solve items (a), (b), (c) and (f). However, I was not able to solve items (d) and (e).

My solution for item (a):

Closed Under Left Polynomial Composition: consider an arbitrary $f(n) \in C$ and an arbitrary polynomial $p(n)$. Then $f(n)$ is of the form $n^{k'}$ for some $k' > 0$, and therefore $p(f(n))$ is a polynomial. Let $k''$ be the degree of the polynomial $p(f(n))$. Take $g(n) = n^{k''} \in C$, and we have $p(f(n)) = O(g(n))$.

Closed Under Right Polynomial Composition: the same reasoning applies, since $f(p(n)) = (p(n))^{k'}$ is again a polynomial.
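As a concrete check of (a) (an example of my own, not from the book): take $f(n) = n^2$ and $p(n) = n^3 + n$. Then $p(f(n)) = n^6 + n^2 = O(n^6)$ with $g(n) = n^6 \in C$, and likewise $f(p(n)) = (n^3 + n)^2 = O(n^6)$.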

My solution for item (b):

Not Closed Under Left Polynomial Composition: consider as a counterexample $f(n) = n \in C$ and $p(n) = n^2$. Then $p(f(n)) = n^2$. Every $g(n) = kn \in C$ satisfies $g(n) = O(n)$. Since $n^2 \neq O(n)$, the claim follows.

Not Closed Under Right Polynomial Composition: the previous counterexample applies, since $f(p(n)) = n^2$ as well.

My solution for item (c):

Closed Under Left Polynomial Composition: consider an arbitrary $f(n) = k_1^n$ and a polynomial $p(n)$. Notice that $p(f(n))$ is a polynomial in $k_1^n$. There exists some $k_2$ such that, for sufficiently large $n$, $p(n) \leq n^{k_2}$, and therefore $p(f(n)) \leq (f(n))^{k_2} = (k_1^n)^{k_2} = (k_1^{k_2})^n$. Therefore, taking $g(n) = (k_1^{k_2})^n \in C$, we obtain $p(f(n)) = O(g(n))$.

Not Closed Under Right Polynomial Composition: consider as a counterexample $f(n) = 2^n$ and $p(n) = n^2$. Then $f(p(n)) = 2^{n^2}$, which is greater than $g(n) = k^n$ for every fixed value of $k$, if $n$ is sufficiently large. Therefore, $f(p(n)) \neq O(g(n))$.
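To spell out the last step: $2^{n^2} / k^n = 2^{n^2 - n \log_2 k} \to \infty$ as $n \to \infty$, since $n^2 - n \log_2 k \to \infty$ for any fixed $k$, so no constant multiple of $k^n$ can bound $2^{n^2}$.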

My solution for (f):

Not Closed Under Left Polynomial Composition: consider as a counterexample $f(n) = \log n$ and $p(n) = n^2$. Then $p(f(n)) = \log^2 n$. Also, $g(n) \in C$ implies that $g(n) = O(\log n)$. We have $\log^2 n \neq O(\log n)$.

Closed Under Right Polynomial Composition: if $f(n) \in C$ then $f(n) = \log n$. Given an arbitrary polynomial $p(n)$, there exists some $k'$ such that, for sufficiently large $n$, $p(n) < n^{k'}$. Then, for sufficiently large $n$:
$f(p(n)) \leq f(n^{k'}) = \log n^{k'} = k' \log n = O(\log n) = O(g(n))$, with $g(n) = \log n \in C$.

Can anyone help me with items (d) and (e)?

Thanks in advance. Of course, corrections and comments on the other items are also welcome.

algorithms – Time complexity of finding median in data stream

I was reading a solution to the problem in the title on LeetCode, and the article says that the time complexity of the following solution is O(n):

  1. set up a data structure to hold the stream values and insert each new element in the right place using linear search or binary search
  2. return the median

My solution is as follows:

import bisect

class MedianFinder:

    def __init__(self):
        """
        Initialize your data structure here.
        """
        self.arr = []  # kept sorted at all times

    def addNum(self, num: int) -> None:
        # Binary search for the insertion point: O(log n).
        idx = bisect.bisect_left(self.arr, num)
        # Inserting shifts the tail of the list: O(n).
        self.arr.insert(idx, num)

    def findMedian(self) -> float:
        n = len(self.arr)
        if n % 2 != 0:
            return self.arr[n // 2]
        else:
            return (self.arr[n // 2 - 1] + self.arr[n // 2]) / 2
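For reference, a quick check of the class (the numbers are arbitrary):

mf = MedianFinder()
for v in [5, 2, 8]:
    mf.addNum(v)
print(mf.findMedian())  # 5, the middle of the sorted list [2, 5, 8]
mf.addNum(1)
print(mf.findMedian())  # 3.5, the average of 2 and 5 in [1, 2, 5, 8]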

My question is about the time complexity of the addNum method. The binary search will take O(log n) to find the index, and the insert will take O(n). But since the method will be called once per stream element, will the overall complexity be O(n^2) or O(n)?


logic – Complexity of pattern matching for modus ponens logical conclusions

Is a Turing machine augmented with the following constant-time operation equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) to a (usual) Turing machine:

By predicates I will mean predicates in first-order predicate calculus. (Note that predicates may have free variables.)

  • constant-time modus-ponens resolution (yes or no), and then adding $y$ to the end of the array if yes, for given predicates $x$ and $y$ and an array (or a linked list) of predicates. By definition of modus ponens, the answer is yes if and only if some element of the array is $X \Rightarrow y$, where $X$ is a pattern matching $x$.

Remark: The above operation is a part of the standard procedure of proof-checking in first-order predicate logic.
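To make the operation concrete, here is a naive (and certainly not constant-time) sketch of it in Python; the nested-tuple representation and the purely structural matcher are my own simplifications, since real first-order matching must also keep substitutions for repeated variables consistent:

def matches(pattern, term):
    # A string starting with '?' is a pattern variable matching any
    # subterm; tuples must match component-wise; everything else must
    # be equal. (Repeated variables are not checked for consistency.)
    if isinstance(pattern, str) and pattern.startswith('?'):
        return True
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        return len(pattern) == len(term) and all(
            matches(p, t) for p, t in zip(pattern, term))
    return pattern == term

def modus_ponens_step(preds, x, y):
    # Scan the array for an implication X => y whose premise X matches x;
    # if found, append y. The scan alone is linear in the array length,
    # which is why a constant-time version of this primitive is suspicious.
    for p in preds:
        if isinstance(p, tuple) and len(p) == 3 and p[0] == 'implies':
            premise, conclusion = p[1], p[2]
            if conclusion == y and matches(premise, x):
                preds.append(y)
                return True
    return False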

If the above hypothesis is false, then what are the upper bounds on the running time of the above operation in different kinds of Turing machine equivalents (such as Turing machines, Markov algorithms, a von Neumann architecture with infinitely many infinitely big words of memory, etc.)?

BTW, is a von Neumann architecture with infinitely many infinitely big words of memory a Turing machine equivalent? (I think yes, but I am not 100% sure.)

complexity theory – Constant-time adding an element?

Is a computer with infinite memory and infinite word size a Turing machine equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) if we allow constant-time linked-list element insertion (at the beginning of the list)?

I doubt this, because element insertion requires memory allocation, and allocation is usually not a constant-time operation.
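For contrast, the list manipulation itself is only a constant number of steps once a fresh cell is available (a minimal sketch; whether the Node construction is O(1) is exactly the allocation question above):

class Node:
    def __init__(self, value, next_node):
        self.value = value
        self.next = next_node

def prepend(head, value):
    # One cell construction and one pointer write; constant time
    # only if allocating the new Node is itself constant time.
    return Node(value, head)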

complexity theory – Show that NL ⊆ P

$\textsf{PATH}$ is in $\textsf{NL}$, because to solve it, you just need to keep in memory the current vertex you are in, and guess (non-deterministically) the next one on the path until you reach your destination. Since you keep the current vertex $v$, numbered from $0$ to $|V| - 1$, you need a memory space corresponding to the binary encoding of $v$, which is at most $1 + \log_2(|V| - 1)$ bits. You also need to keep the potential adjacent vertex of $v$, next in the path.

All in all, a Turing machine solving this problem would only need $O(\log |V|)$ additional memory (the graph, the starting vertex, and the destination vertex of the path you are guessing are not counted in the memory used, because they are part of the input).

$\textsf{PATH}$ is $\textsf{NL}$-hard, because to solve any $\textsf{NL}$ problem, you have to determine whether there exists a sequence of possible transitions from the initial configuration to an accepting configuration in the Turing machine of the problem. If you consider the graph of possible configurations (where there is an edge from a configuration $\alpha$ to a configuration $\beta$ if and only if one can go from $\alpha$ to $\beta$ in one transition of the Turing machine), then solving the $\textsf{NL}$ problem is the same as solving $\textsf{PATH}$ in the graph of possible configurations.

You then need to prove that the graph of configurations can be constructed in logarithmic additional space. This can be done, because if a non-deterministic Turing machine works in space $s(n)$, then the number of possible configurations is $2^{O(s(n))}$. Considering the binary encoding of those configurations, one can determine whether there exists an edge between two configurations in deterministic space $O(s(n))$.

Now, since $\textsf{PATH}$ is solvable in polynomial time (with a graph traversal algorithm, for example), any $\textsf{NL}$ problem is solvable in polynomial time (via the $\textsf{NL}$-completeness of $\textsf{PATH}$), so $\textsf{NL} \subseteq \textsf{P}$. This holds because if a Turing machine uses $s(n)$ memory, then it has at most $2^{O(s(n))}$ configurations, and exploring all of them takes time $2^{O(s(n))}$. Since $s(n) = O(\log n)$ for problems in $\textsf{NL}$, the total time is indeed $n^{O(1)}$.
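To make the "graph traversal algorithm" step concrete, here is a standard polynomial-time breadth-first search for $\textsf{PATH}$ (a generic sketch using an adjacency-list dictionary, not tied to any particular encoding of configurations):

from collections import deque

def path_exists(adj, s, t):
    # Breadth-first search from s: each vertex is enqueued at most once
    # and each edge is examined at most once, so the running time is
    # polynomial in the size of the graph.
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False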