Time Complexity – Big Theta proof cannot be completed

Prove this with the definition of the Θ notation

$(3n+13)(7n+2)\left(\log\left(1024n^{2}+100\right)\right) \in \Theta\left(n^{2}\log n\right)$

I also found a useful example on Stack Exchange:

Big theta proof for the polynomial function

But my example has a $\log$ function. These are my steps:

Obviously,

\begin{equation}
g(n) = n^{2}\log n, \quad f(n) = (3n+13)(7n+2)\left(\log\left(1024n^{2}+100\right)\right)
\end{equation}

Then we can plug into the definition of big-Theta:

\begin{equation}
0 \leqslant c_{1} n^{2}\log n \leqslant (3n+13)(7n+2)\left(\log\left(1024n^{2}+100\right)\right) \leqslant c_{2} n^{2}\log n
\end{equation}

Dividing the inequality by the highest-order term, $n^{2}\log n$, we get

\begin{equation}
0 \leqslant c_{1} \leqslant \left(21+\frac{97}{n}+\frac{26}{n^{2}}\right)\log_{n}\left(1024n^{2}+100\right) \leqslant c_{2}
\end{equation}

(I see that $n \neq 1$ is required for $\log_{n}$ to make sense, so I choose $n \geqslant 2$.)

By calculating the limit

\begin{equation}
\lim_{n \rightarrow \infty}\left(21+\frac{97}{n}+\frac{26}{n^{2}}\right)\log_{n}\left(1024n^{2}+100\right) = 42
\end{equation}
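To spell out why the limit is 42: the polynomial factor tends to 21, and

\begin{equation}
\log_{n}\left(1024n^{2}+100\right) = \frac{\log\left(1024n^{2}+100\right)}{\log n} = \frac{\log 1024 + \log\left(n^{2}+\frac{100}{1024}\right)}{\log n} \rightarrow 2 \quad (n \rightarrow \infty),
\end{equation}

so the product tends to $21 \cdot 2 = 42$.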

we know $c_{2} = 42$ for $n \geqslant 2$; then I should choose a constant less than 42, which should satisfy the LHS.

I choose $c_{1} = 41$; for $n$ I also choose $2$, which satisfies the LHS.

So the constants that prove

$(3n+13)(7n+2)\left(\log\left(1024n^{2}+100\right)\right) \in \Theta\left(n^{2}\log n\right)$

are $c_{1} = 41$, $c_{2} = 42$, $n \geqslant 2$.
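As a quick numerical sanity check (a small Python sketch I am adding; I use the natural log, since the base only changes constant factors), one can tabulate $f(n)/g(n)$:

    import math

    def f(n):
        return (3 * n + 13) * (7 * n + 2) * math.log(1024 * n ** 2 + 100)

    def g(n):
        return n ** 2 * math.log(n)

    # The ratio should approach 42 for large n; checking small n shows
    # whether the chosen c_1 and c_2 actually bracket it on n >= 2.
    for n in [2, 10, 100, 10_000, 1_000_000]:
        print(n, f(n) / g(n))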

Are my steps and answer right or wrong? Please explain or correct my mistakes.

Machine Learning – Sample complexity of mean estimation with the empirical estimator and the averaging estimator?

For a random variable $X$ with unknown mean $\mu$ and variance $\sigma^{2}$, we want to construct an estimate $\hat{\mu}$ based on $n$ i.i.d. samples of $X$ such that $\lvert \hat{\mu} - \mu \rvert \leq \epsilon\sigma$ with probability at least $1 - \delta$.

Empirical estimator: Why are $O(\epsilon^{-2} \cdot \delta^{-1})$ samples sufficient? Why are $\Omega(\epsilon^{-2} \cdot \delta^{-1})$ samples necessary?

Averaging estimator: Why are $O(\epsilon^{-2} \cdot \log\frac{1}{\delta})$ samples sufficient?
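For intuition on the empirical estimator's sufficiency direction: Chebyshev's inequality gives $P(\lvert \hat{\mu} - \mu \rvert \geq \epsilon\sigma) \leq \frac{1}{n\epsilon^{2}}$, so $n \geq \epsilon^{-2}\delta^{-1}$ samples suffice. A minimal simulation sketch (my addition; the Gaussian choice of $X$ is an arbitrary assumption for illustration):

    import math
    import random
    import statistics

    # Chebyshev: P(|mu_hat - mu| >= eps*sigma) <= (sigma^2/n) / (eps*sigma)^2
    # = 1/(n*eps^2), so n >= 1/(eps^2 * delta) samples suffice.
    eps, delta = 0.5, 0.05
    n = math.ceil(1 / (eps ** 2 * delta))  # 80 samples for these values

    mu, sigma = 3.0, 2.0  # assumed Gaussian X, for illustration only
    trials, failures = 2000, 0
    for _ in range(trials):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        if abs(statistics.fmean(samples) - mu) > eps * sigma:
            failures += 1
    print(failures / trials, "should be at most", delta)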

Complexity Theory – How to prove NP-hardness from the ground up?

I'm working on a problem whose complexity is unknown.
Due to the nature of the problem I cannot use long edges, so 3SAT and its variants are almost impossible to use.

Finally, I chose the most primitive method: Turing machines.

Strangely, I could not find an example of an NP-hardness reduction carried out directly by modeling the problem as a language. It turned out that a deterministic Turing machine cannot decide whether a given instance belongs to that language (I might have misunderstood the terminology here).

Assuming there is no existing NP-hard problem suitable for a reduction, how can one prove that a problem is NP-hard? Are there any publications that do this?

Complexity Theory – Interesting analogy for why $P \neq NP$, with a labyrinth

I am currently taking an algorithms course and have just come across NP-completeness in terms of computational complexity. Apparently $P \neq NP$ is a topic that has been heavily explored without a proof to date. In class we were asked to discuss ideas related to the topic.

I answered a classmate and thought of an interesting possible reason. I am not naively trying to say that I have found an answer to $P \neq NP$. To keep it down to earth, I just find it too exciting to think about the topic, and I would like to share my thoughts with you in the hope of getting insight into whether they can be expanded or should simply be let go.

So the classmate mentioned the case of gerrymandering, which was tackled using graph theory, and the problem itself is NP-hard. I thought that these kinds of graph theory applications are intuitively quite difficult to solve and fit into the NP-complete / NP-hard category.

The analogy I mentioned was running through a labyrinth trying to get from point A to point B. It's kind of intuitive that, given the random sequence of turns and dead ends, there's no algorithmic shortcut through the entire labyrinth. It's like trying to cross a boundary between space and time when the two are closely connected, as in the example of the labyrinth.

Even more abstractly, it is impossible to move from the "space" at the beginning of the maze to the "space" at the end of the maze without passing through a significant amount of "time". Since space-time is bound together, it would be physically impossible to traverse the entire maze in polynomial time.

If you have really read the whole thing, thanks! If you can tell me anything, thank you too.

Complexity Theory – Convex quadratic approximation to binary linear programming

Munapo (2016, American Journal of Operations Research, http://dx.doi.org/10.4236/ajor.2016.61001) provides a proof that binary linear programming is solvable in polynomial time and therefore P = NP.

Unsurprisingly, it does not really show that.

His results are based on a convex quadratic approximation to the problem with a penalty term whose weight $\mathcal{l}$ must become infinitely large for the approximation to match the true problem.

My questions are the following:

  1. Is this an approximation that already existed in the literature (I suspect it did)?
  2. Is this approach useful in practice? For example, could one solve a mixed-integer linear programming problem by homotopy continuation, gradually increasing the weight $\mathcal{l}$? (See the sketch below.)

Note: After writing this question, I discovered this related question: time complexity of binary linear programming. The related question considers a specific binary linear programming problem, but mentions the paper above.
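Regarding question 2, here is a minimal sketch of what such a homotopy loop could look like (my illustration on a made-up instance, using a generic $x(1-x)$ penalty, not the paper's specific convex construction):

    import numpy as np
    from scipy.optimize import LinearConstraint, minimize

    # Hypothetical toy instance: minimize c@x subject to A@x <= b, x in {0,1}^3.
    c = np.array([1.0, -2.0, 3.0])
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([2.0])

    lin = LinearConstraint(A, -np.inf, b)
    bounds = [(0.0, 1.0)] * len(c)
    x = np.full(len(c), 0.5)  # start from the relaxed interior

    # Homotopy: re-solve while gradually increasing the penalty weight.
    # The penalty x*(1-x) vanishes exactly at binary points but is concave,
    # so this only illustrates the generic penalty-homotopy idea.
    for weight in [1.0, 10.0, 100.0, 1000.0]:
        res = minimize(lambda x, w=weight: c @ x + w * np.sum(x * (1.0 - x)),
                       x, bounds=bounds, constraints=[lin])
        x = res.x

    print(np.round(x), c @ np.round(x))  # hopefully a good binary point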

Complexity Theory – Polynomial Time Algorithm for Vertex Cover

I am aware that the vertex cover problem is NP-complete, and I have read the reduction from the clique problem.
I've written an algorithm that determines the minimum vertex cover of a graph in polynomial time. Could someone explain what is wrong in my thinking?

I attached a picture of the algorithm and an example.
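Since the algorithm itself is only in the picture, here is a small brute-force baseline (my sketch, not the algorithm from the picture) that a candidate polynomial-time algorithm can be checked against on small graphs:

    from itertools import combinations

    def is_vertex_cover(edges, cover):
        # A cover must touch at least one endpoint of every edge.
        return all(u in cover or v in cover for u, v in edges)

    def min_vertex_cover_bruteforce(vertices, edges):
        # Exponential-time exact baseline, fine for small test graphs.
        for size in range(len(vertices) + 1):
            for cover in combinations(vertices, size):
                if is_vertex_cover(edges, set(cover)):
                    return set(cover)
        return set(vertices)

    # Toy graph: a 5-cycle, whose minimum vertex cover has size 3.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(min_vertex_cover_bruteforce(range(5), edges))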

Time Complexity – Multiset variant of the subset sum problem: known algorithms

I have been working on the time analysis for a solver I designed for the subset sum problem (multiset variant), whose time complexity depends on the number of repeated elements in the input.

The time complexity is $O\left(\left(2^{1/2} \cdot 0.75^{\frac{d/2}{n}}\right)^{n}\right)$, where $d$ is the number of duplicates in the input instance (both $n$ and $d$ are required to be even).

For example, when $d = n/2$:

$O\left(\left(2^{1/2} \cdot 0.75^{\frac{n/4}{n}}\right)^{n}\right) \approx O\left((1.4142 \cdot 0.93)^{n}\right) \approx O\left(1.316^{n}\right)$

Besides asking for comments, I'm also looking for other well-known algorithms with similar behavior, to compare approaches (I have searched, but so far found nothing …).
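One classic point of comparison with the same $2^{n/2}$ flavor is Horowitz–Sahni meet-in-the-middle; a minimal Python sketch (my addition, not the solver described above):

    from bisect import bisect_left
    from itertools import combinations

    def half_sums(items):
        # All distinct subset sums of one half; deduplication helps
        # precisely when the input is a multiset with repeats.
        sums = {0}
        for r in range(1, len(items) + 1):
            for comb in combinations(items, r):
                sums.add(sum(comb))
        return sorted(sums)

    def meet_in_the_middle(nums, target):
        # Horowitz–Sahni: O(2^{n/2}) time up to polynomial factors.
        half = len(nums) // 2
        left, right = half_sums(nums[:half]), half_sums(nums[half:])
        for s in left:
            i = bisect_left(right, target - s)
            if i < len(right) and right[i] == target - s:
                return True
        return False

    print(meet_in_the_middle([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5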

Algorithms – Impossible recurrence relation: help with the master theorem and complexity evaluation

I had an exercise in my exam to solve a recurrence relation. I think it was a trick question, but I'm not 100% sure.

The recurrence was $T(n) = 2 \cdot T(n) + \sqrt{n} + 42$. We were explicitly asked to solve it with the master theorem, and I wrote that it cannot be solved with the master theorem, since the theorem requires $b > 1$.
Do you think that's right?
Do you think that's right?

After that I had to evaluate a small piece of code; after looking at it, I decided that it was $O(n^3)$:

    def constantFunction(x):
        # a function that executes in constant time
        pass

    n = 20  # example input size
    for i in range(1, n):
        j = i + 1
        while j / n <= n:      # j/n <= n means j runs up to n^2
            k = 1
            while k <= n:      # k advances in steps of 3
                constantFunction(4)
                k = k + 3
            j = j + 1

In my opinion, the outer loop goes from 1 to n, and the condition while (j / n <= n) lets j keep growing as $n \rightarrow \infty$, which causes the inner while to execute until k > n. k grows quickly, and I asked myself whether that loop even stops; I mean, as n becomes large, k grows, but not as fast as n, so overall it will be $O(n^3)$. Am I right?
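One way to settle it empirically is to count the constant-time calls directly (a small sketch I am adding; the loops mirror the code above):

    def count_ops(n):
        ops = 0
        for i in range(1, n):
            j = i + 1
            while j / n <= n:
                k = 1
                while k <= n:
                    ops += 1  # stands in for constantFunction(4)
                    k = k + 3
                j = j + 1
        return ops

    # Compare the count against candidate growth rates; whichever ratio
    # levels off to a constant indicates the true order.
    for n in [10, 20, 40]:
        print(n, count_ops(n) / n ** 3, count_ops(n) / n ** 4)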

Thanks!

Time Complexity – In the right-rotation case for red-black trees there is a less efficient way to recolor. Why is it no longer O(log(n))?

So the first time I tried to write the insertion recoloring from memory, I recolored on the right side. Of course, the left side's recoloring is more efficient, since the loop terminates there. In the right case, we check whether the grandparent is the root (whose color is black); otherwise the loop continues from that node. I've read that this makes the recoloring no longer O(log(n)). Why is that? In the worst case it still seems to be O(log(2n)), even though the number of rotations performed is no longer O(1). (Image: red-black coloring alternatives)