## Is this explanation confusing NP-hard and NP-complete?

My notes on P vs NP say the following:

Every problem x in the NP-hard class has the following properties:
– There is no known polynomial-time algorithm for x.
– The only known algorithms take exponential time.
– If you could solve x in polynomial-time, then you could solve them all in polynomial-time.

The last point sounds like it conflates NP-hard with NP-complete. I would appreciate it if someone could clarify this.

## reductions – Is this proof of the PARTITION problem being NP-hard correct?

This is the PARTITION problem:

```
Given a multiset S of positive integers, decide if it can be partitioned into two equal-sum subsets.
```

This is the SUBSET SUM problem:

```
Given a multiset S of integers and an integer T, decide if any subset of S sums to T.
```

The positive variant of `SUBSET SUM` is NP-complete.

`SUBSET SUM` can be reduced to `PARTITION` as follows:

Define `S' = S + {c}`, where `c = 2T - sum(S)`. Then `S'` can be partitioned into two equal-sum subsets iff there is a subset of `S` summing to `T`.

Proof:

Write `S = A + B` for a partition of `S` into two multisets `A` and `B`.

If there is a subset of `S` summing to `T`, then `S` can be partitioned so that either `sum(A) = T` or `sum(B) = T`. Suppose `sum(A) = T`. Let `S' = A + B'`, where `B' = B + {c}`.

```
sum(B') = sum(S) - sum(A) + c
sum(B') = sum(S) - sum(A) + 2T - sum(S)
sum(B') = -sum(A) + 2T
```

By assumption `T = sum(A)`, so `sum(B') = sum(A)`. Hence `S'` can be partitioned into two equal-sum subsets.

If no subset of `S` sums to `T`, then `S'` cannot be partitioned into two equal-sum subsets.

Suppose for contradiction that such a partition exists; write it as `S' = A + B'` with `c ∈ B'`, and let `B = B' - {c}`, so that `S = A + B`.

For the partition to have equal sums we need `sum(A) = sum(B') = sum(B) + c`. What does that force?

```
sum(A) = sum(B) + c
sum(A) = sum(B) + 2T - sum(S)
sum(A) = sum(B) + 2T - (sum(A) + sum(B))
sum(A) = sum(B) + 2T - sum(A) - sum(B)
2sum(A) = 2T
sum(A) = T
```

But `A` is itself a subset of `S` summing to `T`, contradicting the assumption that no subset of `S` sums to `T`.

The case where `c` ends up in `A` is symmetric.
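For sanity-checking the reduction on small instances, here is a Python sketch (function names are mine) of the mapping together with brute-force solvers for both problems:

```python
from itertools import combinations

def subset_sum_to_partition(S, T):
    """Map a SUBSET SUM instance (S, T) to a PARTITION instance S'.

    Adds the padding element c = 2T - sum(S), as in the reduction above.
    (Whether c can be non-positive is worth checking separately.)
    """
    c = 2 * T - sum(S)
    return S + [c]

def has_subset_sum(S, T):
    """Brute-force SUBSET SUM check (exponential, illustrative only)."""
    return any(sum(combo) == T
               for r in range(len(S) + 1)
               for combo in combinations(S, r))

def has_equal_partition(S):
    """Brute-force PARTITION check, for testing small instances only."""
    total = sum(S)
    if total % 2 != 0:
        return False
    return has_subset_sum(S, total // 2)
```

On small random instances, `has_subset_sum(S, T)` and `has_equal_partition(subset_sum_to_partition(S, T))` should always agree if the reduction is correct.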

This is my attempt to prove `PARTITION` is NP-hard. Is this correct? Did I miss something?

## Why is this problem NP-hard?

The knapsack problem with the additional constraint that exactly $$L$$ items must be selected.

Can’t we iterate over $$\binom{n}{L}$$ options (which is polynomial), and for each option check if the constraints are met?
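The enumeration the question describes can be sketched as follows (a hypothetical Python helper; note that $$\binom{n}{L}$$ is polynomial in $$n$$ only when $$L$$ or $$n-L$$ is bounded by a constant, not when $$L$$ is part of the input):

```python
from itertools import combinations

def knapsack_exact_count(values, weights, capacity, L):
    """Brute-force: try every L-item subset, keep the best that fits.

    Checks all C(n, L) subsets; for L around n/2 this is exponential in n.
    """
    n = len(values)
    best = None
    for idx in combinations(range(n), L):
        if sum(weights[i] for i in idx) <= capacity:
            total = sum(values[i] for i in idx)
            if best is None or total > best:
                best = total
    return best  # None if no L-item subset fits the capacity
```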

## complexity theory – Is this variation of Max-Coverage NP-hard?

Setup

An instance of Max-Coverage is typically defined by a collection of $$n$$ sets $$S = \{s_1, s_2, \dots, s_n\}$$ and a budget $$k$$, where the objective is to select a subset $$U \subseteq S$$ such that
$$\big|U\big| \leq k, \quad\text{and}\quad \Big|\bigcup_{s\in U} s\Big| \text{ is maximized}.$$

The variation I am interested in is as follows. Instead of being given a collection of sets, we are given a collection of $$n$$ pairs of sets, $$S = \big\{p_1 = \{s_{1,0}, s_{1,1}\},\ p_2 = \{s_{2,0}, s_{2,1}\},\ \dots,\ p_n = \{s_{n,0}, s_{n,1}\}\big\}$$. Further, instead of selecting $$k$$ sets, we now have to select one set from each pair, say $$U = \{s_{1,i_1}, s_{2,i_2}, \dots, s_{n,i_n}\}$$ with $$i_j \in \{0,1\}$$, such that
$$\Big|\bigcup_{s\in U} s\Big| \text{ is maximized}.$$
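A brute-force baseline for this pairwise variant can be sketched in Python (function names are mine); it simply tries all $$2^n$$ one-set-per-pair choices:

```python
from itertools import product

def best_pair_selection(pairs):
    """Pick one set from each pair so the union is largest.

    pairs: list of (set, set) tuples. Tries all 2^n choices, so this
    is illustrative only; it is not a polynomial-time algorithm.
    """
    best_size, best_choice = -1, None
    for bits in product((0, 1), repeat=len(pairs)):
        union = set().union(*(pairs[i][b] for i, b in enumerate(bits)))
        if len(union) > best_size:
            best_size, best_choice = len(union), bits
    return best_size, best_choice
```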

Questions

1.] Is it NP-hard to optimally select one set from each pair such that their union is maximized?

2.] Now let $$A$$ be the universe of possible set elements, and suppose that for each $$i \leq n$$ we have $$s_{i,0} \cup s_{i,1} = A$$ and $$s_{i,0} \cap s_{i,1} = \emptyset$$ (i.e. each pair partitions the universe). Is the problem still NP-hard under this restriction?

## decision problem – Is it NP-hard to check whether for a $$k$$ there exist both a Cut and a Bisection of value $$k$$?

Input: An undirected, unweighted graph $$G=(V,E)$$.

A cut is defined as a partition $$V = A \,\dot\cup\, B$$.

A bisection is defined as a partition $$V = A \,\dot\cup\, B$$ with $$|A| = |B|$$ if $$|V|$$ is even (or $$|A| = |B| + 1$$ if $$|V|$$ is odd).

We define the value of a cut/bisection $$V = A \,\dot\cup\, B$$ as $$E(A,B)$$, i.e. as the number of edges between the two parts.

Question:

Is it NP-hard to solve the following problem:
Given an integer $$k$$, do there exist both a Cut and a Bisection of value $$k$$?

The problem is in NP, because given a cut and a bisection, we can efficiently check whether both have value $$k$$.
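That membership argument can be made concrete with a small certificate verifier (a Python sketch; the graph and certificate representations are my own assumption):

```python
def cut_value(edges, A):
    """Number of edges with exactly one endpoint in A."""
    A = set(A)
    return sum(1 for u, v in edges if (u in A) != (v in A))

def check_certificate(n, edges, cut_A, bis_A, k):
    """Verify a certificate for the combined problem in polynomial time.

    cut_A, bis_A: one side of a claimed cut and bisection of a graph
    with vertices 0..n-1. bis_A must have floor(n/2) or ceil(n/2)
    vertices, and both partitions must have value exactly k.
    """
    sizes_ok = len(bis_A) in (n // 2, (n + 1) // 2)
    return sizes_ok and cut_value(edges, cut_A) == k and cut_value(edges, bis_A) == k
```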

I’m also wondering whether there are somewhat general techniques that help one decide the NP-hardness of a problem which is more or less two NP-hard problems slapped together by an equality.

Literature:

NP-completeness of Max-Cut: DOI:10.1016/0304-3975(76)90059-1

NP-completeness of Vertex Bisection: CiteSeerX 10.1.1.154.5438

## optimization – Is the problem “find the sequence of $$N$$ numbers between 1 and $$D$$ with least cost” NP-hard?

Consider sequences $$p = (p_1, \dots, p_N)$$ (the order matters) of length $$N$$, where $$p_i \in \{1, \dots, D\}$$ for a fixed $$D$$. Moreover, consider a cost function $$c : \{1,\dots,D\}^N \to \mathbb{R}$$ satisfying $$c(p) \neq c(q)$$ for all distinct $$p, q \in \{1,\dots,D\}^N$$. The problem is to find the $$p \in \{1,\dots,D\}^N$$ with least cost $$c(p)$$. I want to show that this problem is NP-hard.

In my opinion, since all costs are different, there is no other way but to check all $$D^N$$ possible sequences to find the optimum. Is this enough to conclude that the problem is NP-hard? If so, what is the “formal argument” behind this line of reasoning?
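The exhaustive search described above can be sketched as follows (Python, illustrative only; the cost function is treated as a black box):

```python
from itertools import product

def argmin_sequence(N, D, cost):
    """Evaluate cost on every sequence in {1,...,D}^N, return the cheapest.

    Performs D**N cost evaluations, i.e. exponentially many in N.
    Since all costs are assumed distinct, the minimizer is unique.
    """
    return min(product(range(1, D + 1), repeat=N), key=cost)
```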

Alternatively, is there a standard reference which states that a problem like this is NP-hard? Which other NP-hard problem can be reduced to this one?

## optimization – Is this variation of the traveling salesman problem NP-hard?

Consider the following setting. You have $$n$$ cities, and there is a cost to travel from a city $$i$$ to a city $$j$$ given by $$c_{ij} > 0$$, where in general $$c_{ij} \neq c_{ji}$$. Moreover, if you travel to city $$i$$, you are allowed to stay exactly $$d_i > 0$$ days. The problem is:

Find a travel schedule, that is: a number $$m$$ and a sequence of distinct cities $$i_1, \dots, i_m \in \{1, \dots, n\}$$, such that the cost $$\sum_{k=1}^{m-1} c_{i_k, i_{k+1}}$$ is minimal and the whole trip lasts at least $$L$$ days, i.e. $$\sum_{k=1}^{m} d_{i_k} \geq L$$.
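For small instances, the problem can be solved by brute force over all ordered sequences of distinct cities, which is useful as a reference solver when testing candidate reductions. A Python sketch (names and input representation are mine):

```python
from itertools import permutations

def cheapest_schedule(c, d, L):
    """Brute-force the scheduling variant.

    c: dict mapping (i, j) to the travel cost from city i to city j,
    d: list of stay durations, L: required total trip length.
    Tries every ordered sequence of distinct cities; exponential time.
    """
    n = len(d)
    best_cost, best_seq = None, None
    for m in range(1, n + 1):
        for seq in permutations(range(n), m):
            if sum(d[i] for i in seq) < L:
                continue  # trip too short, infeasible
            cost = sum(c[seq[k], seq[k + 1]] for k in range(m - 1))
            if best_cost is None or cost < best_cost:
                best_cost, best_seq = cost, seq
    return best_cost, best_seq
```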

There are many similarities between this problem and the traveling salesman problem. The key differences are the asymmetric costs ($$c_{ij} \neq c_{ji}$$) and that, instead of having to visit every city once, one only needs to visit enough cities to make the trip last long enough.

I understand that to show that this problem is NP-hard, I need to show that another NP-hard problem can be reduced to this one in polynomial time. I suspect that the original traveling salesman problem may work to do so through some clever trick I haven’t found yet. Moreover, since there is plenty of literature on the traveling salesman problem, I suspect a similar problem to this may already be shown to be NP-hard.

My question is: Do you see how to show this problem to be NP-hard? Or, do you recognize this problem to be something standard in some piece of literature you can point me out?

## complexity theory – Is there any NP-hard problem which was proven to be solvable in polynomial time, or at least close to polynomial time?

I know this could be a strange question, but has any algorithm ever been found that computes an NP problem, whether hard or complete, in polynomial time? I know this touches on the “does P = NP” problem, but I was wondering whether there were any particular instances where people came close to it. I’ve also heard about the integer factoring problem being solved on a quantum computer by Shor’s algorithm. Would this count as an algorithm which solved an NP problem in polynomial time? Please do ask if you want me to clarify any part of my question. Thanks in advance.

## algorithms – Show that the problem of finding two subsets, one from each of two sets, whose sums differ by at most a given value is NP-hard

As input, we are given two finite sets of integers X = {x1,…,xm}, Y = {y1,…,yn} ⊆ Z, and a non-negative integer v ≥ 0. The goal is to decide if there are non-empty subsets S ⊆ [m] and T ⊆ [n] such that the two subset sums differ by at most v, i.e. |∑_{i∈S} xi − ∑_{j∈T} yj| ≤ v.

How do I show this problem is NP-complete? I’m quite confused.
What I have so far is to reduce from SUBSET SUM; since the condition only bounds the difference by v, it seems I would need 2v + 1 subset-sum instances to cover all the allowed differences.
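Assuming the condition is that the two subset sums differ by at most v, a brute-force decision procedure looks like this (a Python sketch, names mine), which is handy for checking a reduction on small inputs:

```python
from bisect import bisect_left
from itertools import combinations

def has_close_subsets(X, Y, v):
    """Decide whether there are non-empty S ⊆ X and T ⊆ Y with
    |sum(S) - sum(T)| <= v (the assumed condition).

    Enumerates all non-empty subset sums of X and Y, so it takes
    roughly 2^m + 2^n time; illustrative only.
    """
    sums_x = {sum(s) for r in range(1, len(X) + 1)
              for s in combinations(X, r)}
    sums_y = sorted({sum(t) for r in range(1, len(Y) + 1)
                     for t in combinations(Y, r)})
    for sx in sums_x:
        i = bisect_left(sums_y, sx - v)  # smallest subset sum of Y >= sx - v
        if i < len(sums_y) and sums_y[i] <= sx + v:
            return True
    return False
```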

## algorithms – Is the following problem NP-hard? (or have you seen it before?)

Let’s formulate the decision problem form of this problem, which I’ll call Tree Scheduling (TS):

Given a number $$k$$, and a rooted tree with

• tasks $$t_1, \dots, t_n$$ for vertices, each having some integer duration $$l_i$$ and requiring some resource $$r_i$$ from a set $$S$$ of resources, and
• arcs representing dependencies between tasks,

is there a schedule that satisfies all dependencies, avoids scheduling two jobs that use the same resource at overlapping times, and completes within $$k$$ time units?

This problem is NP-complete even with just two resources, by reduction from Partition (PART).

In PART, we are given a multiset of $$m$$ numbers $$a_1, \dots, a_m$$ having $$\sum_i a_i = T$$, and the task is to determine whether we can partition them into two multisets, each having sum $$T/2$$. Given an instance of PART, we construct an instance of TS with:

1. A root task $$t_r$$ using resource 1, of duration 1;
2. For each $$1 \le i \le m$$, a task $$t_i$$ using resource 1, having duration $$l_i = a_i$$ and on which $$t_r$$ depends;
3. A task $$t_{gap}$$ using resource 2, of duration $$T/2$$ and on which $$t_r$$ depends;
4. A task $$t_{sep}$$ using resource 1, of duration 1 and on which $$t_{gap}$$ depends;
5. $$k=T+2$$.

The idea is that an optimal solution to an instance consisting of just $$t_r$$, $$t_{gap}$$ and $$t_{sep}$$ has a “gap” of length $$T/2$$ in the usage of resource 1 between running $$t_{sep}$$ and $$t_r$$. After adding the tasks $$t_1, \dots, t_m$$, we can complete all tasks in $$T+2$$ time iff we are able to fill up this gap with exactly $$T/2$$ time units’ worth of tasks.
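The construction is mechanical enough to write down directly. A Python sketch (the instance encoding is my own choice; it assumes $$T$$ is even, since an odd $$T$$ makes the PART instance trivially a no-instance):

```python
def part_to_ts(a):
    """Build the Tree Scheduling instance from a PARTITION instance a_1..a_m.

    Tasks are encoded as name -> (duration, resource); deps maps each
    task to the list of tasks it depends on. Returns (tasks, deps, k).
    """
    T = sum(a)  # assumed even; otherwise PART is trivially "no"
    tasks = {"t_r": (1, 1), "t_gap": (T // 2, 2), "t_sep": (1, 1)}
    for i, ai in enumerate(a, start=1):
        tasks[f"t_{i}"] = (ai, 1)
    deps = {"t_r": [f"t_{i}" for i in range(1, len(a) + 1)] + ["t_gap"],
            "t_gap": ["t_sep"]}
    k = T + 2
    return tasks, deps, k
```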