pathfinder 1e – Cursed Item Drawback Price Reductions: Do They Stack?

The price reduction would stack.

Each of those is its own type of curse, and thus reduces the price accordingly.

However, you should note that the cursed item crafting rules state:

Drawbacks and Requirements: Drawbacks and requirements typically don’t reduce the cost of a cursed item in any way (and might increase it). Since the crafter of an intentionally cursed item is setting these requirements, it is expected that she does so with a particular agenda, such as choosing a requirement that doesn’t affect her very much but would make the item painful for her enemies to use should they steal it, or choosing a requirement that she wants someone to perform anyway and then offering the item as a gift.

That said, these curses typically affect the price when selling the cursed items to a merchant. The price may be reduced by 10% for minor drawbacks or requirements such as minimum skill ranks or the worship of a specific deity; by 30% for harmful or costly drawbacks or requirements such as an alignment change, ability damage, sacrificing wealth, or performing a quest to activate the item; or by 50% for severe drawbacks or requirements such as negative levels that cannot be removed or needing to routinely sacrifice sentient creatures to the item.

So a cursed belt of +2 strength would still cost the same to craft as a normal belt of +2 strength, despite selling to merchants for less.

Note that this doesn’t mean a merchant will sell you the item more cheaply either; they may still charge you full price, as the price is only reduced when you sell the item to a merchant.
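Taken together with the stacking ruling above, the sale price works out mechanically like this. A minimal Python sketch, assuming (as argued above) that the percentage reductions from distinct curses stack additively; the tier names and the `sale_price` helper are my own:

```python
# Sale-price reductions from the quoted rules text, one entry per
# severity tier, in whole percent.  The additive-stacking assumption
# follows the answer above; it is not spelled out in the rules.
REDUCTION = {"minor": 10, "harmful": 30, "severe": 50}

def sale_price(base_price, curses):
    """Price a merchant pays for a cursed item with the given curse tiers."""
    total = sum(REDUCTION[c] for c in curses)
    return base_price * max(0, 100 - total) // 100

# A belt of +2 Strength (4,000 gp) with one minor and one harmful curse:
print(sale_price(4000, ["minor", "harmful"]))  # 2400
```

Crafting cost is unaffected, per the quoted rules; only the sell-to-merchant price changes.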

reductions – How can I prove the following problem is NP-complete?

The problem:

I have a list $S=\{s_{1},s_{2},\dotsc,s_{n}\}$ of places. Each unordered pair of places has a cost and a gain: $c_{\{s_{i},s_{j}\}} \in \mathbb{N}$, $g_{\{s_{i},s_{j}\}} \in \mathbb{N}$. These pairs exist only for $i \neq j$.

I have two numbers: $b \in \mathbb{N}$, $p \in \mathbb{N}$.

The problem is to decide whether there exists a combination of pairs such that:

$c_{1}+c_{2}+\dotsb+c_{m} \leqslant b$ and $g_{1}+g_{2}+\dotsb+g_{m} \geqslant p$, where of course $c_{i},g_{i}$ are the cost and gain of the $i$-th pair used.

To show it is NP-complete I have to show a Karp reduction.


These are the problems we’ve studied and that I can use:

Clique, Composite, SAT – Satisfiability, 3SAT, CSAT, IS – Independent Set, VC – Vertex Cover, DHC – Directed Hamilton Cycle, HC – Hamilton Cycle, DS – Dominating Set, SSS – Subset Sum, 3COL, Partition, Bin-Packing.

Since it’s talking about a sum, I thought of using Subset Sum. If I set $p=b=k$, and I set $c_{i}=g_{i}$ for each $i$, then it really is just a sum question: find a combination of elements with sum $k$.

But what complicates it for me is that if I have $n$ elements, I will have $\frac{n^{2}-n}{2}$ pairs, and so that many different costs. How can I rewrite the $n$ elements of Partition into some $k$ elements here such that there are $\frac{k^{2}-k}{2}=n$ pairs?

I’ve thought that maybe I can assign the costs only to the first $n$ pairs and set all the rest to $0$, but then cost and gain must be natural numbers.

Any advice?
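For testing candidate reductions, a brute-force sketch of the target decision problem can help, with the Subset Sum special case described above spelled out. The function names and the `frozenset`-keyed encoding of unordered pairs are my own choices:

```python
from itertools import chain, combinations

def pairs_decision(cost, gain, b, p):
    """Brute-force check: is there a set of pairs with total cost <= b
    and total gain >= p?  cost/gain map frozenset({i, j}) -> N."""
    pairs = list(cost)
    subsets = chain.from_iterable(
        combinations(pairs, r) for r in range(len(pairs) + 1))
    return any(sum(cost[q] for q in s) <= b and sum(gain[q] for q in s) >= p
               for s in subsets)

# Special case from the text: cost == gain and b == p == k turns the
# question into "is there a sub-collection of the pair values summing
# to exactly k", i.e. Subset Sum over the pair values.
vals = {frozenset({0, 1}): 3, frozenset({0, 2}): 5, frozenset({1, 2}): 7}
print(pairs_decision(vals, vals, 8, 8))  # True: 3 + 5 = 8
```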

complexity theory – Consequence of NP-complete, and DP-complete w.r.t. randomized reductions

If a problem is NP-complete with respect to randomized (polynomial-time) reductions, but not with respect to deterministic reductions, then we have P $\neq$ BPP (see Question 2 here and its answer).

Suppose a decision problem is proved to be NP-complete; and it is also proved to be DP-complete with respect to randomized reductions. Does this have any real consequence?

Context
Given a graph $G$, it is coNP-complete to test whether $G$ has exactly one 3-colouring up to swapping of colours (because the “another solution” problem associated with 3-colouring is NP-complete (1)). The same problem is DP-complete with respect to randomized reductions (2).

(1) Dailey, David P., Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete, Discrete Math. 30, 289-293 (1980). ZBL0448.05030.

(2) Barbanchon, Régis, On unique graph 3-colorability and parsimonious reductions in the plane, Theor. Comput. Sci. 319, No. 1-3, 455-482 (2004). ZBL1043.05043.

reductions – Proving B-Min-Cost Strongly Connected Subgraph is NP-Complete

We have a strongly connected directed graph where each edge has a positive integer weight. We are also given a $B \in \mathbb{N}$. Does there exist a strongly connected subgraph whose edge weights sum to at most $B$?

I’m not sure which NP-complete problem to use. I tried Subset Sum, but it takes exponential time to build the graph I had in mind (a yes/no decision tree with edge costs being the numbers in the Subset Sum instance, and a final edge back to the root at the end of every decision sequence). Any ideas? Is there a better NP-complete problem to use?
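For experimenting with candidate reductions on tiny instances, an exponential brute-force reference checker can help. This is only a sketch under my own assumptions: the subgraph must use at least one edge (otherwise a single vertex with no edges is trivially strongly connected at weight 0), and strong connectivity is checked by forward and backward reachability from one node:

```python
from itertools import combinations

def strongly_connected(nodes, edges):
    """Strong connectivity via reachability from one node in the
    graph and in its reverse."""
    if not nodes:
        return False
    def reaches_all(adj):
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen == nodes
    fwd, rev = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    return reaches_all(fwd) and reaches_all(rev)

def has_cheap_scs(weighted_edges, B):
    """Is there a subgraph (on the nodes its edges touch) that is
    strongly connected with total edge weight <= B?  Brute force."""
    edges = list(weighted_edges)
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            if sum(c for _, _, c in sub) > B:
                continue
            sub_nodes = {u for u, _, _ in sub} | {v for _, v, _ in sub}
            if strongly_connected(sub_nodes, [(u, v) for u, v, _ in sub]):
                return True
    return False

# Triangle 0->1->2->0 with weights 1,1,1 plus a costly shortcut 0->2 (10):
E = [(0, 1, 1), (1, 2, 1), (2, 0, 1), (0, 2, 10)]
print(has_cheap_scs(E, 3))  # True (the triangle costs 3)
```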

algorithms – Reductions versus generalizations

In the questions above, we are saying that a special case of any of the problems above is NP-complete. This seems like a misnomer, though. Shouldn’t this be called a specialization rather than a generalization?

No. The problem you are proving NP-complete is a generalization of a known NP-complete problem. E.g. the MAX-SAT example contains Boolean satisfiability; just set $g$ large enough to demand satisfaction of all the clauses.
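That containment can be made concrete with a small brute-force sketch; the DIMACS-style clause encoding and the function names are my own choices:

```python
from itertools import product

def max_sat(clauses, n_vars, g):
    """MAX-SAT decision: is there an assignment satisfying >= g clauses?
    A clause is a tuple of nonzero ints; negative means negated."""
    def satisfied(assign, clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
    return any(sum(satisfied(a, c) for c in clauses) >= g
               for a in product([False, True], repeat=n_vars))

def sat(clauses, n_vars):
    # SAT is the special case g = number of clauses: demand them all.
    return max_sat(clauses, n_vars, len(clauses))

print(sat([(1, 2), (-1,), (-2,)], 2))  # False: no assignment hits all 3
print(sat([(1, 2), (-1,)], 2))         # True: x1=False, x2=True
```

So an algorithm for the general problem (MAX-SAT) immediately decides the special case (SAT), which is why the generalization is NP-hard.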

Another part of my confusion is how these so-called generalizations compare or contrast to reductions. Are we saying that some (special) version of a problem is NP-complete and therefore all the other instances are NP-complete? I thought that we needed to show that all instances of the problem are the NP-complete problem.

No, we’re saying that if an NP-complete problem can be specified using the rules that describe a different problem, without changing those rules, then that different problem is NP-hard; it is also NP-complete if it is in NP.

Regarding the direction of reductions: we reduce from an NP-complete problem (the hard problem) to the problem that we have at hand. For example, in the reduction from 3SAT to Independent Set, we reduce from 3SAT, which we know is NP-complete, to what we have at hand, Independent Set. Did I misunderstand? Are these reductions generalizations? I thought that they were for all instances of Independent Set.

No, you should just think of reductions as reductions. A generalization of an NP-complete problem contains that problem as a special case within the generalized problem. A reduction provides a way to go from an instance of one problem to an instance of another without regard to whether one problem is a special case of another.

Are generalizations reductions?

No, they are an expansion of the original problem that still contains the original problem. E.g. Boolean satisfiability is a generalization of 3-satisfiability because 3SAT is contained within the expanded problem set.

reductions – CFG string length

Can we describe a procedure that takes a CFG $G$ as input and checks whether, for every natural number $n$, there is a word of length $1749n$ in $L(G)$?

I tried several approaches, like taking the intersection with a regular language to filter all strings of length $1749n$ and using the pumping lemma on this, etc., but nothing seems to work so far.

Please help

complexity theory – A question about domains in Karp reductions

A basic question or request for clarification regarding Karp reducibility:

Let $\Sigma^*$ be the set of all finite strings of 0’s and 1’s. Call a subset of $\Sigma^*$ a language. Let $\Pi$ denote the set of all functions from $\Sigma^*$ into $\Sigma^*$ that are computable in polynomial time. According to Karp, a language $L$ is reducible to a language $M$, also denoted $L \leq_K M$, if there is a function $f \in \Pi$ such that $f(x) \in M \Leftrightarrow x \in L$.

For many problems, however, we are interested in the difficulty of determining membership in subsets of domains other than $\Sigma^*$. To address this, Karp briefly discusses encodings: given a domain $D$, there is often a natural “one-one” encoding, $e: D \rightarrow \Sigma^*$. He then says that given a set $T \subset D$, $T$ is recognizable in polynomial time if $e(T) \in \mathcal{P}$. But don’t we, in practice, typically consider $T$ to be recognizable in polynomial time if, for any $x \in D$, we can determine whether $x \in T$ in polynomial time? On the face of it, this doesn’t seem to be the same as Karp’s definition, since there is no guarantee that $e(D) = \Sigma^*$.

Similarly, according to Karp, $T \leq_K U$, where $T \subset D$ and $U \subset D'$, if $e(T) \leq_K e'(U)$, where $e: D \rightarrow \Sigma^*$ and $e': D' \rightarrow \Sigma^*$. However, when we are actually proving $T \leq_K U$ for some real $T$, $U$, $D$, and $D'$, don’t we frequently just define an $f: D \rightarrow D'$ computable in polynomial time, confirm that $f(x) \in D'$ for any $x \in D$, and show that $f(x) \in U \Leftrightarrow x \in T$ for any $x \in D$? Again, this doesn’t seem to be the same thing as Karp’s definition if $e(D) \neq \Sigma^*$.

What am I missing?

reductions – If a solution to Partition is known to exist, can it be found in polynomial time?

In the Partition problem, there is a set of integers, and the goal is to decide whether it can be partitioned into two sets of equal sum. This problem is known to be NP-complete.

Suppose we are given an instance and we know that it admits an equal-sum partition. Can this partition be found in polynomial time (assuming $P \neq NP$)?

Let’s call this problem “GuaranteedPartition”. On one hand, apparently one can prove that GuaranteedPartition is NP-hard, by reduction from Partition: if we could solve GuaranteedPartition with $n$ numbers in $T(n)$ steps, where $T(n)$ is a polynomial function, then we could give it as input any instance of Partition and stop it after $T(n)$ steps: if it returns an equal-sum partition the answer is “yes”, otherwise the answer is “no”.

On the other hand, GuaranteedPartition is apparently in the class TFNP, whose relation to NP is currently not known.

Which of the above arguments (if any) is correct, and what is known about the GuaranteedPartition problem?
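The first argument (run the solver with a step budget, then verify its output) can be sketched schematically in Python. Everything here is illustrative: the “solver” is a brute-force stand-in whose step budget is simulated by counting candidate subsets, so this does not actually run in polynomial time; it only shows the shape of the reduction:

```python
from itertools import combinations

def guaranteed_partition_stub(nums, budget):
    """Stand-in for a hypothetical GuaranteedPartition algorithm, cut
    off after `budget` steps (here: candidate subsets examined).
    Returns an index set with half the total sum, or None if cut off."""
    total = sum(nums)
    steps = 0
    if total % 2 == 0:
        for r in range(len(nums) + 1):
            for sub in combinations(range(len(nums)), r):
                steps += 1
                if steps > budget:
                    return None
                if sum(nums[i] for i in sub) * 2 == total:
                    return set(sub)
    return None

def decide_partition(nums, budget):
    """The argument from the text: feed any Partition instance to the
    solver, stop it after its budget, and answer 'yes' iff it hands
    back a verified equal-sum partition."""
    sub = guaranteed_partition_stub(nums, budget)
    if sub is None:
        return False
    return sum(nums[i] for i in sub) * 2 == sum(nums)  # verify witness

print(decide_partition([3, 1, 1, 2, 2, 1], 2 ** 6))  # True (e.g. {3, 2})
```

The verification step matters: on a “no” instance the solver’s output (or timeout) carries no guarantee, so the decider must check the returned witness rather than trust it.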

lambda calculus – Show that term cons works by showing all beta reductions

I’m new to functional programming.

So the term cons adds an element to the front of a list, where cons ≜ λx.λl.λc.λn. c x (l c n).

How should I go about proving that cons works correctly, using beta reduction, for a sample application? For example, reducing cons 3 (2,1) to (3,2,1). Is there a formula, like for the arithmetic operations in the lambda calculus? I’m a bit confused about how to approach this compared to an arithmetic operation (i.e. addition or multiplication).

Thanks.

functional programming – Lambda Calculus – Show the beta reductions of a multiplication

I’m new to functional programming. Trying to solve the following question:
Show the beta reductions of (3,2,1) times 1. Expected answer is 6.

I have practiced multiplication with simpler equations and I understand them. Where I get confused here is: how do I encode the (3,2,1) list, and how should it look in the multiplication function, where I have to multiply the result from the list by 1? Should I be calculating two numbers at a time?

Is my assumption on how to get started correct?

N(3) ≡ λfx.f(f(fx))

N(2) ≡ λgy.g(gy)

N(1) ≡ λtj.tj

MULT(N(3)N(2)N(1))N(1) ≡ (λnmh.n(mh))((λfx.f(f(fx)))(λgy.g(gy))(λtj.tj))(λtj.tj)

Thanks.
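Not an answer to the notation question, but as a cross-check of the expected result, the whole computation can be mirrored in Python lambdas. The list encoding (a right fold, as in the cons question above) and the helper names are my own choices:

```python
# Church numerals and multiplication, written as Python lambdas.
zero  = lambda f: lambda x: x
one   = lambda f: lambda x: f(x)
two   = lambda f: lambda x: f(f(x))
three = lambda f: lambda x: f(f(f(x)))

MULT = lambda n: lambda m: lambda f: n(m(f))  # λn.λm.λf. n (m f)

# The Church list (3, 2, 1) as a right fold.
nil  = lambda c: lambda n: n
cons = lambda x: lambda l: lambda c: lambda n: c(x)(l(c)(n))
lst  = cons(three)(cons(two)(cons(one)(nil)))

# "(3, 2, 1) times 1": fold MULT over the list with 1 as the base case,
# i.e. MULT 3 (MULT 2 (MULT 1 1)) -- two numbers at a time, as you guessed.
product = lst(MULT)(one)

def to_int(n):
    # Count the f-applications to read a numeral back as a Python int.
    return n(lambda k: k + 1)(0)

print(to_int(product))  # 6
```

The fold is the key idea: a Church list applied to MULT and a base numeral reduces to nested multiplications, and each of those nestings is one of the beta-reduction steps you would write out by hand.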