Complexity of the recursive algorithm – Computer Science Stack Exchange

It is difficult for me to understand the time complexity of my solution for the combination sum problem. The problem is as follows:

Given a set of candidate numbers (candidates) (without duplicates) and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.

The same repeated number may be chosen from candidates an unlimited number of times.

Below is my solution in Java with recursion:

public List<List<Integer>> combinationSum(int[] candidates, int target) {
    Arrays.sort(candidates);
    List<List<Integer>> results = new ArrayList<>();
    recurse(results, candidates, target, 0, new ArrayList<>());
    return results;
}

private void recurse(List<List<Integer>> results, int[] candidates, int target, int idx, List<Integer> acc) {
    if (target == 0) {
        results.add(new ArrayList<>(acc));
        return;
    }
    for (int i = idx; i < candidates.length; i++) {
        if (candidates[i] > target) {
            return;
        }
        acc.add(candidates[i]);
        recurse(results, candidates, target - candidates[i], i, acc);
        acc.remove(acc.size() - 1);
    }
}

It can be seen that the problem size may not shrink at each recursive step (a call can recurse with the same index i), and that the depth of the recursion is bounded by the value of target; for example, if the candidates array contains the number 1, the recursion can go target levels deep. If I simplify the code, the interesting part is:

private void recurse(List<List<Integer>> results, int[] candidates, int target, int idx, List<Integer> acc) {
    if (target == 0) {
        results.add(new ArrayList<>(acc));
        return;
    }
    for (int i = idx; i < candidates.length; i++) {
        acc.add(candidates[i]);
        recurse(results, candidates, target - candidates[i], i, acc);
        acc.remove(acc.size() - 1);
    }
}

This feels like O(candidates.length * target) for the most pessimistic input, a candidates array that contains the number 1.

Since my solution is not really a divide-and-conquer algorithm, I probably cannot apply the master theorem. It feels like a backtracking algorithm, but I'm not familiar with finding upper bounds for these kinds of algorithms.
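
The only rough bound I can come up with myself is via the recursion tree, and I am not sure it is tight: every call makes at most candidates.length recursive calls, and each call reduces target by at least min(candidates), so the depth is at most target / min(candidates). That would give something like

number of calls <= candidates.length ^ (target / min(candidates) + 1)

which for a worst case such as candidates = {1} is exponential in target rather than the O(candidates.length * target) I guessed above.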

Can someone please advise how to do the complexity analysis of the code above?

Runtime analysis – How to analyze the runtime for simple recursive algorithms

I've read some of the similar questions, but I'm still not sure how to proceed.

Problem:
Each positive integer can be written as the product of an odd integer and a power of two. For example, PowerOfTwo(40) should return 3, since 40 = 5 * (2 ^ 3).

    def PowerOfTwo(n):
        if n % 2 == 1:
            return 0
        else:
            return 1 + PowerOfTwo(n // 2)

I understand that the first few lines run in constant time and that the recursive part runs until the argument becomes odd. Is there a general "formula" for counting the steps that I could use for this type of algorithm? Any help would be appreciated.
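
My own rough attempt at such a formula (I am not sure this is the standard way to write it) would be to count one unit of work per call:

T(n) = c              if n is odd
T(n) = T(n / 2) + c   if n is even

so the number of recursive calls equals the exponent of 2 in n, which is at most log2(n).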

Python return value suddenly "None" in a simple recursive function

I am a Python student in the early stages of learning.

Today I learned about recursive functions, and to try them out I wrote the following code. The goal was to trigger a final message by checking for a Boolean return value, but the return value keeps coming back as None.

I would like to add that the Boolean trigger works if the input is exactly "0".

For me, the code is as simple as possible, so I'm not sure what causes the None return. It has to be a technical detail that I wasn't exposed to. Any help is appreciated.

def countdown(s):
    # Why doesn't this return True even if it prints blastoff?
    if s <= 0:
        print(s)
        print("Blastoff!")
        return True

    elif s > 0:
        print(s)
        countdown(s-1)


tMinus = int(input("Type a number: "))
if countdown(tMinus):
    print("He's gone, Jim!")

Recursive equation in assembler code

The function F is defined by F(1) = F(2) = F(3) = 1 and, for n ≥ 3,

F(n + 1) = F(n) + F(n - 1) * F(n - 2)

i.e. the (n + 1)-th value is given by the sum of the n-th value and the product of the (n - 1)-th and (n - 2)-th values.

Write an assembler program that calculates the k-th value F(k).

How would I go about answering this question?
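
To pin down what the program has to compute, here is a small high-level sketch of the iteration (Python used purely as pseudocode, since the exercise does not name a target architecture; the three register-like variables are my own choice):

def f(k):
    # F(1) = F(2) = F(3) = 1
    if k <= 3:
        return 1
    a, b, c = 1, 1, 1  # hold F(n - 2), F(n - 1), F(n)
    for _ in range(k - 3):
        # F(n + 1) = F(n) + F(n - 1) * F(n - 2)
        a, b, c = b, c, c + b * a
    return c

print(f(4), f(5), f(6))  # 2 3 5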

json – Recursive Normalization – Code Review Stack Exchange

Let's say we have three models in our database that look like this:

User:

{
  "id": 1,
  "username": "Bob",
  "car_id": 1
}

Car (OneToOne):

{
  "id": 1,
  "model": "Ford Fiesta",
  "user_id": 1
}

Theoretically, if I needed the car's data and denormalized it, I would have to include the user, who would in turn have to include the car again: an endless loop.

How do ORMs deal with it? Do I have to implement custom code to exclude the car from the denormalized user relationship?
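
To make the question more concrete, this is roughly the kind of custom exclusion I imagine writing by hand (a Python-flavoured sketch over plain dicts with made-up helper names, not any particular ORM's API):

users_by_id = {1: {"id": 1, "username": "Bob", "car_id": 1}}

def serialize_car(car, include_user=True):
    data = {"id": car["id"], "model": car["model"]}
    if include_user:
        user = users_by_id[car["user_id"]]
        # embed the user, but without the user's car, to break the cycle
        data["user"] = {"id": user["id"], "username": user["username"]}
    return data

car = {"id": 1, "model": "Ford Fiesta", "user_id": 1}
print(serialize_car(car))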

Functional construction – casual users – attempt to understand how to define a recursive formula for population growth

Clear[x, xx, n, R, seq]

Note that you cannot use x as a variable and as a function. Use xx for the function:

xx[0] = x0;

xx[n_] := xx[n] = R xx[n - 1] (1 - x);

seq = xx /@ Range[0, 10]

(* {x0, R (1 - x) x0, R^2 (1 - x)^2 x0, R^3 (1 - x)^3 x0, R^4 (1 - x)^4 x0, 
 R^5 (1 - x)^5 x0, R^6 (1 - x)^6 x0, R^7 (1 - x)^7 x0, R^8 (1 - x)^8 x0, 
 R^9 (1 - x)^9 x0, R^10 (1 - x)^10 x0} *)

This sequence can be generalized with FindSequenceFunction:

FindSequenceFunction[Rest@seq, n]

(* (R - R x)^n x0 *)

Or the sequence can be created with RecurrenceTable

Clear[x, xx, n, R]

seq == RecurrenceTable[{xx[n] == R xx[n - 1] (1 - x), xx[0] == x0}, 
  xx[n], {n, 0, 10}]

(* True *)

Alternatively, you can find the general solution with RSolve

Clear[x, xx, n, R]

xx[n] /. RSolve[{xx[n] == R xx[n - 1] (1 - x), xx[0] == x0}, xx, n][[1]]

(* (R - R x)^n x0 *)

Recurrence relation and time complexity of the recursive factorial

I'm trying to figure out the time complexity of a recursive factorial algorithm, which can be written as follows:

fact(n)
{
    if (n == 1)
        return 1;
    else
        return n * fact(n - 1);
}

So I write the recurrence relation as

T(n) = n * T(n-1)

which is correct according to this article: recurrence relation of the factorial

And I calculate the time complexity with the substitution method as follows:

T(n) = n * T(n-1)  // Original recurrence relation
= n * (n-1) * T(n-2)
...
= n * (n-1) * ... * 1
= n!

According to this article, however, both the recurrence relation and the time complexity are wrong.
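
For comparison, the other version I keep running into, which counts the cost of each call rather than the value it returns, would look like this (my paraphrase):

T(n) = T(n-1) + c  // one comparison and one multiplication per call
     = T(n-2) + 2c
     ...
     = T(1) + (n-1) * c
     = O(n)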

What am I missing or doing wrong?

Recursion – generic recursive function

Let's take the famous Fibonacci problem as an example, although this is a generic problem. Let's also use Scala as the language because it offers more features. I would like to know which solution you prefer.

We all know that the following implementation is bad because it is not stack-safe:

// bad!
def naiveFib(n: Int): Int = {
  if (n <= 1)
    1
  else
    naiveFib(n - 1) + naiveFib(n - 2)
}

We can improve it and make it tail-recursive and stack-safe:

import scala.annotation.tailrec

def fibStackSafe(n: Int): BigInt = {
  @tailrec
  def internally(cycles: Int, last: BigInt, next: BigInt): BigInt =
    if (cycles > 0)
      internally(cycles - 1, next, last + next)
    else
      next
  internally(n - 2, last = 1, next = 1)
}

This is an acceptable solution. But a solution such as:

def fibByEval(n: Int): Eval[BigInt] = {
  def internally(cycles: Int, last: BigInt, next: BigInt): Eval[BigInt] =
    Eval.always(cycles > 0).flatMap {
      case true =>
        internally(cycles - 1, next, last + next)
      case false =>
        Eval.now(next)
    }
  internally(n - 2, last = 1, next = 1)
}

uses the cats Eval / Monix Coeval (there is a small difference between the two; you are welcome to comment on your preference), which by definition guarantee stack safety. An Eval-based function still lacks heap safety and can give you out-of-memory errors, but it buys you a few other things.

Here is an fs2 stream-based experiment that uses the concepts of lazy evaluation:

def fibByZip(n: Int): Int = {
  def inner: Stream[Pure, Int] = Stream(0) ++ Stream(1) ++ (inner zip inner.tail).map { t => t._1 + t._2 }
  inner.drop(n).take(1).covary[IO].compile.lastOrError.unsafeRunSync
}

This comes with the feature / overhead of "caching" the intermediate results internally.

Here's a more (or less, depending on your perspective) readable version of the above:

def fibByScan(n: Int): Int = {
  def inner: Stream[Pure, Int] = Stream(0) ++ inner.scan(1)(_ + _)
  inner.drop(n).take(1).covary[IO].compile.lastOrError.unsafeRunSync
}

and finally, here is an effectful approach that uses cats-effect Ref:

def fibStream(n: Int): IO[Int] = {
  val internal = {
    def getNextAndUpdateRefs(twoBefore: Ref[IO, Int], oneBefore: Ref[IO, Int]): IO[Int] = for {
      last <- oneBefore.get
      lastLast <- twoBefore.get
      _ <- twoBefore.set(last)
      result = last + lastLast
      _ <- oneBefore.set(result)
    } yield result

    for {
      twoBefore <- Stream.eval(Ref.of[IO, Int](1))
      oneBefore <- Stream.eval(Ref.of[IO, Int](1))
      _ <- Stream.emits(Range(0, n - 2))
      res <- Stream.eval(getNextAndUpdateRefs(twoBefore, oneBefore))
    } yield res
  }

  internal.take(n).compile.lastOrError
}

It's not pure, but thread safe and fairly readable.

The code is just a starting point; it will change in the future and will need to be maintained.

  • The model may change, so that you need the last three elements instead of the last two.

  • You just want to debug it and / or reason about it.

  • You want to test it easily and efficiently.

  • The model may become numeric rather than exact, so you may want to do more than just test it.

  • The model's calculation may become asynchronous.

  • Parallelism may have to be introduced.

I guess we should stay agile and not try to be too future-proof, but I'm interested in knowing which option people would choose, considering the following criteria:

  • readability
  • conciseness
  • maintainability and flexibility
  • performance

Feel free to mention other criteria that I may be ignoring here.