Runtime Complexity of a Power Method

This is my implementation of the power method.

  • If the power is even, I square the base and divide the power by 2.
  • If the power is odd, I call the method recursively with the power reduced by 1 (making it even), and multiply the result by the base once to account for that reduction.

  • The base case is reached when the power is 0, and the result is 1.

My question is: what is the time complexity of this method? Since we divide the power by 2 at each step, is it logarithmic to base 2?

      double myPow(double base, int pow) {
          // Base case: anything raised to the power 0 is 1.
          if (pow == 0)
              return 1;

          // Negative exponent: x^(-n) = (1/x) * (1/x^(n-1)).
          // (Note: pow * -1 overflows for Integer.MIN_VALUE.)
          if (pow < 0)
              return 1 / base * 1 / myPow(base, (pow * -1) - 1);

          // Odd exponent: peel off one factor of the base.
          if (pow % 2 == 1)
              return base * myPow(base, pow - 1);
          // Even exponent: square the base and halve the exponent.
          else
              return myPow(base * base, pow / 2);
      }
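
For reference, here is a quick way to exercise all four branches (a hypothetical test harness I added, assuming myPow is declared static so it can be called from main):

      public static void main(String[] args) {
          System.out.println(myPow(2.0, 10));  // 1024.0 (even path)
          System.out.println(myPow(2.0, 7));   // 128.0  (odd path)
          System.out.println(myPow(2.0, 0));   // 1.0    (base case)
          System.out.println(myPow(2.0, -3));  // 0.125  (negative exponent)
      }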

      





2 answers


Yes, the complexity is logarithmic. Your method is essentially exponentiation by squaring.



The number of squarings equals the number of halvings (one per bit of pow), and the number of extra multiplications equals the number of one bits in the binary value of pow, so the total number of operations is proportional to the number of significant bits of pow, i.e. log2(pow).
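
To see that counting in action, here is a minimal instrumented sketch (my own variant of the method from the question, with a hypothetical ops counter; negative exponents omitted for brevity):

      static int ops = 0;  // counts multiplications performed

      static double myPowCounted(double base, int pow) {
          if (pow == 0)
              return 1;
          if (pow % 2 == 1) {
              ops++;  // one multiplication for each one bit of pow
              return base * myPowCounted(base, pow - 1);
          }
          ops++;      // one squaring for each halving of pow
          return myPowCounted(base * base, pow / 2);
      }

Calling myPowCounted(2.0, 15) leaves ops at 7: four multiplications (15 is 1111 in binary, four one bits) plus three squarings, which is within the bound of 2 * 4 significant bits.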



You are correct that this method has O(log n) complexity. Note that I have not included the base of the log (in this case, base 2): since log_b(n) = log_2(n) / log_2(b), logs of different bases differ only by a constant factor, which big O ignores.

MBo gives a short explanation of why above, but I would like to go into a little more detail.

First of all, we need to decide what the execution time of this method depends on. Notice that no matter what the base is, if the power is the same, the runtime is the same (give or take negligible variation). When the power changes, however, the runtime changes with the number of significant bits of the power, as MBo pointed out.

But what does that mean? Let's look at two cases (read these values in binary; this is pseudocode):



      case 1:
      pow = 1000

      case 2:
      pow = 1111

You will notice that these two cases are the best- and worst-case scenarios for a given number of bits. The first lets us neatly divide by 2 until we hit the base case, while the second forces us to subtract 1 before each division by 2, resulting in twice the number of operations.

Since big O describes the worst-case time complexity, let's look at the worst case in more detail (given that the best case leads neatly to log n time). We have to do the same number of divisions as in the best case, but each one is preceded by a subtraction. That gives 2 * log n operations. However, as noted above, big O does not care about constant factors, so asymptotically we get O(log n) time. In other words, the number of significant bits bounds the number of operations that need to be performed (bits <= ops <= 2 * bits).
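
To make the two cases concrete, here is how the method from the question unrolls in each (my own worked example; b stands for the base):

      // pow = 1000 (binary) = 8: halve all the way down
      myPow(b, 8) -> myPow(b^2, 4) -> myPow(b^4, 2) -> myPow(b^8, 1)
                  -> b^8 * myPow(b^8, 0)              // 3 squarings + 1 multiply

      // pow = 1111 (binary) = 15: subtract 1, then halve, repeatedly
      myPow(b, 15) -> b   * myPow(b, 14)  -> myPow(b^2, 7)
                   -> b^2 * myPow(b^2, 6) -> myPow(b^4, 3)
                   -> b^4 * myPow(b^4, 2) -> myPow(b^8, 1)
                   -> b^8 * myPow(b^8, 0)             // 3 squarings + 4 multiplies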

Hopefully this sheds some light on your question. If you take away only one thing, remember that big O does not care about constants, wherever they appear.







