Why does this function return O(n log log n) and not O(n log n)?

Consider the following C function:

int fun1 (int n)
{
    int i, j, k, p, q = 0;

    for (i = 1; i<n; ++i)
    {
        p = 0;

        for (j=n; j>1; j=j/2)
            ++p;

        for (k=1; k<p; k=k*2)
            ++q;
    }
    return q;
}


The question is to decide which of the following most closely approximates the return value of the function fun1:

(A) n^3
(B) n (log n)^2
(C) n log n
(D) n log(log n)

This was the explanation that was given:

int fun1 (int n)
{
    int i, j, k, p, q = 0;

    // This loop runs Θ(n) times

    for (i = 1; i < n; ++i)
    {
        p = 0;

        // This loop runs Θ(Log Log n) time
        for (j=n; j > 1; j=j/2)
            ++p;

        // This loop runs Θ(Log Log n) time
        for (k=1; k < p; k=k*2)
            ++q;

    }
    return q;
}


But I understood that a loop's running time is considered O(log n) when the loop variable is divided or multiplied by a constant:

for (int i = 1; i <=n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}


But the explanation says that the inner loops each take Θ(log log n) time. Can anyone explain to me why this argument is wrong?


3 answers


This question is tricky: there is a difference between the code's execution time and the value it returns.

First, the running time of this loop is O(log n), not O(log log n). I've reprinted it here:

p = 0;
for (j=n; j > 1; j=j/2)
  ++p;


At each iteration, the value of j drops by half. This means that the number of steps required for this loop to terminate is the minimum value of k such that n / 2^k ≤ 1. Solving, we see that k = O(log₂ n).

Note that each iteration of this loop increments the value of p by one. This means that at the end of the loop, p is Θ(log n). Hence, this next loop does indeed run in O(log log n) time:



for (k = 1; k < p; k = k * 2)
    ++q;

The reason is that, by reasoning similar to the previous section, the running time of this loop is Θ(log p), and since p = Θ(log n), this works out to Θ(log log n).

However, the question doesn't ask what the running time is. It asks what the return value is. On each iteration of the outer loop, the value of q that is ultimately returned is incremented Θ(log log n) times, because the innermost loop runs Θ(log log n) iterations. This means that the final value of q is Θ(n log log n). Therefore, although the algorithm runs in O(n log n) time, it returns a value that is Θ(n log log n).

Hope this helps!



The answer is (D), O(n log(log n)). The reasoning is described below:

The first for loop encloses the other two for loops, which are governed by the values of j and k respectively. j is repeatedly halved, starting from n, while it is greater than 1; thus p ends up equal to ⌊log₂ n⌋. Then k doubles until it reaches p, where p = ⌊log₂ n⌋ from the previous loop (⌊x⌋ denotes the largest integer not exceeding x).



So the third loop runs log(log n) times, and q grows by log(log n) on each pass. And since the inner loops sit inside the outer for loop, which runs n times:

Approximate value of q = n * log(log n) = O(n log(log n)).



The only issue I see here is with the second loop: for (j = n; j > 1; j = j / 2). You say in the comments that this loop runs Θ(Log Log n) times.

As far as I can see, this loop runs O(log n) times.

The running times for the first and third loops are correct (O(n) and O(log log n)).

EDIT: I agree with the previous answer. I didn't notice that the question was about the return value, not the running time!

