Homework - Big O Analysis

My homework involves Big-O analysis and I think I'm fine, but I'm not 100% sure. Would some of you kind people take a look and tell me if I'm on the right track?

The assignment is below. For questions 1 and 3, my analysis and answers are on the right, after the // comments. For question 2, my analysis and answers are below each algorithm type.

Thanks in advance for your help! :-)

1. For each of the following program fragments, give a Big-Oh analysis of the running time in terms of N:
    (a) // Fragment (a)
        for ( int i = 0, Sum = 0; i < N; i++ )      // N operations
            for( int j = 0; j < N; j++ )            // N operations
                Sum++;                              // Total: N^2 operations  =>  O(N^2)

    (b) // Fragment (b)
        for( int i = 0, Sum = 0; i < N * N; i++ )   // N^2 operations
        for( int j = 0; j < N; j++ )            // N operations
                Sum++;                              // Total: N^3 operations  =>  O(N^3)

    (c) // Fragment (c)
        for( int i = 0, Sum = 0; i < N; i++ )       // N operations
            for( int j = 0; j < i; j++ )            // N-1 operations
                Sum++;                              // Total: N(N-1) = N^2 - N operations  =>  O(N^2)

    (d) // Fragment (d)
        for( int i = 0, Sum = 0; i < N; i++ )       // N operations
            for( int j = 0; j < N * N; j++ )        // N^2 operations 
                for( int k = 0; k < j; k++ )        // N^2 operations 
                    Sum++;                          // Total: N^5 operations  =>  O(N^5)

2. An algorithm takes 0.5 milliseconds for input size 100. How long will it take for input size 500 if the running time is:
    a. Linear
        0.5 * 5 = 2.5 milliseconds

    b. O(N log N)
        O(N log N) – treat the first N as a constant, so O(N log N) = O(log N)
        Input size 100 = (log 100) + 1 = 2 + 1 = 3 operations
        Input size 500 = (log 500) + 1 = 2.7 + 1 = 3.7 ≈ 4 operations
        Input size 100 runs in 0.5 milliseconds, so input size 500 takes 0.5 * (4/3) ≈ 0.67 milliseconds

    c. Quadratic        
        Input size 100 in quadratic runs 100^2 operations =   10,000 operations
        Input size 500 in quadratic runs 500^2 operations = 250,000 operations = 25 times as many
        Input size of 100 runs in 0.5 milliseconds, so input size of 500 takes 25 * 0.5 = 12.5 milliseconds

    d. Cubic
        Input size 100 in cubic runs 100^3 operations =     1,000,000 operations
        Input size 500 in cubic runs 500^3 operations = 125,000,000 operations = 125 times as many
        Input size of 100 runs in 0.5 milliseconds, so input size of 500 takes 125 * 0.5 = 62.5 milliseconds


3. Find the Big-O for the following:
    (a) f(x) = 2x^3 + x^2 log x             // O(x^3)
    (b) f(x) = x^4 - 34x^3 + x^2 - 20       // O(x^4)
    (c) f(x) = x^3 - 1/log(x)               // O(x^3)

4. Order the following functions by growth rate: (1 is slowest growth rate; 11 is fastest growth rate)
__6_ (a) N
__5_ (b) √N
__7_ (c) N^1.5
__9_ (d) N^2
__4_ (e) N log N
__2_ (f) 2/N        
_11_ (g) 2^N
__3_ (h) 37
_10_ (i) N^3
__1_ (j) 1/ N^2
__8_ (k) N^2 /log N


* My logic in putting (j) and (f) as the slowest is that as N grows, 1/N^2 and 2/N decrease, so their growth rates are negative and therefore slower than the rest, which have positive growth rates (or a zero growth rate in the case of (h), 37). Is that correct?

      



4 answers


I looked at your questions 1 through 3 and everything looks good.

Follow these rules and see for yourself:

1) Multiplicative constants can be omitted. Example: 50n^2 simplifies to n^2.

2) n^a dominates n^b if a > b. Example: n^3 dominates n^2, so n^3 + n^2 + n simplifies to n^3.

3) Any exponential dominates any polynomial. Example: 3^n dominates n^5; 2^n dominates n^2 + 5n + 100.

4) Any polynomial dominates any logarithm. Example: n dominates (log n)^3.

For question 4, use the following as a guide (smallest to largest):

log2 n < n < n log2 n < n^2 < n^3 < 2^n



The answer to (b)'s time calculation is incorrect. You cannot treat one of the N's as a constant: by that logic N log N would become 1 log 1, and log 1 is 0, so the whole thing would be 0.

So the answer comes from comparing 100 log 100 comparison operations with 500 log 500.

On question 4: (b) should be 4 and (a) should be 5. (c), (e), and (k) compete for positions 6, 7, and 8. The positions 1, 2, 3 you gave are correct, and 9, 10, 11 are correct.



I will do some analysis on 6, 7, 8 and let you know.

If you need any clarification on my answer, you can comment on this.



@op Could you please tell me why you considered O(n lg n) = O(lg n)? As far as I understand, your analysis for Q2 part (b) is actually an analysis of an O(lg n) algorithm; for n lg n algorithms you need to account for the n factor on the left.



  • (a) Correct. (b) Correct. (c) Correct: 0 + 1 + 2 + ... + (N - 1) = N(N - 1)/2 = 0.5N^2 - 0.5N = O(N^2).
    (d) Correct (there is a factor of 1/2 as in (c), but the complexity is still O(N^5)).

  • a. Correct.
    b. Let K be the duration of one step (taking logs base 2).
     K * (100 log 100) = 0.5, so K = 7.5 * 10^-4
     K * (500 log 500) = 7.5 * 10^-4 * 500 log 500 = 3.3737 ms

    Alternatively, (500 log 500) / (100 log 100) = 6.7474

     With n = 500 it will be 6.7474 times slower: 6.7474 * 0.5 ms = 3.3737 ms

    c. Correct.
    d. Correct.

  • (a) Correct (b) Correct (c) Correct

  • __5_ (a) N
    __4_ (b) √N
    __7_ (c) N^1.5
    __9_ (d) N^2
    __6_ (e) N log N
    __2_ (f) 2/N
    _11_ (g) 2^N
    __3_ (h) 37
    _10_ (i) N^3
    __1_ (j) 1/N^2
    __8_ (k) N^2/log N

I agree with the positioning of (f) and (j). However, you should be aware that they do not occur "in the wild", because every algorithm has at least one step and therefore cannot beat O(1). See "Are there any O(1/n) algorithms?" for a more detailed explanation.







