Complexity help: O(n^2), O(n log n), etc.

Hey, can someone please help me determine the complexity? The example given in my class was

bubble sort

#include <stdio.h>

int main() {
   int a[10] = {10,9,8,7,6,5,4,3,2,1};
   int i, j, temp;

   /* bubble sort: repeatedly swap adjacent elements that are out of order */
   for (j = 0; j < 10; j++) {
      for (i = 0; i < 9; i++) {
         if (a[i] > a[i+1]) {
            temp = a[i];
            a[i] = a[i+1];
            a[i+1] = temp;
         }
      }
   }

   /* print the sorted array */
   for (i = 0; i < 10; i++) {
      printf("%d ", a[i]);
   }
   printf("\n");
   return 0;
}

which has complexity O(n^2), since it has two nested O(n) loops, hence O(n) x O(n).


and they said quicksort has O(n log n) complexity. Why is that?

Is it because it divides the input each time around the loop?

Thanks!

+2




7 replies


There is no quick one-sentence answer. Quicksort is actually O(n^2) in the worst case, but it's O(n log n) on average, so if you try to analyze it, you can't prove that it's always O(n log n). It is only O(n log n) on average, averaged over all possible datasets. For any particular list, it could be worse.

If the pivots end up being really, really bad, then quicksort performs very poorly. This can happen on data that is already sorted, for example, if you always pick a fixed element at the beginning or end of the array as the pivot.
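To make that concrete, here is a minimal sketch (mine, not from the class) of a quicksort that always pivots on the last element; run it on input that is already sorted and every partition is maximally unbalanced:

#include <stdio.h>

/* Lomuto-style partition: pivot on the last element, return its final index */
static int partition(int a[], int lo, int hi) {
   int pivot = a[hi], i = lo, j, temp;
   for (j = lo; j < hi; j++) {
      if (a[j] < pivot) {
         temp = a[i]; a[i] = a[j]; a[j] = temp;
         i++;
      }
   }
   temp = a[i]; a[i] = a[hi]; a[hi] = temp;
   return i;
}

static void quicksort(int a[], int lo, int hi) {
   if (lo < hi) {
      int p = partition(a, lo, hi);
      quicksort(a, lo, p - 1);
      quicksort(a, p + 1, hi);
   }
}

int main() {
   /* already sorted, so a[hi] is always the maximum: every partition
      leaves the right side empty, the recursion depth becomes n, and
      the total work is n + (n-1) + ... + 1 = O(n^2) */
   int a[10] = {1,2,3,4,5,6,7,8,9,10};
   int i;
   quicksort(a, 0, 9);
   for (i = 0; i < 10; i++) printf("%d ", a[i]);
   printf("\n");
   return 0;
}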

Other sorting algorithms, like merge sort and heap sort, are always O(n log n). They have no pathological cases where their performance degrades to O(n^2). That makes them preferable if what you want is consistent, predictable performance at all times. The advantage of quicksort is that it is generally the faster algorithm on average, just not in every case.



Edit: Indeed, as @pst says, merge sort requires O(n) extra space when sorting arrays (scratch space for the merges), which is less than ideal; that is a point against it. Then again, a point against quicksort is that it is unstable: items that compare equal can end up shuffled relative to each other after sorting.
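To make the space point concrete, here is a minimal merge sort sketch of mine with the O(n) scratch buffer made explicit; the <= in the merge step is also what makes it stable, which quicksort is not:

#include <stdio.h>

/* Merge sort on a[lo..hi] using a caller-provided scratch buffer.
   The buffer is the O(n) extra space mentioned above: the merge step
   is not easily done in place, so each merge copies through it. */
static void merge_sort(int a[], int scratch[], int lo, int hi) {
   if (lo >= hi) return;
   int mid = lo + (hi - lo) / 2;
   merge_sort(a, scratch, lo, mid);
   merge_sort(a, scratch, mid + 1, hi);

   int i = lo, j = mid + 1, k = lo;
   while (i <= mid && j <= hi)
      scratch[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps equal items in order */
   while (i <= mid) scratch[k++] = a[i++];
   while (j <= hi)  scratch[k++] = a[j++];
   for (k = lo; k <= hi; k++) a[k] = scratch[k];
}

int main() {
   int a[10] = {10,9,8,7,6,5,4,3,2,1}, i;
   int scratch[10];                      /* the O(n) extra space */
   merge_sort(a, scratch, 0, 9);
   for (i = 0; i < 10; i++) printf("%d ", a[i]);
   printf("\n");
   return 0;
}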

Timsort is a terrific, relatively new sorting algorithm; or rather, a clever mix of existing algorithms, plus a lot of fine tuning and smart optimization (the galloping mode is money). Read its design notes (listsort.txt in the CPython sources) to see some great programming.

+9




Big-O notation simply expresses the relationship between the input size (the number of elements, in your case) and the complexity (time complexity in your case, though there is also space complexity).

You are right about bubble sort. Since it runs one O(n) loop inside another O(n) loop, the time complexity is O(n^2).
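If you want to see that concretely, here is the question's program with a comparison counter bolted on (just an illustration); for n = 10 it prints 90, which grows like n^2:

#include <stdio.h>

int main() {
   int a[10] = {10,9,8,7,6,5,4,3,2,1};
   int i, j, temp;
   long comparisons = 0;

   for (j = 0; j < 10; j++) {
      for (i = 0; i < 9; i++) {
         comparisons++;   /* one comparison per inner-loop step */
         if (a[i] > a[i+1]) {
            temp = a[i]; a[i] = a[i+1]; a[i+1] = temp;
         }
      }
   }
   /* 10 outer passes x 9 inner comparisons = 90 */
   printf("comparisons: %ld\n", comparisons);
   return 0;
}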

Quicksort is slightly different. It makes a number of passes that depends on n, but during each one it manages to put all the values lower than the pivot on the "left" and all the values above the pivot on the "right". Both halves are still unsorted, but you know that all of the left elements are smaller than any of the right elements (let's call that the pivot rule).

This basically halves the workload at each level, resulting in O(log n) levels on average. Similar to binary search or balanced trees, any algorithm that divides its workload by a constant factor on each iteration is O(log n).
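Binary search is the cleanest example of that halving. A minimal sketch (assuming the array is already sorted):

#include <stdio.h>

/* Returns the index of key in sorted array a[0..n-1], or -1 if absent.
   Each iteration halves the search range, so at most about log2(n) probes. */
static int binary_search(const int a[], int n, int key) {
   int lo = 0, hi = n - 1;
   while (lo <= hi) {
      int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) */
      if (a[mid] == key) return mid;
      if (a[mid] < key)  lo = mid + 1;
      else               hi = mid - 1;
   }
   return -1;
}

int main() {
   int a[10] = {1,2,3,4,5,6,7,8,9,10};
   printf("%d\n", binary_search(a, 10, 7));  /* prints 6 */
   return 0;
}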



Combining the two gives you O (n log n).

This Wikipedia page actually has a little animation in the upper right corner that shows quicksort in action. Since a picture is worth a thousand words (and an animation is worth a thousand pictures), you should watch it for a while.

You will see that it first divides the workspace in two, then swaps items between the two halves until the pivot rule is satisfied.

Since the workload is split into two completely separate, independent regions, quicksort lends itself to parallel processing without resource contention. Provided you have enough processors, once you have partitioned the data into two regions, you can hand each region to a different processor for further partitioning. This is not possible with bubble sort, since it never gives you two independent regions.
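A rough sketch of that idea with POSIX threads (compile with -pthread; the fixed depth cutoff and all the names are just for illustration, not a tuned implementation):

#include <pthread.h>
#include <stdio.h>

typedef struct { int *a; int lo; int hi; int depth; } span;

static void par_qsort(int a[], int lo, int hi, int depth);

/* Lomuto partition, as in the other answers: pivot on the last element */
static int partition(int a[], int lo, int hi) {
   int pivot = a[hi], i = lo, j, t;
   for (j = lo; j < hi; j++)
      if (a[j] < pivot) { t = a[i]; a[i] = a[j]; a[j] = t; i++; }
   t = a[i]; a[i] = a[hi]; a[hi] = t;
   return i;
}

static void *sort_task(void *arg) {
   span *s = arg;
   par_qsort(s->a, s->lo, s->hi, s->depth);
   return NULL;
}

/* While depth > 0, hand the left region to a new thread and keep the
   right region on this one. The regions share no elements, so no
   locking is needed. */
static void par_qsort(int a[], int lo, int hi, int depth) {
   if (lo >= hi) return;
   int p = partition(a, lo, hi);
   if (depth > 0) {
      pthread_t th;
      span left = { a, lo, p - 1, depth - 1 };
      pthread_create(&th, NULL, sort_task, &left);
      par_qsort(a, p + 1, hi, depth - 1);
      pthread_join(th, NULL);
   } else {
      par_qsort(a, lo, p - 1, 0);
      par_qsort(a, p + 1, hi, 0);
   }
}

int main() {
   int a[10] = {10,9,8,7,6,5,4,3,2,1}, i;
   par_qsort(a, 0, 9, 2);   /* depth 2 => up to four regions in flight */
   for (i = 0; i < 10; i++) printf("%d ", a[i]);
   printf("\n");
   return 0;
}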

+4




See the analysis on Wikipedia.

+2




In fact, quicksort is O(n log n) on average. In the worst case, you select the largest or smallest element as the pivot each time, and then you do n + (n-1) + ... + 1 = O(n^2) work.

In the best case (and the average case has the same big-O), you do n comparisons for the first partition. That spawns two calls on problems of size n/2, and those calls each do n/2 comparisons to partition, and so on. You end up with n + 2*(n/2) + 4*(n/4) + .... There are log(n) terms in total and each of them is n, so the whole thing is O(n log n).
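Spelled out, the two sums are:

worst case:  n + (n-1) + ... + 1 = n(n+1)/2, which is O(n^2)

best case:   n + 2*(n/2) + 4*(n/4) + ... = n + n + ... + n  (log2(n) terms) = n*log2(n), which is O(n log n)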

As Ton said, you can get the same result by applying the Master theorem, but it is probably worth taking the time to work through some examples.

+2




The previous answers describe quicksort and its running times pretty well, but I want to comment on the worst-case running time.

It is true that quicksort is O(n log n) in the average case (over random permutations of the input) and O(n^2) in the worst case. However, quicksort does not specify which element to pivot around at each step, so the O(n^2) case arises when you pick any fixed element (usually the first or last) as the pivot without considering its relation to the other items in the list.

You can implement quicksort to run in O(n log n) worst case by picking the pivot sensibly. One way is to find the median in O(n) time (see http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_.22Median_of_Medians_algorithm.22 ). If you always partition around the median, any given element is examined at most O(log n) times, so the total running time is O(n log n).
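A sketch of that shape is below. Median-of-medians is a topic of its own, so this stand-in finds the median naively by sorting a scratch copy (O(n log n) per call, so the whole sketch is O(n log^2 n) rather than O(n log n)), but the recursion shape, always splitting at the median, is the same; all the names here are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *x, const void *y) {
   return (*(const int *)x > *(const int *)y) - (*(const int *)x < *(const int *)y);
}

/* Stand-in for linear-time selection: median of a[lo..hi] via a sorted
   copy. Uses the lower median, which for two or more elements is never
   the unique maximum, so the partition below always makes progress. */
static int median_of(const int a[], int lo, int hi) {
   int n = hi - lo + 1, m;
   int *copy = malloc(n * sizeof *copy);
   memcpy(copy, a + lo, n * sizeof *copy);
   qsort(copy, n, sizeof *copy, cmp_int);
   m = copy[(n - 1) / 2];
   free(copy);
   return m;
}

/* Hoare-style partition around a pivot VALUE: returns j such that
   a[lo..j] <= pivot and a[j+1..hi] >= pivot, with lo <= j < hi. */
static int partition_value(int a[], int lo, int hi, int pivot) {
   int i = lo - 1, j = hi + 1, t;
   for (;;) {
      do { i++; } while (a[i] < pivot);
      do { j--; } while (a[j] > pivot);
      if (i >= j) return j;
      t = a[i]; a[i] = a[j]; a[j] = t;
   }
}

static void median_quicksort(int a[], int lo, int hi) {
   if (lo >= hi) return;
   int p = partition_value(a, lo, hi, median_of(a, lo, hi));
   /* pivoting on the median keeps both halves about n/2 in size,
      so the recursion depth is O(log n) for every input */
   median_quicksort(a, lo, p);
   median_quicksort(a, p + 1, hi);
}

int main() {
   int a[10] = {3,1,4,1,5,9,2,6,5,3}, i;
   median_quicksort(a, 0, 9);
   for (i = 0; i < 10; i++) printf("%d ", a[i]);
   printf("\n");
   return 0;
}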

+2




Quicksort is recursive. Just write out the pseudocode, and you can easily derive a recurrence for the running time, then use the Master theorem to get the final answer.
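If you do that for the balanced case, the recurrence and the Master theorem reading of it look like this:

T(n) = 2*T(n/2) + O(n)   (two half-size recursive calls plus a linear partition pass)

With a = 2 and b = 2, n^(log_b a) = n^(log2 2) = n, and f(n) = O(n) matches it; that is case 2 of the Master theorem, so T(n) = O(n log n).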

0




Contrary to what everyone here is assuming, the complexity of your program is O(1). You haven't defined what n is. I know my answer seems a little silly, but in practice it is often much more important to find the bounds of your problem than to find the complexity of the algorithm. If your dataset will never be larger than a given size, you may as well use a simpler method that has acceptable performance and behavior.

-1








