Java - Efficiency comparison of two O(n) algorithms

I'm studying linked lists and came across this exercise: write a function to print the middle node of a given linked list (assume the LL has an odd number of nodes).

Method 1 - Traverse the LL and count the number of nodes with a counter. Add 1 (to make it even) and divide the counter by 2 (ignore minor math discrepancies). Then traverse the LL again, but this time only up to the counter-th node, and return it.

void GetMiddleTermMethod1(){
    //Count the number of nodes
    int counter = 0;
    Node n = FirstNode;
    while (n.next != null){
        counter = counter + 1;
        n = n.next;
    }
    counter = counter + 1/2;
    //now counter is half of the number of nodes

    //now a loop to return the nth term of the LL
    Node temp = FirstNode;
    for(int i = 2; i <= counter; i++){
        temp = temp.next;
    }
    System.out.println(temp.data);
}


Method 2 - Initialize two node references. One advances two nodes at a time, the other one node at a time. When the fast reference reaches the end of the LL (null), the slow one is at the middle node; return it.

void GetMiddleTermMethod2(){
    Node n = FirstNode;
    Node mid = FirstNode;
    while(n.next != null){
        n = n.next.next;
        mid = mid.next;
    }
    System.out.println(mid.next.data);
}


I have 3 questions -

Q1 - How can I tell which algorithm is more efficient if I'm asked this question in an interview? I mean, both functions traverse the LL one and a half times (the second does it in one pass instead of two, but it still visits one and a half lists' worth of nodes)...

Q2 - Since both algorithms are O(n), which factors decide which one is more efficient?

Q3 - What is the general method for evaluating the efficiency of such algorithms? I would really appreciate it if you could link me to a suitable tutorial...

Thanks





2 answers


Well, there is no simple answer to that; the result may differ with compiler optimizations, JIT optimizations, and the actual machine running the program (which, for whatever reason, may favor one algorithm over the other).

The truth is that, apart from the theoretical big-O notation that gives us asymptotic behavior, there is rarely a "pure, theoretical" way to establish that Algorithm A is faster than Algorithm B under conditions (1), (2), ..., (k).



However, this does not mean you cannot do anything: you can benchmark the code by generating various random data sets and timing how long each algorithm takes on them. It is very important to do this more than once. How much more? Until you reach statistical significance in a statistical test, for example the Wilcoxon signed-rank test.
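As an illustration of that advice, here is a minimal benchmarking sketch. The class, the `Node` structure, and the list size are assumptions made for this example (they loosely mirror the question's code); a serious benchmark would use a harness such as JMH, but repeated timed runs already show the JIT warm-up effect.

```java
import java.util.Random;

// Minimal benchmark sketch: build a random singly linked list and time
// the fast/slow-pointer middle search several times. All names here are
// assumptions for the example, not part of the original post.
public class MiddleBenchmark {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    static Node buildRandomList(int size, Random rnd) {
        Node head = new Node(rnd.nextInt(1000));
        Node tail = head;
        for (int i = 1; i < size; i++) {
            tail.next = new Node(rnd.nextInt(1000));
            tail = tail.next;
        }
        return head;
    }

    // Fast/slow-pointer middle search (Method 2 with the printing bug
    // fixed; the extra guard also tolerates even-length lists).
    static int middleFastSlow(Node first) {
        Node fast = first, slow = first;
        while (fast.next != null && fast.next.next != null) {
            fast = fast.next.next;
            slow = slow.next;
        }
        return slow.data;
    }

    public static void main(String[] args) {
        Node head = buildRandomList(1_000_001, new Random(42)); // odd length
        // Time the same search several times: a single run is dominated
        // by JIT warm-up and measurement noise.
        for (int run = 0; run < 5; run++) {
            long t0 = System.nanoTime();
            int mid = middleFastSlow(head);
            long t1 = System.nanoTime();
            System.out.println("run " + run + ": middle=" + mid
                    + " in " + (t1 - t0) / 1_000_000 + " ms");
        }
    }
}
```

Typically the first run is noticeably slower than the later ones, which is exactly why one-shot timings are not trustworthy.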

Also, in many cases a negligible performance gain is not worth the time spent optimizing the code, and it is even worse if it makes the code less readable - and therefore harder to maintain.





I just implemented your solution in Java and tested it on a LinkedList of 1,111,111 random integers up to 1000. The results are very similar:

Method 1:

time: 162ms

Method 2:

time: 171ms



Also, I would like to point out that you have two main flaws in your methods:

Method 1:

Change counter = counter + 1/2;

to counter = (counter + 1) / 2;

otherwise you will walk to the end of the list, since 1/2 is 0 in integer division and counter just stays counter :)

Method 2:

Change System.out.println(mid.next.data);

to System.out.println(mid.data);
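Putting both fixes together, a self-contained corrected version might look like the sketch below (class and field names are assumptions mirroring the question's code). Note one more detail beyond the two fixes above: the counting loop in Method 1 runs while n.next != null, so it counts N-1 nodes instead of N; counting every node, including the last, removes the resulting off-by-one.

```java
// Corrected versions of both methods. Both assume a non-empty list
// with an odd number of nodes, as in the question. Names (MiddleOfList,
// firstNode) are assumptions for this sketch.
public class MiddleOfList {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node firstNode;

    int getMiddleTermMethod1() {
        int counter = 0;
        for (Node n = firstNode; n != null; n = n.next) {
            counter++;                 // counts every node, including the last
        }
        counter = (counter + 1) / 2;   // parenthesized, as suggested above

        Node temp = firstNode;
        for (int i = 2; i <= counter; i++) {
            temp = temp.next;
        }
        return temp.data;
    }

    int getMiddleTermMethod2() {
        Node n = firstNode;
        Node mid = firstNode;
        while (n.next != null && n.next.next != null) {
            n = n.next.next;
            mid = mid.next;
        }
        return mid.data;               // mid itself, not mid.next
    }
}
```

Returning the value instead of printing makes the methods easy to test: for the list 1 -> 2 -> 3 -> 4 -> 5, both return 3.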









