How long does a simple operation in Java take in nanoseconds?

I am taking a course on analysis of algorithms, and I want to know how long the simple operations +, -, / and * take on my computer. So I wrote a simple stopwatch like this:

import java.io.PrintStream;

public class NanosecondsStopWatch implements StopWatch {

    private final PrintStream stream;

    public NanosecondsStopWatch(PrintStream stream) {
        this.stream = stream;
    }

    @Override
    public void timeAndPrint(Action action) {
        // Record the elapsed wall-clock time around a single run of the action
        long start = System.nanoTime();
        action.doAction();
        long end = System.nanoTime();
        stream.println(end - start);
    }
}


public class TestingOperationsTime {

    public static void main(String[] strings) {

        StopWatch watch = new NanosecondsStopWatch(System.out);

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2*2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2/2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2-2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2+2;
            }
        });

    }
}


The results are as follows:

2529
454
355
335


However, if I change the order of the operations, say to this:

public class TestingOperationsTime {

    public static void main(String[] strings) {

        StopWatch watch = new NanosecondsStopWatch(System.out);

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2-2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2*2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2/2;
            }
        });

        watch.timeAndPrint(new Action() {
            @Override
            public void doAction() {
                int i= 2+2;
            }
        });

    }
}


The results are still pretty much the same:

2494
332
309
326 


How can you explain this behavior?

  • OS: Ubuntu 14.04
  • Java: 1.7.0_65
  • OpenJDK Runtime (IcedTea 2.5.1) (7u65-2.5.1-4ubuntu1 ~ 0.14.04.2)
  • 64-bit OpenJDK server (build 24.65-b04, mixed mode)
  • javac 1.7.0_67
+3




5 answers


Many factors affect how much time your code appears to use. For example, if the computer performs a context switch while your code is running, the time you measure includes the time spent running another program.

To mitigate this, you can run the timed operation many times, say thousands or millions of times, and take the average.



Also, as @rgettman points out, the compiler will most likely optimize these calculations away because they operate on constant values. This means you are measuring only the time of the method call and the print output, not the time of the calculation.
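As a sketch of that averaging idea (the class and method names here are mine, not from the question; the accumulator trick is one common way to keep the work from being optimized away):

```java
public class AverageTimer {

    // Hypothetical helper: times `reps` runs of an action and returns the
    // average elapsed nanoseconds per run.
    static long averageNanos(Runnable action, int reps) {
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            action.run();
        }
        long end = System.nanoTime();
        return (end - start) / reps;
    }

    public static void main(String[] args) {
        // Accumulate into an array cell so the JIT cannot prove the work is dead.
        final int[] sink = new int[1];
        long avg = averageNanos(new Runnable() {
            @Override
            public void run() {
                sink[0] += 2 * 2;
            }
        }, 1_000_000);
        System.out.println("average ns per run: " + avg + " (sink=" + sink[0] + ")");
    }
}
```

Averaged over a million runs, the per-run cost drops to a few nanoseconds or less, far below the cost of a single System.nanoTime() call pair.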

+5




There will always be differences because other processes are running on your computer, and depending on the OS, some processes take priority over others. You cannot predict exactly how long a single operation will take. It also depends on how fast the processor in your computer is.



+2




The compiler evaluates constant expressions at compile time; to avoid this, you must do the calculation in a method that takes parameters.

Secondly, the system call to read the clock itself takes more than a few nanoseconds, so at this resolution the check tells you nothing about the operation; what you actually measure is how long it takes Java to get the time.
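A minimal sketch of that first suggestion (the class and method names are mine): when the operands arrive as parameters, javac cannot fold the expression into a constant.

```java
public class ParamTest {

    // Because a and b are parameters, javac cannot fold a * b into a
    // constant at compile time (the JIT may still optimize it at run time).
    static int multiply(int a, int b) {
        return a * b;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int result = multiply(2, 2);
        long end = System.nanoTime();
        System.out.println(result + " computed in " + (end - start) + "ns");
    }
}
```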

+2




It is not that simple. In short, Java is not the right language for measurements like this.

Java is a just-in-time compiled language. The code you write runs in a virtual machine and can be fully interpreted, fully compiled, or partially compiled. This is why, in general, the first run is always slower: it is interpreted. Only later may the VM decide to compile a method and replace the interpreted version with compiled code.

Also, there is significant overhead when calling system routines from the JVM, which skews your measurements. So yes, you can take measurements by first doing a warm-up loop, so the VM decides the given method should be compiled, and then discarding the first results. But the results still do not accurately measure the performance of your processor. To do that, you would have to use C or assembler, and even then you would have to deal with context switches and OS scheduling that change your results.
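A rough sketch of that warm-up idea (the names and the 100,000-iteration threshold are my own guesses, not documented JVM constants; for serious work a harness such as JMH handles all of this for you):

```java
public class WarmupTimer {

    // Hypothetical workload; parameters keep javac from constant-folding it.
    static int work(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int sink = 0;

        // Warm-up: run the method enough times that the JIT is likely to
        // have compiled it before measurement begins.
        for (int i = 0; i < 100_000; i++) {
            sink += work(i, i);
        }

        // Measured phase: average over many iterations.
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            sink += work(i, i);
        }
        long end = System.nanoTime();
        System.out.println("avg ns per call: " + (end - start) / 1_000_000.0
                + " (sink=" + sink + ")");
    }
}
```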

PS: And yes, as the other four answers already mention, the Java compiler is not that stupid and will evaluate constant expressions at compile time: i=2*2 is compiled to i=4, so you are not measuring the multiplication time, just the assignment time.

+1




There are two main problems: (1) you are calling a function that consumes a lot of resources, and (2) you only run it once. If you run the statement directly, or run it MANY times, you will see that the execution time is very short. Below, the result is time=0ns.

public class PerfTest {
    public static void main(String[] args) {
        long t1 = System.nanoTime();
        int i = 2 * 2;
        long t2 = System.nanoTime();
        System.out.printf("time=%dns", t2 - t1);
    }
}

0








