Inconsistent results when timing a process

I am trying to measure the performance of my program using System.currentTimeMillis() (or alternatively System.nanoTime()), and I noticed that it gives a different result for how long the task took every time I run it.

Even a simple test:

long totalTime;
long startTime;
long endTime;
startTime = System.currentTimeMillis();
// Nested loops with empty bodies: no actual work is performed here
for (int i = 0; i < 1000000000; i++)
{
    for (int j = 0; j < 1000000000; j++)
    {
    }
}
endTime = System.currentTimeMillis();
totalTime = endTime - startTime;
System.out.println("Time: " + totalTime);

produces all sorts of different outputs, anywhere from 0 to 200 ms. Can anyone tell me what I am doing wrong, or suggest an alternative solution?



2 answers


The loop does nothing, so what you are really timing is how long it takes the JVM to detect that the loop is pointless.

Timing the loops more precisely won't help; you need to do something marginally useful inside them to get repeatable results (see the sketch at the end of this answer).



I suggest you try the -server flag if you are running on 32-bit Windows.
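For example (with a hypothetical main class called Benchmark; substitute your own), the flag is passed on the command line when launching the JVM:

java -server Benchmark

On 32-bit Windows the client VM is the default, so this switches to the server VM's more aggressive optimizing compiler.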

A billion billion (10^18) clock cycles would take about 10 years, so the loop clearly isn't actually iterating that many times.
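As a sketch of the "do something marginally useful" idea (the loop bound and workload are illustrative assumptions, not from the original post), accumulate a value and print it so the JIT cannot discard the loop as dead code:

long startTime = System.nanoTime();

// The loop now produces a value that is actually used afterwards,
// so the JIT cannot eliminate it as dead code.
long sum = 0;
for (int i = 0; i < 100000000; i++)
{
    sum += i;
}

long endTime = System.nanoTime();
System.out.println("sum = " + sum); // using the result keeps the work alive
System.out.println("Time: " + (endTime - startTime) / 1000000 + " ms");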



This is exactly the expected behavior: the code should get faster as the timing is repeated. When you rerun a method many times, the JIT puts more effort into compiling it to native code and optimizing it; I would expect that after running this code long enough, the JIT would eliminate the loop entirely, since it actually does nothing.



The best and easiest way to get accurate benchmarks of Java code is to use a tool like Caliper, which "warms up" the JIT so that it fully optimizes your code.
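To approximate by hand what such a tool does (the class name, repetition counts, and workload below are arbitrary assumptions for illustration, not Caliper's actual API or defaults), run the code many times before taking the measurement:

public class WarmupDemo
{
    // The workload under test; the result is returned so the JIT keeps the work.
    static long work()
    {
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
        {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args)
    {
        // Warm-up phase: give the JIT a chance to compile and optimize work()
        long blackhole = 0;
        for (int r = 0; r < 20; r++)
        {
            blackhole += work();
        }

        // Measurement phase: time the now-optimized code
        long start = System.nanoTime();
        blackhole += work();
        long elapsed = System.nanoTime() - start;

        System.out.println("blackhole = " + blackhole); // keep the result observable
        System.out.println("Time: " + elapsed / 1000000 + " ms");
    }
}

The measured time should stabilize across runs once the warm-up iterations are long enough for compilation to finish.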







