C: Different implementation of clock() in Windows and other OSes?

I had to write a very simple console program for a university assignment that was supposed to measure how long it took the user to type some input.

For this I called clock() before and after the call to fgets(). When run on my Windows computer it worked fine, but when run on my friend's MacBook and on a Linux PC it gave very small results (only a few microseconds of time).

I tried the following code on all three OSes:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t t;

    printf("Sleeping for a bit\n");

    t = clock();

    // Alternatively some fgets(...)
    usleep(999999);

    t = clock() - t;

    printf("Processor time spent: %lf\n", ((double)t) / CLOCKS_PER_SEC);

    return 0;
}

      

On Windows, the output shows about 1 second (or however long you took to type your input when using fgets), while on the other two OSes it shows practically 0 seconds.

Now my question is why the implementation of clock() differs so much between these OSes. On Windows it seems like the clock keeps ticking while the thread is sleeping/waiting, but on Linux and Mac it doesn't?

Edit: Thanks for the answers so far, guys. So this is just Microsoft's flawed implementation.

Can anyone answer my last question:

Is there a way to measure what I wanted to measure on all three systems using only the C standard library, since clock() only works the way I expect on Windows?

+5




3 answers


If we look at the source code of clock() on Mac OS X, we can see that it is implemented using getrusage and reads ru_utime + ru_stime. Those two fields measure the processor time used by the process (and by the system, on behalf of the process). This means that if usleep (or fgets) causes the OS to swap your program out and run something else until an event occurs, any amount of real time (also called "wall time", as in "wall clock") that passes is not counted in the value returned by clock() on Mac OS X. You could probably dig around and find something similar in the Linux sources.
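To make the distinction concrete, here is a minimal sketch (my own illustration, not Apple's actual source) of what reading ru_utime + ru_stime via getrusage looks like; sleeping consumes almost no CPU time, so the reported value stays near zero:

#include <stdio.h>
#include <sys/resource.h>   /* getrusage, struct rusage */
#include <unistd.h>         /* usleep */

int main(void)
{
    struct rusage ru;

    usleep(999999);  /* ~1 second of wall time, almost no CPU time */

    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_utime = user CPU time, ru_stime = system CPU time */
        double cpu_seconds =
            (double)(ru.ru_utime.tv_sec + ru.ru_stime.tv_sec) +
            (double)(ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
        printf("CPU time used: %f s\n", cpu_seconds);  /* prints roughly 0 */
    }
    return 0;
}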

On Windows, however, clock() returns the amount of wall time elapsed since the process started.

In plain C, I am not aware of a function available on OS X, Linux, and Windows that returns wall time with better than one-second precision (time.h is pretty limited). You have GetSystemTimeAsFileTime on Windows, which returns the time in 100 ns slices, and gettimeofday from BSD, which returns the time with microsecond precision.
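For example, a minimal sketch of timing fgets with gettimeofday on Linux or OS X (not portable to Windows; just an illustration of the idea):

#include <stdio.h>
#include <sys/time.h>   /* gettimeofday, struct timeval */

int main(void)
{
    struct timeval start, end;
    char buf[256];

    gettimeofday(&start, NULL);

    printf("Type something and press Enter: ");
    fgets(buf, sizeof buf, stdin);

    gettimeofday(&end, NULL);

    /* Wall-clock time elapsed, including time spent blocked in fgets */
    double elapsed = (double)(end.tv_sec - start.tv_sec) +
                     (double)(end.tv_usec - start.tv_usec) / 1e6;
    printf("Wall time spent: %f s\n", elapsed);
    return 0;
}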



If one-second precision is acceptable to you, you can use time(NULL).
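A sketch of that approach, which is portable standard C (whole seconds only):

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[256];
    time_t start = time(NULL);

    printf("Type something and press Enter: ");
    fgets(buf, sizeof buf, stdin);

    time_t end = time(NULL);

    /* difftime returns the difference in seconds as a double */
    printf("Wall time spent: about %.0f s\n", difftime(end, start));
    return 0;
}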

If C++ is an option, you can use one of the std::chrono clocks to get the precision you want.

+1




You are running into a known bug in the Microsoft C Runtime. Although the behavior does not conform to the ISO C standard, it will not be fixed. From the bug report:



However, we have opted to avoid reimplementing clock() in such a way that it could return time values advancing faster than one second per physical second, as this change would silently break programs depending on the previous behavior (and we expect there are many such programs)...

+2




On Linux, you should read time(7). It suggests using the POSIX 2001 function clock_gettime, which should also exist on recent Mac OS X (and on Linux). On Linux running on not-too-old hardware (such as a laptop or desktop less than six years old), clock_gettime gives good accuracy, typically tens of microseconds or better. It gives measurements in seconds and nanoseconds (in a struct timespec), but I don't expect the nanosecond figure to be very accurate.
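Here is a minimal sketch using clock_gettime with CLOCK_MONOTONIC to time a blocking call (on Linux; older glibc versions may need -lrt when linking):

#define _POSIX_C_SOURCE 199309L  /* needed for clock_gettime with -std=c99 */

#include <stdio.h>
#include <time.h>   /* clock_gettime, struct timespec, CLOCK_MONOTONIC */

int main(void)
{
    struct timespec start, end;
    char buf[256];

    clock_gettime(CLOCK_MONOTONIC, &start);

    printf("Type something and press Enter: ");
    fgets(buf, sizeof buf, stdin);

    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Wall-clock time elapsed, using the nanosecond fields */
    double elapsed = (double)(end.tv_sec - start.tv_sec) +
                     (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Wall time spent: %f s\n", elapsed);
    return 0;
}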

Indeed, clock(3) is documented as conforming to:

C89, C99, POSIX.1-2001. POSIX requires CLOCKS_PER_SEC to be 1,000,000 regardless of the actual resolution.

Finally, several framework libraries provide functions (wrapping the underlying system functions) for measuring time. Have a look at POCO (in C++) or GLib (from GTK and GNOME, in C).

0








