Getting unexpected result from the clock() function in C

#include <stdio.h>
#include <time.h>

int main (int argc, const char * argv[]) {
    clock_t time1, time2;
    time1 = clock();
    //time1 = gettimeofday();

    time2 = clock();
    //time2 = gettimeofday();
    float time = ((float)time2 - (float)time1) / 1000000.0F;
    printf("%fs\n", time);
    return 0;
}


I end up with something like 0.288640991s (not sure how many of the digits to the right of the decimal point are significant). But I'm expecting seconds from the line:

float time = ((float)time2 - (float)time1) / 1000000.0F;


When I divide by 10000.0F instead, I get about 28 seconds, which matches the real elapsed time. So why doesn't the line above give me seconds?

EDIT: I am using a Mac, which I hear is problematic.



2 answers

First of all, you should use CLOCKS_PER_SEC, not some magic number.

Anyway, if you are on a POSIX system, CLOCKS_PER_SEC is defined by POSIX to be one million (the same value you use), so this is probably not your problem.

You should be aware that clock only measures the time the processor spends running your user-space program, not the time spent in the kernel or in another process. This matters because if you make a system call, or call a function that makes a system call, or if your process is scheduled out by the kernel, the value returned by clock may not be what you expect.

See for example:

 clock_t time1, time2;

 time1 = clock();
 sleep(5);    /* the process is asleep, consuming almost no CPU time */
 time2 = clock();


The difference between the values clock returns before and after the call to sleep (properly normalized with CLOCKS_PER_SEC) will probably be 0 seconds.

If you want to measure the wall-clock time elapsed during the execution of a function or block of code, you are better off using gettimeofday.





clock returns the number of clock ticks that have elapsed since the start of the program.

To turn this into seconds, you need to divide by CLOCKS_PER_SEC, not 1000000.0F.


Shameless self-promotion:

Alternatively, you can use stopwatch.h and stopwatch.c to measure elapsed time.


