Calculating clocks per second

Am I doing this right? From time to time my program prints 2000+ for the chrono solution, and it always prints 1000 for CLOCKS_PER_SEC.

What is the value that I am actually calculating? Is it clock ticks per second?

#include <iostream>
#include <chrono>
#include <thread>
#include <ctime>
#include <cstdint>

std::chrono::time_point<std::chrono::high_resolution_clock> SystemTime()
{
    return std::chrono::high_resolution_clock::now();
}

// Elapsed time since Time, in nanoseconds.
std::uint32_t TimeDuration(std::chrono::time_point<std::chrono::high_resolution_clock> Time)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(SystemTime() - Time).count();
}

int main()
{
    auto Begin = std::chrono::high_resolution_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(1));

    // Nanoseconds divided by 1000.0, i.e. elapsed microseconds.
    std::cout << (TimeDuration(Begin) / 1000.0) << std::endl;

    std::cout << CLOCKS_PER_SEC;
    return 0;
}

      

2 answers


To get the correct ticks per second on Linux, you need to use the return value of ::sysconf(_SC_CLK_TCK) (declared in the header unistd.h), not the macro CLOCKS_PER_SEC.

The latter is a constant defined by the POSIX standard; it is not related to the actual ticks per second of your CPU clock. See, for example, the man page for clock():



C89, C99, POSIX.1-2001. POSIX requires CLOCKS_PER_SEC to be 1,000,000 regardless of the actual resolution.

Note, however, that even with the correct ticks-per-second value you still won't get the actual CPU cycles per second. A "clock tick" is a unit of time used by clock(); there is no standard definition of how it relates to actual CPU cycles.
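
For illustration, here is a minimal sketch (not part of the original answer, POSIX-only) that prints both values, so you can see that CLOCKS_PER_SEC is the fixed unit used by clock() while _SC_CLK_TCK is the kernel's actual ticks per second:

#include <cstdio>
#include <ctime>      // CLOCKS_PER_SEC
#include <unistd.h>   // ::sysconf, _SC_CLK_TCK (POSIX only)

int main()
{
    // Fixed at 1,000,000 by POSIX; the unit in which clock() reports CPU time.
    std::printf("CLOCKS_PER_SEC       = %ld\n", static_cast<long>(CLOCKS_PER_SEC));

    // The kernel's scheduling ticks per second (commonly 100 on Linux);
    // this is the unit used by times() and by /proc accounting.
    std::printf("sysconf(_SC_CLK_TCK) = %ld\n", ::sysconf(_SC_CLK_TCK));
    return 0;
}

On many Linux systems this prints 1000000 and 100; the second value can differ on other platforms.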



There is a timer class in the Boost library that uses CLOCKS_PER_SEC to calculate the maximum time span the timer can measure. Its documentation notes that CLOCKS_PER_SEC is 1000 on Windows and 1,000,000 on Mac OS X and Linux, so on the latter operating systems the precision is higher.
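
As a minimal sketch of that usage (not taken from the Boost source), the typical pattern is to divide the tick count returned by std::clock() by CLOCKS_PER_SEC to convert CPU time to seconds, whatever value the platform defines the constant to be:

#include <cstdio>
#include <ctime>

int main()
{
    std::clock_t start = std::clock();

    // Some CPU-bound busy work to measure.
    volatile double sink = 0.0;
    for (long i = 0; i < 10000000; ++i)
        sink = sink + static_cast<double>(i) * 0.5;

    std::clock_t end = std::clock();

    // clock() reports CPU time in implementation-defined "clock ticks";
    // dividing by CLOCKS_PER_SEC yields seconds on every platform.
    std::printf("CPU time: %.6f s\n",
                static_cast<double>(end - start) / CLOCKS_PER_SEC);
    return 0;
}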


