Are there real systems where difftime accounts for leap seconds?

Standard C (ISO/IEC 9899) states:

7.2x.2.2 The difftime function

Synopsis

   #include <time.h>
   double difftime(time_t time1, time_t time0);

Description

The difftime function computes the difference between two calendar times: time1 - time0.

Returns

The difftime function returns the difference expressed in seconds as a double.

This leaves it ambiguous (intentionally, I assume) whether the result accounts for leap seconds. The difference (26 seconds when comparing 1970 to July 2015) matters in some applications.

Most implementations of the C standard library do not account for leap seconds, and this can be checked: the following (deliberately short) code tends to output "leap seconds accounted for from 2000/01/01 to 2015/01/01: 0" (or -473385600 if mktime fails), when there actually were 3 leap seconds in that period.

#include <time.h>
#include <stdio.h>
struct tm t0,t1; // globals, thus initialized to zero
int main(void) {
  t0.tm_year  = 2000-1900;         // from 2000
  t1.tm_year  = 2015-1900;         // to   2015
  t0.tm_mday  = t1.tm_mday  = 1;   // first day of January
  printf("leap seconds accounted for from 2000/01/01 to 2015/01/01: %.0f\n",
    difftime( mktime(&t1), mktime(&t0) ) - 86400.*(365.*15+4) );
  return 0;
}

Are there real systems whose C/C++ standard library takes leap seconds into account? And how can one check this, using such a combination of mktime and difftime?

Put differently: many modern operating systems are made aware of legislative changes to legal time through an update mechanism, and standard library functions such as localtime use this information and compute their results accordingly. It would be quite possible, and as far as I understand C-standard-compliant, for a similar update mechanism to inform the operating system about past and upcoming leap seconds, and for either difftime or mktime to use this information. My question asks whether such systems and standard libraries exist, because this would affect some code.
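
For illustration, here is what such a check could look like on a system where the leap-second-aware "right/" tzdata zone files are installed. This is an assumption, not a claim about any particular system: not all installations ship those files, and setenv/tzset are POSIX rather than standard C. With TZ set to right/UTC, mktime counts leap seconds, so the two-day span around the 2012-06-30 leap second comes out as 172801 rather than 172800:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
  struct tm a = {0}, b = {0};                            // zeroed, tm_isdst = 0
  a.tm_year = 2012-1900; a.tm_mon = 5; a.tm_mday = 30;   // 2012/06/30 00:00
  b.tm_year = 2012-1900; b.tm_mon = 6; b.tm_mday = 2;    // 2012/07/02 00:00
  setenv("TZ", "right/UTC", 1);   // assumption: leap-second-aware "right" zone files present
  tzset();
  // 172801 if the 2012-06-30 leap second is counted, 172800 otherwise
  printf("%.0f\n", difftime(mktime(&b), mktime(&a)));
  return 0;
}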


Following a comment: the context is code that needs to be portable across systems (from embedded to mainframes, some of them pretty old) and that decides when (in seconds from the moment of the call, as an integer of at most 99999) some action should be triggered, based on the system time and on a given leap-free count of "seconds elapsed since 2000/01/01 at midnight UTC" for the desired action time. An error of ±2 seconds (in addition to UTC reference drift) is acceptable.

The existing code uses a trivial combination (sketched below) of time, mktime for 2000/01/01, and difftime for the difference between the two, followed by subtraction of the given value. I am wondering if there is a serious concern that it might fail (and return something slightly outside the stated tolerance, as it is 4 seconds too low at the time of writing, and increasing). I am not asking how to make the code portable (one option would use gmtime(time(NULL)) and compute the rest with explicit code).
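
For reference, a minimal sketch of that trivial combination; the function name and the parameter target2000 are purely illustrative, and note that mktime interprets the reference date as local time, which is the timezone caveat mentioned next:

#include <time.h>
// target2000: leap-free "seconds elapsed since 2000/01/01 at midnight UTC" of the action
long seconds_until_action(double target2000) {
  struct tm ref = {0};                 // remaining fields zero
  ref.tm_year = 2000-1900;             // year 2000
  ref.tm_mday = 1;                     // January 1st
  double elapsed = difftime(time(NULL), mktime(&ref));  // seconds since 2000/01/01
  return (long)(target2000 - elapsed);                  // seconds from now until the action
}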

The main question is phrased without time, so as not to drag in the separate portability issue of whether time takes the timezone into account.


1 answer


This is asked as a computing question, but it is really a physical one.

First, the computing view:

Common operating systems are aware of UTC time and, eventually, of local time. They assume that their reference is UTC time and that all minutes last exactly 60 seconds. They compensate for errors between their local time source (a quartz oscillator) and an external reference by sliding clock corrections, and from their point of view there is no difference between such a sliding correction and a true (physical) leap second. For that reason, they do not know about true leap seconds and currently ignore them.

Now the physical view (see the Wikipedia articles on UTC and TAI):

In 1955, the caesium atomic clock was invented. This provided a form of timekeeping that was both more stable and more convenient than astronomical observations.



[In 1972, TAI (Temps Atomique International, its French name) was defined, based only on caesium atomic clocks.] In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates owing to gravitational time dilation, and the combined TAI scale therefore corresponded to an average of the altitudes of the various clocks. Starting from Julian Date 2443144.5 (1 January 1977 00:00:00), corrections were applied to the output of all participating clocks so that TAI would correspond to proper time at mean sea level (the geoid). Because the clocks were, on average, well above sea level, this meant that TAI slowed down by about one part in a trillion. The Earth's rotational speed decreases very slowly because of tidal deceleration; this increases the length of the mean solar day. The length of the SI second was calibrated on the basis of the second of ephemeris time and can now be seen to have a relationship with the mean solar day observed between 1750 and 1892, as analysed by Simon Newcomb. As a result, the SI second is close to 1/86400 of a mean solar day of the mid-19th century. In earlier centuries the mean solar day was shorter than 86,400 SI seconds, and in more recent centuries it is longer than 86,400 seconds. Near the end of the 20th century, the length of the mean solar day (also known simply as "length of day" or "LOD") was approximately 86,400.0013 s. For this reason, UT is now "slower" than TAI by the difference (or "excess" LOD) of 1.3 ms/day.

The first leap second occurred on June 30, 1972. Since then, leap seconds have occurred on average about once every 19 months, always on June 30 or December 31. As of July 2015, there have been 26 leap seconds in total, all positive, putting UTC 36 seconds behind TAI.

TL/DR: So, if you really need this, you will have to get the dates at which the 26 leap seconds were introduced into (physical) UTC and handle them manually where necessary. AFAIK, no current operating system or standard library takes care of them.

The table of leap second introduction dates is maintained as plain text at http://www.ietf.org/timezones/data/leap-seconds.list
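
For the "handle them manually" part, here is a minimal sketch of reading that file. It assumes a local copy named leap-seconds.list (the file name and the helper are illustrative only) and relies only on the published format: comment lines start with '#', and data lines are "<seconds since 1900-01-01> <TAI-UTC offset>":

#include <stdio.h>

// Count leap seconds introduced in (t_from, t_to], both given as seconds
// since 1900-01-01 (the NTP-era scale used by leap-seconds.list).
int leap_seconds_between(const char *path, long long t_from, long long t_to) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;

    char line[256];
    int count = 0;
    long long prev_offset = -1;
    while (fgets(line, sizeof line, f)) {
        long long when, offset;
        if (line[0] == '#') continue;                       // skip comments
        if (sscanf(line, "%lld %lld", &when, &offset) != 2) continue;
        if (prev_offset >= 0 && when > t_from && when <= t_to)
            count += (int)(offset - prev_offset);           // usually +1 per entry
        prev_offset = offset;
    }
    fclose(f);
    return count;
}

int main(void) {
    // Example: leap seconds introduced between 2000-01-01 and 2015-07-01,
    // expressed as seconds since 1900-01-01; should print 4 with data as of July 2015.
    int n = leap_seconds_between("leap-seconds.list", 3155673600LL, 3644697600LL);
    printf("%d\n", n);
    return 0;
}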
