Set the mtime of a file with full microsecond precision in Python
Let's say I create a test file and check its mtime:

```
$ touch testfile.txt
$ stat testfile.txt
  File: `testfile.txt'
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d  Inode: 3413533     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ me)   Gid: ( 1000/ me)
Access: 2014-09-17 18:38:34.248965866 -0400
Modify: 2014-09-17 18:38:34.248965866 -0400
Change: 2014-09-17 18:38:34.248965866 -0400
 Birth: -
$ date -d '2014-09-17 18:38:34.248965866 -0400' +%s
1410993514
```
The mtime is accurate to well under a microsecond (my understanding is that the system clock resolution makes the finest part of this precision useless anyway). The `utimes` system call lets me go down to the microsecond. However, the `os.utime` function seems to take each timestamp concatenated into one number.
I can pass the full-precision value as a float:

```
>>> os.utime('testfile.txt', (1410993514.248965866, 1410993514.248965866))
```
```
$ stat testfile.txt
  File: `testfile.txt'
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d  Inode: 3413533     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ me)   Gid: ( 1000/ me)
Access: 2014-09-17 18:38:34.248965000 -0400
Modify: 2014-09-17 18:38:34.248965000 -0400
Change: 2014-09-17 18:46:07.544974140 -0400
 Birth: -
```
Presumably, precision is lost because the value was converted to a float, and Python knew better than to trust that many decimal places.
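For illustration, the rounding can be made visible by printing the exact binary value that the float literal actually stores (a quick sketch; `decimal.Decimal` of a float shows the float's exact value):

```python
from decimal import Decimal

# A double has ~52 bits of mantissa, which near 1.4e9 seconds leaves
# roughly a quarter-microsecond of resolution, so the literal below is
# silently rounded to the nearest representable double.
t = 1410993514.248965866
print(Decimal(t))  # the exact value the float actually holds
```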
Is there a way to set the full microseconds field via Python?
You have already set the full microseconds. Micro means millionth; .248965866 seconds is 248965 microseconds. 248965866 is the count in nanoseconds.
Of course, it's also 248965.866 microseconds, but the portable APIs that Python uses to set times on every platform except Windows only accept integral microseconds, not fractional ones. (And, in fact, POSIX does not require the system to remember anything smaller than microseconds.)
Starting with Python 3.3, os.utime adds a keyword argument ns, on systems that provide a way to set nanoseconds.[1][2] So you can skip the floats entirely and pass the full timestamps as integer nanoseconds in the separate ns argument. Like this:
```
>>> os.utime('testfile.txt', ns=(1410993514248965866, 1410993514248965866))
```
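Putting it together, a minimal sketch (assuming Python 3.3+ and a filesystem that stores nanosecond timestamps; the filename is just an example):

```python
import os

# Full timestamp in integer nanoseconds: seconds * 10**9 + nanoseconds.
ts_ns = 1410993514 * 10**9 + 248965866

with open('testfile.txt', 'w'):
    pass  # create an empty file to stamp

# ns= takes absolute (atime_ns, mtime_ns) integers, bypassing floats entirely.
os.utime('testfile.txt', ns=(ts_ns, ts_ns))

# Read it back without float rounding, on filesystems that store nanoseconds.
print(os.stat('testfile.txt').st_mtime_ns)
```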
> Presumably, precision is lost because the value was converted to a float, and Python knew better than to trust that many decimal places.
It might actually make sense to do that... but Python doesn't. You can see the exact code it uses here, but basically the only compensation it makes for rounding is that negative microseconds become 0.[3]
But you're right that rounding errors are a potential problem here... which is why both *nix and Python avoid the problem by using separate integers for seconds and nanoseconds (and Windows avoids it by using a 64-bit integer instead of a double).
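A quick way to quantify the problem (a sketch; `math.ulp` needs Python 3.9+): near this timestamp, adjacent representable doubles are about 238 ns apart, so a single float simply cannot hold the full nanosecond field.

```python
import math

t = 1410993514.248965866  # the timestamp from the question, as a float
# Spacing between adjacent representable doubles at this magnitude:
ulp_ns = math.ulp(t) * 1e9
print(f"~{ulp_ns:.0f} ns between adjacent float timestamps")  # ~238 ns
```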
[1] If you are on Unix, this means that you have the utimensat function, the nanosecond analogue of utimes. You should have it on any non-ancient linux/glibc system; on *BSD it is kernel-dependent, but I think everyone except OS X has it these days; otherwise you probably don't have it. But the easiest way to check is simply to try it.
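That check can be scripted: write a timestamp with a nonzero nanosecond remainder and see whether it reads back intact (a sketch; whether the nanoseconds survive also depends on the filesystem, not just the C library):

```python
import os
import tempfile

def preserves_ns() -> bool:
    """Return True if os.utime's ns argument round-trips on this system."""
    ts_ns = 1410993514 * 10**9 + 248965866  # has a nonzero ns remainder
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        os.utime(path, ns=(ts_ns, ts_ns))
        return os.stat(path).st_mtime_ns == ts_ns
    finally:
        os.unlink(path)

print(preserves_ns())
```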
[2] On Windows, Python uses native Win32 APIs that work in units of 100 ns, so you only get one extra digit, not three.
[3] I linked to the 3.2 source because the 3.3 source is a bit harder to follow, partly because of the ns support you care about, but mostly because of the other support added at the same time.