Representing numbers in bash and printing hexadecimal numbers with printf in bash
I want to understand how numbers (doubles) are represented in bash and what happens when I print numbers in hexadecimal format in bash.
According to the IEEE 754 standard, a double is represented by 64 bits: 52 bits (13 hexadecimal digits) for the significand, 11 bits for the exponent, and 1 bit for the sign.
To test this, I wrote a simple C program that prints a hexadecimal floating-point constant (using printf).
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("hex read = %40.24a\n", 0x1.000010C6F7A0B5E1Fp+0);
    return 0;
}
Compiling with gcc 4.2.1, I get
hex read = 0x1.000010c6f7a0b00000000000p+0
From this result, I conclude that, as I expected, the value is specified by 13 hexadecimal digits: 000010c6f7a0b.
Now I go to bash and use the following script:
#!/bin/bash
echo "hex read = 0x"$1
printf "hex =%80.70a\n" "0x"$1
printf "hex -> dec=%80.70f\n" `echo "0x"$1`
GNU bash 3.2.48
$ bash hex2dec 1.000010C6F7A0B5E1F
hex read = 0x1.000010C6F7A0B5E1F
hex = 0x1.000010c6f7a0b000000000000000000000000000000000000000000000000000000000p+0
hex -> dec= 1.0000009999999999177333620536956004798412322998046875000000000000000000
So everything worked as I expected: 13 hexadecimal digits define the value of the number.
GNU bash 4.1.5
$ bash hex2dec 1.000010C6F7A0B5E1F
hex read = 0x1.000010C6F7A0B5E1F
hex = 0x8.00008637bd05af10000000000000000000000000000000000000000000000000000000p-3
hex -> dec= 1.0000009999999999993737856418540843606024282053112983703613281250000000
This is not what I expected!
Question 1: Why is a double value represented by 16 hexadecimal digits in GNU bash 4.1.5 (instead of the 13 that IEEE 754 calls for)?
Question 2: Why does printf "%a" represent a hexadecimal number in a different format in different versions of bash (bash 3.2.48: 0x1.hh...hp+d, bash 4.1.5: 0xh.hh...hp+d)? Shouldn't printf conform to the same standard in both bash versions, namely http://pubs.opengroup.org/onlinepubs/009695399/functions/fprintf.html ?
Answer 1: The current bash printf on x86 uses long double for its I/O conversions according to IEEE 754 (see "Extended and Extensible Precision Formats", the x86 Extended Precision Format, and bash printf's definition of floatmax_t), similar to this program:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    printf("%La\n", strtold("0x1.000010C6F7A0B5E1F", NULL));
    return 0;
}
Its output is

0x8.00008637bd05af1p-3
Answer 2: bash ends up using the C library's printf; the output of the C program above follows the standard you referenced:
there is one hexadecimal digit (which shall be nonzero if the argument is a normalized floating-point number and is otherwise unspecified) before the decimal-point character