15-17 significant decimal digits for binary64?

I learned from Wikipedia that a double has at most 15-17 significant decimal digits.

However, for the simple C++ program below

// requires <cmath>, <iomanip>, <iostream>
double x = std::pow(10, -16);
std::cout << "x=" << std::setprecision(100000) << x << std::endl;

(To test it, use this online wrapper.) I get

x=9.999999999999999790977867240346035618411149408467364363417573258630000054836273193359375e-17

which has 88 significant decimal digits, which seems to contradict the Wikipedia claim above. Can anyone clarify what I am misunderstanding? Thank you.
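For reference, the long tail is just the exact decimal expansion of the binary fraction that is actually stored. A minimal sketch that makes this visible (it assumes a C++11 compiler, since std::hexfloat is C++11):

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double x = std::pow(10, -16);
    // A double stores an exact binary fraction m * 2^e, and every such
    // fraction has a finite (often very long) exact decimal expansion --
    // that is where the 88 digits come from.
    std::cout << std::hexfloat << x << "\n";  // prints the significand bits and binary exponent
}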



1 answer


There is no contradiction. As you can see, the value of x is incorrect from the first 7 in its decimal expansion; I count 16 correct digits before it. std::setprecision doesn't control the precision of the arguments passed to std::cout; it just displays as many digits as you ask for. Arguably std::setprecision is badly named and should have been called std::displayprecision, but std::setprecision is what we have, and it gets the job done. From a linguistic point of view, think of std::setprecision as setting the precision of std::cout, and don't expect it to control the precision of the arguments passed to std::cout.
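To connect this back to the 15-17 digit figure: what it guarantees is that std::numeric_limits<double>::max_digits10 (17 for double) decimal digits are always enough to uniquely identify a double, i.e. to round-trip it through text. A minimal sketch of that guarantee (it uses std::pow(10, -16) to match the question; note that std::pow may introduce its own rounding):

#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

int main() {
    double x = std::pow(10, -16);

    // Print with max_digits10 (17) significant digits -- the number of
    // decimal digits that always suffices to identify a double uniquely.
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::max_digits10) << x;
    std::cout << "17-digit form: " << out.str() << "\n";

    // Parsing those 17 digits back yields bit-for-bit the same double;
    // this round-trip property is what "15-17 significant digits" means.
    double y;
    std::istringstream in(out.str());
    in >> y;
    std::cout << "round-trips exactly: " << std::boolalpha << (x == y) << "\n";
}

Printing more than 17 digits never adds information about the stored value; it only spells out more of its exact decimal expansion.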


