Print largest single-precision floating point number

I am trying to understand floating point numbers. In particular, I am trying to figure out why the largest single-precision floating point number does not print correctly when I print it.

(2 - 2^-23) * 2^127
= 1.99999988079071044921875 * 1.7014118346046923173168730371588e+38
= 3.4028234663852885981170418348451e+38
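
For a quick sanity check, the same arithmetic can be reproduced in C with ldexpf from <math.h>; a minimal sketch that compares the result against FLT_MAX:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Largest significand (2 - 2^-23) scaled by the largest exponent factor, 2^127. */
    float largest = ldexpf(2.0f - ldexpf(1.0f, -23), 127);

    printf("equals FLT_MAX: %s\n", largest == FLT_MAX ? "yes" : "no");
    return 0;
}

On an IEEE-754 implementation this should print "equals FLT_MAX: yes".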

      

This should be the largest single-precision floating point number:

340282346638528859811704183484510000000.0

      

So,

float i = 340282346638528859811704183484510000000.0;
printf("Float %.38f\n", i);

Output: 340282346638528860000000000000000000000.0

Obviously the number is being rounded, so I'm trying to figure out what exactly is going on.

My questions: Wikipedia states that 3.4028234663852885981170418348451e+38 is the largest number that can be represented in IEEE-754 single precision.

Is the number stored in the floating point register as 0 11111110 11111111111111111111111 (sign, exponent, mantissa), and is it simply not being displayed correctly?

If I write printf("Float %.38f\n", FLT_MAX); I get the same answer. Perhaps the computer I'm using isn't using IEEE-754?

I understand that rounding errors exist, but I don't understand why 340282346638528860000000000000000000000.0 would be the largest floating point number that can be accurately represented. Is the mantissa * exponent arithmetic causing the rounding error? If so, 340282346638528860000000000000000000000.0 would be the largest number that can be represented without error, which I think would make sense. I just need confirmation.
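
To check whether the bit pattern is stored as expected, the float's bytes can be copied into a 32-bit integer and printed in hex; a minimal sketch, assuming an IEEE-754 binary32 float:

#include <float.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = FLT_MAX;
    uint32_t bits;

    /* Copy the raw bytes of the float into an integer of the same size. */
    memcpy(&bits, &f, sizeof bits);

    /* On an IEEE-754 machine this should print 0x7F7FFFFF:
       sign 0, exponent 11111110, mantissa 23 ones. */
    printf("FLT_MAX bits: 0x%08" PRIX32 "\n", bits);
    return 0;
}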

Thanks,

+3




2 answers


It looks like the culprit is printf() (I think because the float is implicitly converted to double when it is passed):

#include <iostream>
#include <limits>

int main()
{
    std::cout.precision( 38 );
    std::cout << std::numeric_limits<float>::max() << std::endl;
}

      



Output:

3.4028234663852885981170418348451692544e+38
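
For comparison, C's hexadecimal floating-point conversion %a shows the stored value exactly, independent of decimal rounding; a small sketch:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* %a prints the exact binary value in hexadecimal;
       FLT_MAX is (2 - 2^-23) * 2^127, typically shown as 0x1.fffffep+127. */
    printf("%a\n", FLT_MAX);
    return 0;
}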

      

+4




The largest finite C float, as an IEEE-754 binary32, is

340282346638528859811704183484516925440.0

printf("%.1f", FLT_MAX) is not required to print exactly to 38+ significant digits, so output like the following is not unexpected:

340282346638528860000000000000000000000.0

printf() will accurately print a floating point value to at least DECIMAL_DIG significant digits. DECIMAL_DIG is not less than 10. If more significance than DECIMAL_DIG is specified, a conforming printf() may round the result at some point. C11dr §7.21.6.1 6 goes into detail.
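
To see the guaranteed-accurate digit count in practice, here is a minimal sketch that prints FLT_MAX with DECIMAL_DIG significant digits (DECIMAL_DIG is defined in <float.h>):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* %e takes digits after the decimal point, so DECIMAL_DIG - 1
       yields DECIMAL_DIG significant digits in total. */
    printf("DECIMAL_DIG = %d\n", DECIMAL_DIG);
    printf("%.*e\n", DECIMAL_DIG - 1, FLT_MAX);
    return 0;
}

With a DECIMAL_DIG of 17, for example, this prints 3.4028234663852886e+38.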

+3








