Couldn't a double represent any number 2^n exactly, without errors? (n is a natural number.)
This code:
double x = 2.0;
for (int i = 1; i < 1024; i += i)
{
    Console.WriteLine(String.Format("2^{0} = {1:F0}", i, x));
    x *= x;
}
Outputs:
2^1 = 2
2^2 = 4
2^4 = 16
2^8 = 256
2^16 = 65536
2^32 = 4294967296
2^64 = 18446744073709600000
2^128 = 340282366920938000000000000000000000000
2^256 = 115792089237316000000000000000000000000000000000000000000000000000000000000000
2^512 = 13407807929942600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
I thought a double was given by the formula: sign * 2^exponent * fraction. To illustrate my situation: set fraction to 0.5 and sign to positive; then, by setting exponent to any value from -1024 to 1023, I can represent any number 2^n whose n falls in that range. What's wrong with this conclusion? Is the formula incomplete?
A double can represent powers of 2 exactly, as the following code shows (it uses Jon Skeet's DoubleConverter class):
Console.WriteLine(String.Format("2^{0} = {1}", i, DoubleConverter.ToExactString(x)));
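The same point can be made without a helper class: a double holding an integral power of two can be converted to a System.Numerics.BigInteger, which preserves the value exactly. A minimal sketch (assumes a .NET runtime with System.Numerics available):

```csharp
using System;
using System.Numerics;

class ExactPowerOfTwo
{
    static void Main()
    {
        // Build 2^64 by repeated doubling; every step is exact in a double,
        // because doubling only increments the exponent field.
        double x = 1.0;
        for (int i = 0; i < 64; i++)
            x *= 2.0;

        // The BigInteger(double) constructor captures the exact integral value.
        BigInteger exact = new BigInteger(x);
        Console.WriteLine(exact); // 18446744073709551616, i.e. 2^64 with no rounding
    }
}
```

Note that what x.ToString("F0") prints depends on the runtime: the classic .NET Framework rounds the output to 15 significant digits (as in the question), while .NET Core 3.0 and later changed the formatting behavior and may print the exact digits.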
As for why the .NET designers chose to have the F0 specifier round the value after the 15 most significant decimal digits: my guess is that displaying the exact value (for example, 18446744073709551616) could suggest that a double is accurate to all of those digits, when in fact it cannot distinguish that value from 18446744073709600000. Displaying a rounded value also matches the exponential notation: 1.84467440737096E+19.
http://msdn.microsoft.com/en-us/library/678hzkk9.aspx
The double keyword denotes a simple type that stores 64-bit floating-point values. The following table shows the precision and approximate range of the double type.
Type: double
Approximate range: ±5.0 × 10^−324 to ±1.7 × 10^308
Precision: 15-16 digits
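The quoted precision can be demonstrated directly: above 2^53 (about 9.007 × 10^15) the gap between adjacent representable doubles grows beyond 1, so adding 1 is lost to rounding. A minimal sketch:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // 9e15 is below 2^53: adjacent doubles here are exactly 1 apart,
        // so adding 1 produces a different value.
        Console.WriteLine(9e15 + 1 == 9e15); // False

        // 1e16 is above 2^53: adjacent doubles here are 2 apart,
        // so adding 1 rounds back to the same value.
        Console.WriteLine(1e16 + 1 == 1e16); // True
    }
}
```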
The answer to your question is that an IEEE 754 double-precision number is a 64-bit value:
- 1 bit for the sign
- 11 bits for the exponent
- 52 bits for the significand (though, thanks to an implicit leading bit, the significand effectively carries 53 bits of precision)
It can therefore represent at most 2^64 distinct values, the same as a 64-bit integer (actually fewer, because of things like NaN, positive and negative zero, etc.).
Its range, however, is much larger than that of a 64-bit integer: it can represent decimal values from roughly 10^−308 to 10^+308, albeit with only 15 to 17 decimal digits of precision.
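The three fields can be inspected with BitConverter.DoubleToInt64Bits. For 2.0, the stored exponent is 1024 (the bias is 1023, so the real exponent is 1) and the significand bits are all zero, because the leading 1 is implicit. A minimal sketch:

```csharp
using System;

class DoubleBits
{
    static void Main()
    {
        long bits = BitConverter.DoubleToInt64Bits(2.0);

        int sign = (int)((bits >> 63) & 1);          // 1 bit
        int exponent = (int)((bits >> 52) & 0x7FF);  // 11 bits, biased by 1023
        long significand = bits & 0xFFFFFFFFFFFFFL;  // 52 stored bits

        // value = (-1)^sign * 2^(exponent - 1023) * (1 + significand / 2^52)
        Console.WriteLine($"sign={sign} exponent={exponent} significand={significand}");
        // For 2.0: sign=0 exponent=1024 significand=0, i.e. 2^1 * 1.0
    }
}
```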
Floating point trades precision for range. That is the compromise.
See the IEEE 754 double-precision binary floating-point format for details.
Better yet, read David Goldberg's 1991 paper, "What Every Computer Scientist Should Know About Floating-Point Arithmetic":
Abstract. Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising because floating-point is ubiquitous in computer systems: almost every language has a floating-point data type; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with examples of how computer system builders can better support floating-point.
David Goldberg. 1991. "What Every Computer Scientist Should Know About Floating-Point Arithmetic." ACM Comput. Surv. 23, 1 (March 1991), 5-48. DOI=10.1145/103162.103163 http://doi.acm.org/10.1145/103162.103163