How can I get a type's precision?

Is there a way to get the maximum precision for a type or variable?

That way you wouldn't need to use magic constants. I assumed it would be something like float.precision, but I can't find anything of the sort.

For float it would return 7, for double 15, and so on.

+3




1 answer


There is no built-in way to detect this. The underlying problem is that the CLI spec does not specify how float or double are encoded. It allows a CLR implementation to use its own floating point representation, which can have higher precision than the predefined "float32" and "float64" types.

This is not hypothetical: Intel/AMD processors really do store floating point values with greater precision. Their internal format is 80 bits wide. That design decision turned out to be an unmitigated disaster. It leads to seemingly random changes in calculation results depending on whether the code runs in a Release or Debug build, and very small code changes can produce significantly different results when a calculation loses many significant digits. The coprocessor was redesigned a while ago to fix this, but 32-bit code is still stuck with the old design. Microsoft can't fix the x86 jitter either; switching it to the new coprocessor design would break too many existing programs.



For all practical purposes you can hard-code the precision as 7 and 15 digits. It is cast in stone by the IEEE-754 standard, which dictates the encoding of single and double precision floating point values, and that isn't going to change for the next 50 years. The one practical exception is a .NET Micro Framework application running on a processor without hardware floating point support.
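
For illustration, here is a minimal sketch of hard-coding those digits behind a small helper. The class and method names (FloatingPointInfo, SignificantDigits) are made up for this example; the constants themselves come straight from IEEE-754:

    using System;

    static class FloatingPointInfo
    {
        // Hard-coded significant decimal digits per IEEE-754; the framework
        // does not expose these values anywhere.
        public const int FloatDigits = 7;    // single precision ("float32")
        public const int DoubleDigits = 15;  // double precision ("float64")

        // Hypothetical helper mapping a type to its digit count.
        public static int SignificantDigits(Type t)
        {
            if (t == typeof(float))  return FloatDigits;
            if (t == typeof(double)) return DoubleDigits;
            throw new ArgumentException("Not a float or double", nameof(t));
        }
    }

Usage would then be FloatingPointInfo.SignificantDigits(typeof(double)), which returns 15.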

Note another reason this precision isn't exposed: it promises far too much. When you compute, say, 0.1234567f - 0.1234568f, only one significant digit is left in the result, far less than what the nominal precision suggests.
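
A small sketch of that cancellation effect; the exact digits printed depend on rounding, but the result is roughly -1E-07 with only about one meaningful digit left:

    using System;

    class CancellationDemo
    {
        static void Main()
        {
            // Two floats that agree in their first six significant digits.
            float a = 0.1234567f;
            float b = 0.1234568f;

            // The subtraction cancels the shared digits, so far fewer than
            // 7 significant digits of the true difference survive.
            float diff = a - b;
            Console.WriteLine(diff.ToString("R"));   // approximately -1E-07
        }
    }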

+4

