Why does this decimal show 8 decimal places on ToString()?

I have a variable of type decimal whose value is 1.0. I am storing it in a column of a SQL Server 2012 table whose type is decimal(10, 8).

After retrieving the value, I see that it is 1, but when I call ToString(), the return value is "1.00000000" (see below).

I understand that 8 decimal places correspond to the data type in the database. However, there are no attributes or anything else on the properties generated by Entity Framework that would give it this behavior, so I have no idea where it comes from.

Here are some tests I ran in the Immediate window:

myDecimal
1
myDecimal.ToString()
"1.00000000"
myDecimal == 1
true
myDecimal == 1.0m
true

      

As you can see from the last two tests, this is not a floating-point error either (not that I expected one, since decimal is a fixed-point type, but I had to try it because I was running out of ideas).

Any idea why ToString() on this decimal outputs a string with 8 decimal places?

Edit: see the test below for comparison.

1m.ToString()
"1"

      



4 answers


The reason is that the decimal type is not normalized. There are multiple representations of the same number, and they are rendered as different strings.

This is not a special property of your database type; it is how decimal itself works. There is no special DataAnnotation or anything else attached to the variable.

(1m).ToString() == "1"
(1.00000000m).ToString() == "1.00000000"
((1m)==(1.00000000m)) == true

      

For a given double there is only one valid representation, i.e. one combination of mantissa * 2^exponent.

For a decimal, however, there are many valid representations of mantissa * 10^exponent. Each one represents the same number, but the extra information carried by the choice of representation is used to select the default number of trailing digits when converting the decimal to a string. The exact behavior is not well documented, and I have not found any information on what exactly happens to the exponent when decimals are added or multiplied, but the effect it has on ToString() is easy to test.
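For example, a quick check in the Immediate window (the exact scale rules of decimal arithmetic are an implementation detail, so treat these outputs as observed on current .NET rather than guaranteed):

(1.0m + 1.00m).ToString() == "2.00"            // addition keeps the larger scale
(1.0m * 1.00m).ToString() == "1.000"           // multiplication adds the scales
(1.00000000m * 1m).ToString() == "1.00000000"  // the original 8-digit scale survives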

The downside is that Equals() and GetHashCode() are more complex than they would be for a normalized number format, and there have been subtle bugs in the implementation: C# decimals generate unequal hash values?



This article by Jon Skeet goes into a bit more detail:

A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to think of the decimal as three 32-bit integers representing the mantissa, and one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the usual way, with the bit set (1) for negative numbers), and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal), which returns an array of 4 ints. [...]

The decimal type is not normalized - it remembers how many decimal digits it has (by keeping the exponent where possible), and on formatting, zero may be counted as a significant decimal digit.

You can check that the two decimals you have are not bitwise identical by comparing the values returned by decimal.GetBits(), that is:

decimal.GetBits(1m) == {int[4]}
    [0]: 1
    [1]: 0
    [2]: 0
    [3]: 0

decimal.GetBits(1.00000000m) == {int[4]}
    [0]: 100000000
    [1]: 0
    [2]: 0
    [3]: 524288
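The only element that differs is the last one, which packs the sign and the exponent as described above. A small illustrative snippet to pull the scale out of it:

int[] bits = decimal.GetBits(1.00000000m);
int scale = (bits[3] >> 16) & 0xFF;                            // 524288 == 0x00080000, so scale == 8
bool isNegative = (bits[3] & unchecked((int)0x80000000)) != 0; // sign bit, false here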

      

It might be tempting to rely on this behavior to format your decimals, but I would recommend always choosing the precision explicitly when converting to a string, to avoid confusion and unexpected surprises, for example if the number is multiplied by some factor beforehand.
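For instance, a standard format string pins the number of decimal digits no matter what scale the value happens to carry (the outputs below assume an English-style current culture):

myDecimal.ToString("F8") == "1.00000000"   // always 8 decimal places
1m.ToString("F8")        == "1.00000000"   // same output even though 1m has scale 0
myDecimal.ToString("F2") == "1.00"         // always 2 decimal places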



When Entity Framework retrieves values from the query result, it uses your EDMX or data annotation definition to find out what precision to give the decimal.

Since you used decimal(10, 8), Entity Framework will materialize the value as 1.00000000. The ToString implementation of decimal takes this precision into account and outputs all the zeros, since they are considered significant.
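For reference, this is roughly how the same precision would be declared on the model side in a code-first mapping; the sketch below assumes the EF6 fluent API and uses made-up entity and property names (a database-first model stores the equivalent Precision/Scale attributes in the EDMX, as shown in another answer below):

using System.Data.Entity;

public class Measurement
{
    public int Id { get; set; }
    public decimal Value { get; set; }   // read back as e.g. 1.00000000 from a decimal(10, 8) column
}

public class MyContext : DbContext
{
    public DbSet<Measurement> Measurements { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Maps Measurement.Value to decimal(10, 8) in SQL Server.
        modelBuilder.Entity<Measurement>()
                    .Property(m => m.Value)
                    .HasPrecision(10, 8);
    }
}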



The type used is still just decimal. You define the precision of a decimal by how you write its value: 1.000m is more precise than 1.0m. This is how decimal works and is (briefly) mentioned here.

ToString does not know that you do not consider the zeros significant until you tell it so. A zero is still a value.
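A few ways of telling it, using ordinary format strings (nothing Entity Framework specific; the last line relies on the rule that the general format with an explicit precision drops trailing zeros):

myDecimal.ToString("0.00") == "1.00"   // exactly two decimals are treated as significant
myDecimal.ToString("0.##") == "1"      // trailing zeros are dropped
myDecimal.ToString("G29")  == "1"      // common idiom for trimming trailing zeros from a decimal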



If you are using database first, Entity Framework saves the column definition in the edmx file in XML format, e.g.:

<Property Name="ColumnName" Type="Decimal" Precision="8" Scale="4" Nullable="false" />

      

The edmx file provides information about the entity in the same way a DataAnnotation would. In your case the column is mapped with a scale of 8, so when you call ToString() on that property, the value is formatted accordingly.

You can format the resulting string with any of the standard numeric format strings (http://msdn.microsoft.com/en-us/library/dwhawy9k(v=vs.110).aspx) or with a custom numeric format string (http://msdn.microsoft.com/en-us/library/0c899ak8(v=vs.110).aspx).

For example:

myDecimal.ToString("N") //1

      



You are correct: decimal(10, 8) tells the DB to send back 8 decimal places.

If you assign the result to a numeric type and compare it as a number, you simply get the number itself, without the padding.

When you call ToString() on the materialized value, it outputs exactly what was sent back from the SQL Server, in this case 1 followed by 8 decimal zeros.

You can confirm this by querying the db server directly.
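A minimal sketch of that check from C# using plain ADO.NET (the connection string, table, and column names are made up for illustration); the raw value already carries the column's scale before Entity Framework is involved at all:

using System;
using System.Data.SqlClient;

class CheckScale
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT TOP 1 MyValue FROM dbo.MyTable", connection))
        {
            connection.Open();
            decimal value = (decimal)command.ExecuteScalar();   // column is decimal(10, 8)
            Console.WriteLine(value.ToString());                // prints "1.00000000"
        }
    }
}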







