Trying to assign a large number at compile time

The following two declarations should be equivalent. However, MaxValues1 will not compile, failing with "The operation overflows at compile time in checked mode". Can someone explain what the compiler is doing, and how I can get around it without using a hardcoded value as in MaxValues2?

public const ulong MaxValues1 = 0xFFFF * 0xFFFF * 0xFFFF;

public const ulong MaxValues2 = 0xFFFD0002FFFF;

      

+3




4 answers


To make a literal unsigned, add the suffix u; to make it long, add the suffix l. I.e., you need ul.

If you really want the overflow behavior, you can add unchecked to get unchecked(0xFFFF * 0xFFFF * 0xFFFF), but that is most likely not what you want. You get an overflow because the literals are interpreted as Int32, not as ulong, and 0xFFFF * 0xFFFF * 0xFFFF does not fit in a 32-bit integer, since it is approximately 2^48.
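For illustration, here is a minimal sketch (hypothetical class and constant names) contrasting what unchecked would give you with the intended 64-bit computation; the wrapped constant ends up as just the low 32 bits of the true product:

// Hypothetical demo: contrasts the unchecked 32-bit wrap-around
// with the intended 64-bit multiplication.
public static class OverflowDemo
{
    // Folded as int arithmetic, wrapping modulo 2^32; the constant is the
    // low 32 bits of the true product, i.e. 0x2FFFF (196607).
    public const ulong Wrapped = unchecked(0xFFFF * 0xFFFF * 0xFFFF);

    // With ul suffixes the whole multiplication is done in 64 bits,
    // giving the full value 0xFFFD0002FFFF.
    public const ulong Full = 0xFFFFul * 0xFFFFul * 0xFFFFul;
}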



public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;

      

+4




By default, integer literals are of type int. You can add the UL suffix to make them ulong literals.



public const ulong MaxValues1 = 0xFFFFUL * 0xFFFFUL * 0xFFFFUL;

public const ulong MaxValues2 = 0xFFFD0002FFFFUL;

      

+2




I think it actually isn't a ulong until you assign it at the end; try:

public const ulong MaxValues1 = (ulong)0xFFFF * (ulong)0xFFFF * (ulong)0xFFFF;

      

i.e. in MaxValues1 you multiply three 32-bit ints together, which overflows because the result is still implied to be a 32-bit int. Casting changes the operation to multiplying three ulongs together, which does not overflow because the result is a ulong.

(ulong)0xFFFF * 0xFFFF * 0xFFFF;

0xFFFF * (ulong)0xFFFF * 0xFFFF;

also work, since the result type is determined by the largest operand type, but

0xFFFF * 0xFFFF * (ulong)0xFFFF;

won't work, since the first two are multiplied as ints and overflow.
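To make the point concrete, here is a compilable sketch (hypothetical class and constant names) of the three cast placements, with the one that fails left commented out:

// Hypothetical demo: cast placement matters because * is left-associative,
// so operands are promoted one multiplication at a time.
public static class CastPlacement
{
    // (ulong * int) * int: both multiplications happen as ulong -- compiles.
    public const ulong A = (ulong)0xFFFF * 0xFFFF * 0xFFFF;

    // (int * ulong) * int: the first multiplication is already ulong -- compiles.
    public const ulong B = 0xFFFF * (ulong)0xFFFF * 0xFFFF;

    // (int * int) * ulong: 0xFFFF * 0xFFFF is evaluated as int first and
    // overflows at compile time, so the line below would not compile:
    // public const ulong C = 0xFFFF * 0xFFFF * (ulong)0xFFFF;
}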

+1




Add the numeric suffix UL to each of the numbers. Otherwise, C# treats them as Int32.

C# - numeric suffixes
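For reference, a small sketch (hypothetical names) of the common C# numeric literal suffixes; suffixes are case-insensitive, though lowercase l is best avoided since it looks like the digit 1:

public static class SuffixExamples
{
    public const uint    UnsignedInt  = 1u;    // u  -> uint
    public const long    LongValue    = 1L;    // L  -> long
    public const ulong   UnsignedLong = 1ul;   // ul -> ulong
    public const float   FloatValue   = 1.5f;  // f  -> float
    public const double  DoubleValue  = 1.5d;  // d  -> double
    public const decimal DecimalValue = 1.5m;  // m  -> decimal
}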

+1








