Why do we define INT_MIN as -INT_MAX-1?
Because 2147483648 cannot be represented in an int, it is a constant of type long (on a system with a 32-bit int and a 64-bit long; on a system where long is also 32 bits, it has type long long). So -2147483648 has type long, not int.
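This is also why <limits.h> typically defines INT_MIN in terms of INT_MAX rather than spelling out the literal value. A sketch of what such a definition commonly looks like (illustrative only, the exact text varies between implementations):

#define INT_MAX  2147483647        /* fits in int, so the constant has type int        */
#define INT_MIN  (-INT_MAX - 1)    /* the value -2147483648, computed entirely in int  */

/* By contrast, #define INT_MIN (-2147483648) would expand to an expression of
   type long (or long long), because the constant 2147483648 does not fit in int. */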
Remember that in C, an unsuffixed decimal integer constant has the first of the types int, long or long long in which its value can be represented.
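A minimal sketch of that rule in action, using C11's _Generic (assuming a platform with a 32-bit int and a 64-bit long, it should print "int" then "long"):

#include <stdio.h>

#define TYPE_OF(x) _Generic((x),       \
    int:       "int",                  \
    long:      "long",                 \
    long long: "long long",            \
    default:   "something else")

int main(void)
{
    puts(TYPE_OF(2147483647));   /* fits in int   -> "int"                    */
    puts(TYPE_OF(2147483648));   /* needs a wider type -> "long" on this platform */
    return 0;
}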
Also, in C, -2147483648 is not an integer constant; 2147483648 is an integer constant. -2147483648 is an expression formed with the unary - operator applied to the integer constant 2147483648.
EDIT: if you are not convinced that -2147483648 is not of type int (some people in the comments still seem to doubt it), you can try printing this:
printf("%zu %zu\n", sizeof INT_MIN, sizeof -2147483648);
You will most likely end up with:
4 8
on common 32-bit and 64-bit systems.
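For reference, a complete program around that line (a minimal sketch; the 4 8 output assumes a 32-bit int and that the constant 2147483648 lands in a 64-bit long or long long):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof INT_MIN is sizeof(int); sizeof -2147483648 is the size of the
       wider type that the constant 2147483648 was given.                   */
    printf("%zu %zu\n", sizeof INT_MIN, sizeof -2147483648);
    return 0;
}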
Also, to follow up on a comment: I'm talking about the recent C standards; use the c99 or c11 dialects to check this. The c89 rules for decimal integer constants are different: -2147483648 has type unsigned long in c89 (on a system where long is 32 bits). Indeed, in c89 (this differs from c99, see above), an unsuffixed decimal integer constant has type int, long or unsigned long.
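A sketch of the kind of surprise this can cause, assuming a platform where long is 32 bits and the program is compiled with -std=c89:

#include <stdio.h>

int main(void)
{
    /* Under c89 rules, 2147483648 gets type unsigned long here (it fits in
       neither int nor a 32-bit long), so negating it wraps to a large positive
       unsigned value and the comparison is done in unsigned arithmetic.       */
    printf("%d\n", -2147483648 > 0);   /* prints 1 on such a platform */
    return 0;
}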
EDIT2: @WhozCraig added another example (but in C++) to show that -2147483648 is not of type int.
The following example, albeit in C++, drives that point home. It was compiled as a 32-bit build with g++. Notice the type information in the deduced template parameter:
#include <iostream>
#include <climits>

template<typename T>
void foo(T value)
{
    std::cout << __PRETTY_FUNCTION__ << '\n';
    std::cout << value << '\n';
}

int main()
{
    foo(-2147483648);
    foo(INT_MIN);
    return 0;
}
Output
void foo(T) [T = long long]
-2147483648
void foo(T) [T = int]
-2147483648