Hexadecimal in typedef enum in C
I was looking for a reason to use hex values in a typedef enum in C.
I followed this link, but it has two answers: c, obj c enum without tag or id
LearnCocos2D says that "there is no reason to use hex numbers and in particular it makes no sense to start the hex numbers with a through f (10 to 15)."
Sulthan says that "Hexadecimal numbers are commonly used when the integer is a binary mask." I looked up binary masks and realized the technique is used for bitmaps in games, per this link: https://en.wikipedia.org/wiki/Mask_(computing)
If Sulthan is right, kindly help me understand this.
I don't have enough reputation to comment, so I created this as a new question.
For a bitmask, it helps to think of the values in binary, since that is the level at which the bitmask operates. And each enum value usually sets only one bit.
Thus, the enum values are set (in binary) to 00001, 00010, 00100, 01000, 10000, etc.
The same values in decimal form are 1, 2, 4, 8, 16, etc.
And in hexadecimal form they are 0x01, 0x02, 0x04, 0x08, 0x10, etc.
This is really a matter of preference, but since hexadecimal is a power-of-two base, it maps onto binary more directly than decimal does. That makes it somewhat clearer that the values represent bit mask values.
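As a minimal sketch of that pattern (the flag names here are hypothetical, not taken from the question), a bitmask enum written in hex, plus a couple of bitwise operations, might look like this:

#include <stdio.h>

enum {
    flag_a = 0x01,   /* binary 00001 */
    flag_b = 0x02,   /* binary 00010 */
    flag_c = 0x04,   /* binary 00100 */
    flag_d = 0x08,   /* binary 01000 */
    flag_e = 0x10    /* binary 10000 */
};

int main(void)
{
    unsigned int mask = flag_a | flag_c;   /* combine two flags */
    printf("flag_c set? %s\n", (mask & flag_c) ? "yes" : "no");
    return 0;
}

Because each constant occupies exactly one bit, combining them with | and testing them with & never makes one flag interfere with another.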
An example in the linked question (not cited in this question):
enum {
    easySprite = 0x0000000a,
    mediumSprite = 0x0000000b,
    hardSprite = 0x0000000c,
    backButton = 0x0000000d,
    magneticSprite = 0x0000000e,
    magneticSprite2 = 0x0000000f
};
This creates an anonymous enumerated type; you cannot define objects of that type. So the only thing the declaration does is define a set of int constants (enumeration constants are always of type int) and give their values in hexadecimal form.
This is arguably an abuse of the enum construct, which is mainly for creating types, but it is the only real way to define named integer constants without using macros. (const int easySprite = 0x0000000a; does not make easySprite a constant expression.)
The first value is 0xa (decimal 10), and the rest of the values follow in sequence.
This:
enum {
    easySprite = 10,
    mediumSprite = 11,
    hardSprite = 12,
    backButton = 13,
    magneticSprite = 14,
    magneticSprite2 = 15
};
would mean exactly the same thing. So would this, though it is less explicit:
enum {
    easySprite = 10,
    mediumSprite,
    hardSprite,
    backButton,
    magneticSprite,
    magneticSprite2
};
So why use hex? I assume the values easySprite, mediumSprite, et al. are defined in some interface (the related question mentions OpenGL) and that it is convenient to define the values in hex for some reason. It is possible that these values are bitwise ORed with some other values, forming a bit pattern whose lower 4 bits determine the type of the sprite while the other bits carry other information. Writing 0x0000000a rather than the equivalent 0xa is probably meant to make it clear that the value is (part of) a full 32-bit value. You can't be sure without seeing the context.
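Purely as a hypothetical sketch of that guess (the mask and flag names below are invented, not from any known interface), packing a sprite type into the low 4 bits and another flag into a higher bit could look like this:

#include <stdio.h>

enum {
    easySprite = 0x0000000a,
    hardSprite = 0x0000000c
};

enum {
    spriteTypeMask   = 0x0000000f,  /* low 4 bits: sprite type */
    spriteHiddenFlag = 0x00000010   /* an invented flag in a higher bit */
};

int main(void)
{
    unsigned int packed = easySprite | spriteHiddenFlag;  /* combine type and flag */
    printf("type = 0x%x, hidden = %s\n",
           packed & spriteTypeMask,
           (packed & spriteHiddenFlag) ? "yes" : "no");
    return 0;
}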
In any case, there is certainly nothing wrong with using hexadecimal here, and given the context it might be clearer than the equivalent decimal.
(A Google search for "OpenGL magnetSprite" gets only three or so hits, so I doubt this is part of the OpenGL spec. A Google search for mediumSprite brings up a lot of information about soft drinks.)
To complement rmaddy's answer, an alternative notation that makes the bitmask values prominent without hex numbers could be:
typedef enum
{
    // empty bitmask
    empty_mask = 0,
    // bitmask values
    first_bit = 1 << 0,
    second_bit = 1 << 1,
    third_bit = 1 << 2,
    // bitwise combinations
    both_first_and_second = first_bit | second_bit
} My_enum;
The shift operator << removes the need to interpret hexadecimal values, and combinations can be declared after the "core" values.
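As a small usage sketch (My_enum is repeated here so the example stands alone), setting, testing, and clearing bits in such a mask might look like this:

#include <stdio.h>

typedef enum
{
    empty_mask = 0,
    first_bit = 1 << 0,
    second_bit = 1 << 1,
    third_bit = 1 << 2,
    both_first_and_second = first_bit | second_bit
} My_enum;

int main(void)
{
    My_enum flags = empty_mask;

    flags |= first_bit;              /* set one bit */
    flags |= second_bit;             /* set another bit */

    if ((flags & both_first_and_second) == both_first_and_second)
        printf("both the first and second bits are set\n");

    flags &= ~first_bit;             /* clear a bit */
    printf("flags = 0x%x\n", (unsigned)flags);
    return 0;
}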