Obj-C: Is it really safe to compare BOOL variables?
I used to think that in the 64-bit Objective-C runtime, BOOL is actually _Bool, i.e. a real boolean type, so it's safe to write code like this:
BOOL a = YES;
BOOL b = NO;
if (a != b) {...}
It seemed to work fine, but today I ran into a problem when using bit-field structs like this:
typedef struct
{
    BOOL flag1 : 1;
} FlagsType;

FlagsType f;
f.flag1 = YES;

BOOL b = YES;

if (f.flag1 != b)
{
    // DOES GET HERE!!!
}
It seems that the BOOL read back from the bit-field is -1, whereas the normal BOOL is 1, so they are not equal!
Note that I am aware of the situation where an arbitrary integer is assigned to a BOOL and thereby becomes a "strange" BOOL that is not safe to compare.
However, in this case both flag1 and b were declared as BOOL and were never assigned anything but BOOL values. What is the problem? Is this a compiler bug?
The bigger question is: is it really safe to compare BOOLs at all, or do I need to write some kind of XOR-style comparison helper? (That would be awful, because boolean comparisons are so ubiquitous ...)
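For illustration, the kind of helper I have in mind would simply normalize both values before comparing; the name boolsDiffer is just made up:

// Hypothetical helper: collapse each value to 0 or 1 before comparing,
// so "truthy" representations such as -1 and 1 compare as equal.
static inline BOOL boolsDiffer(BOOL a, BOOL b)
{
    return (!a) != (!b);
}

// Usage: if (boolsDiffer(f.flag1, b)) { /* only taken for a real difference */ }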
I won't repeat the advice that using a C boolean type solves the problems that can arise with BOOL. That is true (especially here, as you can read below), but most of those problems result from storing something improper into a boolean (C) object. In this case, however, _Bool or unsigned int seems to be the only possible solution (short of avoiding the bit-field altogether). There is a reason for this:
I can't find exact documentation of the new behavior of BOOL in Objective-C, but the behavior you found is somewhere between a bug and an oddity. I expected the new BOOL to behave like _Bool; that is clearly not the case here. (Thanks for finding this out!) Maybe it's for backward compatibility. To tell the whole story:
In C, an object of type int is a signed int. (This is different from char: for that type, the signedness is implementation-defined.)
- int, signed, or signed int
ISO/IEC 9899:TC3, 6.7.2-2
Each of the comma-separated sets designates the same type, [...]
ISO/IEC 9899:TC3, 6.7.2-5
But for historical reasons there is a strange exception:
If an int object is a bit-field, it is implementation-defined whether it is signed int or unsigned int. (This is probably because some processors in the past could not automatically sign-extend a sub-byte integer; treating the bit-field as unsigned is easier, since simply clearing the upper bits is sufficient.)
In clang, the default is signed int. Therefore, just as with full-width integers, int always denotes a signed integer here, even if it has only one bit. A bit-field declared as int member : 1 can only store 0 and -1! (So using int instead is not a solution either.)
Each of the comma-separated sets designates the same type, except that for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int.
ISO/IEC 9899:TC3, 6.7.2-5
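A minimal C sketch of this point (the struct and field names are only examples): with clang's signed default, a one-bit int field reads back as -1, while an explicitly unsigned one-bit field reads back as 1.

#include <stdio.h>

struct Bits
{
    int          s : 1;   // plain int: implementation-defined signedness, signed in clang
    unsigned int u : 1;   // explicitly unsigned
};

int main(void)
{
    struct Bits bits;
    bits.s = 1;   // clang warns that the stored value changes from 1 to -1
    bits.u = 1;

    printf("%d %u\n", bits.s, bits.u);   // typically prints "-1 1" with clang
    return 0;
}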
The C standard says that a boolean bit-field is an integer type and therefore takes part in this signedness rule for integer bit-fields:
A bit-field is interpreted as having a signed or unsigned integer type consisting of the specified number of bits.
ISO/IEC 9899:TC3, 6.7.2.1-9
This is the behavior you discovered. Since this makes no sense for one-bit boolean types, the C standard explicitly states that storing 1 into a boolean bit-field must compare equal to 1 in every case:
If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored.
ISO/IEC 9899:TC3, 6.7.2.1-9
This leads to a strange situation: an implementation may represent a one-bit boolean as {0, -1}, yet it still has to make the stored 1 compare equal to 1. Fine.
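To see the contrast directly, here is a small sketch (assuming a platform where BOOL is a signed character type; the struct is only an example): the _Bool field is covered by the guarantee quoted above, the BOOL field is not.

#import <objc/objc.h>   // BOOL, YES, NO
#include <stdio.h>

struct Flags
{
    BOOL  objcFlag : 1;   // behaves like a plain integer bit-field
    _Bool cFlag    : 1;   // covered by the guarantee in 6.7.2.1-9
};

int main(void)
{
    struct Flags flags;
    flags.objcFlag = YES;
    flags.cFlag    = 1;

    // Where BOOL is a signed char, objcFlag may read back as -1 and the first
    // comparison prints 0; cFlag is guaranteed to compare equal to 1.
    printf("objcFlag == YES: %d\n", flags.objcFlag == YES);
    printf("cFlag    == 1  : %d\n", flags.cFlag == 1);
    return 0;
}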
So the story: a BOOL bit-field behaves like an integer bit-field (which conforms to the standard), but does not get the additional guarantee that _Bool bit-fields have.
I think this is for the sake of legacy code. (In the past, one might have expected -1.)
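As a practical takeaway, two workarounds follow from the above; both are sketches and the names are only illustrative: declare the bit-field with a C boolean type, or normalize both operands before comparing.

#include <stdbool.h>

// Option 1: use a C boolean type for the bit-field, so the
// "shall compare equal to the value stored" guarantee applies.
typedef struct
{
    bool flag1 : 1;
} SafeFlagsType;

// Option 2: keep BOOL, but collapse both operands to 0/1 at the comparison:
//     if ((!f.flag1) != (!b)) { ... }   // not taken when both are "true"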