Working with arrays using unsigned values
I am generating a C function with MATLAB Coder so that I can use it in Max/MSP. Because of my limited C experience, there are some syntactic elements I can't figure out: the use of unsigned constants like 0U or 1U when indexing arrays.
The following example does nothing; I don't think pasting the full code would add anything.
static void function2(const double A[]);

void function1(const double A[49], double B[50])
{
    function2((double *)&A[0U]);
}

static void function2(const double A[])
{
}
Doing some math, MATLAB Coder wrote something like:

    b = 2;
    f[1U] += b;
I don't understand the use of the unsigned suffix here ...
Thank you so much!
For an array a[n], valid indexes are the non-negative values from 0 to n-1. Appending the suffix u to a decimal constant makes no difference for array indexing, but it does guarantee that the constant has at least a minimum width and some unsigned type. MATLAB Coder simply emits every fixed index with the u suffix automatically.
Consider small and large index values on a 32-bit system where unsigned, int, long and size_t are all 32-bit:
aweeu[0u]; // `0u` is 32-bit `unsigned`.
aweenou[0]; // `0` is 32-bit `int`.
abigu[3000000000u]; // `3000000000u` is a 32-bit `unsigned`.
abignou[3000000000]; // `3000000000` is a 64-bit `long long`.
Does this matter? Maybe. Some compilers look at the value first, see that it is within the range of size_t, and do not complain. Others may warn about an index of type long long or even int. With the u suffix, such rare complaints never arise.
The suffix U is clearly not needed here. In certain situations it can be useful for forcing unsigned arithmetic, which has surprising side effects:
if (-1 < 1U) {
    printf("surprise!\n");  /* never printed: -1 is converted to UINT_MAX */
}
In some rare cases, certain type promotions must be avoided. On many current architectures the following hold, and the type of 2147483648 differs from the type of 2147483648U in more than just signedness:
For example, on 32-bit Linux and on 32- and 64-bit Windows:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned) // 4 bytes
On many embedded systems with 16-bit ints:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned long) // 4 bytes
sizeof(32768) == sizeof(long) // 4 bytes
sizeof(32768U) == sizeof(unsigned int) // 2 bytes
Depending on implementation details, array index values can exceed the range of both int and unsigned, and pointer offsets can be larger still. Merely appending U guarantees little.