Strange preprocessor error
I am currently having a very strange error with the preprocessor. I have searched and could not find anything useful, so I would like to post it here. Below is a sample:
#include <stdio.h>
#include <stdint.h>
#define DDR_BASEADDR 0x000000000U + 128*1024*1024
int main()
{
uint32_t* test2 = (uint32_t*) DDR_BASEADDR;
uint32_t whatIsThis = DDR_BASEADDR;
uint32_t* test3 = (uint32_t*) whatIsThis;
printf( "%x %x %x %x\n\r", DDR_BASEADDR, test2, test3, whatIsThis);
return 0;
}
The output of this code should be 0x8000000. However, it yields: 8000000 20000000 8000000 8000000.
I believe the datatype is not the reason, as the problem also shows up if I change uint32_t* test2 = (uint32_t*) DDR_BASEADDR;
to int32_t* test2 = (int32_t*) DDR_BASEADDR;
I am testing this on both an ARM A9 (Zedboard) and an online C++ compiler and get the same result. Thanks for your time and effort.
Thang tran
Your macro expands to:
uint32_t* test2 = (uint32_t*) 0x000000000U + 128*1024*1024;
which is effectively the same as this:
// note the implicit multiply by sizeof at the end
uint32_t* test2 = (uint32_t *) (0x000000000U + 128*1024*1024 * sizeof(*test2));
because test2 is a pointer to uint32_t, so any addition or subtraction on it is scaled by sizeof(uint32_t).
What you want is:
uint32_t* test2 = (uint32_t *) (0x000000000U + 128*1024*1024);
This way the address computation is done first, and then the result is cast to the desired pointer type.
There is nothing strange about this: on this line
uint32_t* test2 = (uint32_t*) DDR_BASEADDR;
the cast to a pointer is applied only to 0x000000000U, and the int value 128*1024*1024 is then added to the result of the cast, that is,
uint32_t* test2 = (uint32_t*) 0x000000000U + 128*1024*1024;
//                ^^^^^^^^^^^^^^^^^^^^^^^^   ^^^^^^^^^^^^^
//                Pointer                     Offset
This means that pointer arithmetic is performed, so the integer offset is multiplied by sizeof(uint32_t), which gives the address 0x20000000.
However, on this line
uint32_t whatIsThis = DDR_BASEADDR;
there is no cast to a pointer, so the addition is done entirely in integer arithmetic. That is why there is no multiplication by sizeof(uint32_t), and the result is 0x8000000.
If you want all four cases to print the same value, 0x20000000, the cast must be inside the macro so that it is part of the pointer expression:
#define DDR_BASEADDR ((uint32_t*)0x000000000U + 128*1024*1024)
and the cast in the declaration of whatIsThis must be to uint32_t:
uint32_t whatIsThis = (uint32_t)DDR_BASEADDR;
However, uint32_t is not a portable type for holding a pointer value. Use uintptr_t instead.
With
#define DDR_BASEADDR 0x000000000U + 128*1024*1024
line of code
uint32_t* test2 = (uint32_t*) DDR_BASEADDR;
expands to
uint32_t* test2 = (uint32_t*) 0x000000000U + 128*1024*1024;
which (because of how pointer arithmetic works) is effectively
uint32_t* test2 = (uint32_t*) (0x000000000U + (sizeof(uint32_t) * (128*1024*1024)));
or
uint32_t* test2 = (uint32_t*) (0x000000000U + (4 * 0x8000000));
This gives you 0x20000000 instead of 0x8000000, of course.
Multiplying by 4 in hex is the same as multiplying by 16 (a shift left by one hex digit, i.e. by 0x10) and then dividing by 4: 0x8000000 -> 0x80000000 -> 0x20000000.
The other lines of code are affected by a missing pair of parentheses () in one form or another.
If you are using
#define DDR_BASEADDR (0x000000000U + 128*1024*1024)
you should be fine, as you have already confirmed.
Using parentheses generously is a very good practice, especially in macros.
Note: you also have to be more careful with printf and use the correct conversion specifiers for the values you want to print (see the comments and other answers). Otherwise the result is Undefined Behaviour! All kinds of unwanted and unexplained things can result from UB. However, UB is not needed to explain the values you get in this particular case. (You got lucky.)