The symbol addresses reported by objdump no longer match the runtime addresses

In previous versions of GCC, the symbol addresses reported by objdump matched the addresses used during actual execution of the code. For example:

$ cat example.c
#include <stdio.h>

int g_someGlobal = 0;

int main()
{
    printf("%d\n", g_someGlobal);
    return 0;
}

$ gcc-6 -v
...
gcc version 6.1.1 20160802 (Debian 6.1.1-11)


$ gcc-6 -O0 -g -o example example.c

$ objdump -x example | grep Global
...
080496f4 g     O .bss       00000004              g_someGlobal
...


Indeed, when running the binary, the actual runtime address of the symbol is the same as the one reported by objdump:

$ gdb ./example
...
(gdb) start
Temporary breakpoint 1, main () at example.c:10
10          printf("%d\n", g_someGlobal);

(gdb) p/x &g_someGlobal
$1 = 0x80496f4


Unfortunately, when I repeat the same sequence of commands on the recently released Debian Stretch, this happens instead:

$ gcc-6 -v
...
gcc version 6.3.0 20170415 (Debian 6.3.0-14)


$ gcc-6 -O0 -g -o example example.c

$ objdump -x example | grep Global
00002020 g     O .bss   00000004              g_someGlobal


Now the reported symbol address is a much smaller value, which ...

$ gdb ./example
...
(gdb) start
...
Temporary breakpoint 1, main () at example.c:7
7               printf("%d\n", g_someGlobal);
(gdb) p/x &g_someGlobal
$1 = 0x80002020


... is no longer the same as the one used at runtime.

Am I making a mistake here, perhaps in the way I am using the tools? If not, what is the reason for this change?

Regardless - in theory there should be a way to obtain the "expected runtime offset" of the .bss segment in which the variable is allocated (objdump already reports which section the variable will be placed in, so the runtime location could be computed by adding the variable's offset to the address of .bss). In my preliminary attempts, though, I haven't found a way to get this:

$ readelf --sections example | grep bss
[26] .bss         NOBITS     0000201c 00101c 000008 00  WA  0   0  4


It doesn't seem to report the "offset" of 0x80000000 that apparently applies to the .bss-hosted variables in this example.

(Even if that is a "magic constant" of this new runtime, does it also apply to .data-hosted variables? And to be honest, I hate magic values - previously, whatever objdump -x reported was exact, regardless of where the symbols lived...)
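For illustration, a minimal sketch of the calculation described above (the shell arithmetic is my own addition; the values are taken from the outputs shown):

$ # symbol value reported by objdump: 0x2020; .bss section address from readelf: 0x201c
$ printf '0x%x\n' $(( 0x2020 - 0x201c ))      # offset of g_someGlobal within .bss
0x4
$ printf '0x%x\n' $(( 0x80000000 + 0x2020 ))  # reproduces the runtime address, but only given the mysterious 0x80000000
0x80002020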

Any information on how to solve this would be appreciated. Ideally, I would like to reproduce the old objdump -x behavior - that is, statically (NOT at runtime) obtain the runtime address of a symbol from the ELF that hosts it.

UPDATE: I built my own GCC 7.1.0 from source, and this is no longer reproducible with it. This may have been a regression in GCC 6.3 (the version packaged in Debian Stretch)...

1 answer


The reason is that the Debian gcc package is built with --enable-default-pie. In a PIE executable, the ELF segments can be loaded at an arbitrary (but suitably aligned) base address, usually chosen at random by the loader. The symbol addresses you see in the ELF file are offsets from the base address at which it gets loaded, not absolute virtual addresses.
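One way to observe this at runtime is to ask gdb for the process mappings and compare the executable's load base with the value from objdump. A sketch, with the mapping output abbreviated (the addresses are the ones from the transcripts above; with address space randomization enabled, the base will normally change from run to run):

$ gdb ./example
(gdb) start
...
(gdb) info proc mappings
...
0x80000000 ...  ./example     <- load base of the executable
...
(gdb) p/x &g_someGlobal
$1 = 0x80002020               <- 0x80000000 (load base) + 0x2020 (offset from the ELF file)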



If you don't want or need PIE, you can add -no-pie to the link command line to get link-time-bound addresses, as you're used to.
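For instance, relinking the example with that flag should bring back absolute addresses in the symbol table (a sketch; -no-pie is accepted by the gcc 6 driver, and the exact addresses depend on the default linker script):

$ gcc-6 -O0 -g -no-pie -o example example.c
$ objdump -x example | grep Global
...

With -no-pie, the address that objdump -x reports for g_someGlobal should again be the same one gdb shows at runtime.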
