Virtual destructor adds code size bloat
I'm working with C++ on a Cortex-M3 ARM with the latest available arm-none-eabi-gcc 6.3. I made a dummy class and created a global object of it:
class B
{};
class A : B
{
public:
A()
{
RCC->APB2ENR |= RCC_APB2ENR_IOPCEN;
GPIOC->CRH = 1;
GPIOC->ODR |= 1<<8;
}
virtual void foo()
{}
virtual ~A()
{}
};
A a;
int main(void) { while(1); }
This code compiles to 1620 bytes. If I remove the virtual dtor it compiles to 1304 bytes. That's a tangible difference.
I went and looked at the assembly and the .map file, and saw that many runtime functions get linked into my binary when I use a virtual dtor: malloc, free, __static_initialization_and_destruction, etc.
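My understanding so far: a virtual destructor makes the compiler emit a "deleting destructor" vtable entry, which references the global `operator delete` even though nothing in the program ever calls `delete`, and in newlib that drags in free (and with it malloc). One workaround I've seen suggested is to provide trivial `operator delete` stubs; a sketch (only safe under the assumption that nothing in the program ever really heap-deletes one of these objects):

```cpp
#include <cstddef>

// Hypothetical no-op replacements for the global operator delete.
// The vtable's deleting-destructor entry references operator delete
// even if `delete` is never called, so providing trivial definitions
// keeps the linker from pulling in newlib's free/malloc machinery.
// NOT safe if anything in the program actually deletes on the heap.
void operator delete(void* ptr) noexcept { (void)ptr; }
void operator delete(void* ptr, std::size_t size) noexcept {
    (void)ptr;
    (void)size;
}
```

With `-Wl,--gc-sections` this should let the linker drop the allocator sections, since nothing references them any more.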
Strange thing: I don't see where they are ever called from. main is called like this:
8000260: d3f9 bcc.n 8000256 <FillZerobss>
8000262: f000 f89f bl 80003a4 <SystemInit>
8000266: f000 f957 bl 8000518 <__libc_init_array>
800026a: f000 f865 bl 8000338 <main>
800026e: 4770 bx lr
After returning from main (which never happens, by the way), execution just sits on that bx lr, jumping back to the same address forever.
So I don't see where static objects would ever be destroyed. Why isn't all that destruction machinery removed as unreachable code?
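For context on where constructors (and the destructor registrations) actually run: newlib's `__libc_init_array` walks the function-pointer tables that the linker script gathers into `.preinit_array`/`.init_array` and calls each entry; C++ global constructors, and with them the `__cxa_atexit` calls that register destructors, live in those tables. A simplified, host-runnable model (hypothetical local tables stand in for the real linker-provided symbols `__init_array_start`/`__init_array_end`):

```cpp
#include <cstddef>

// Hypothetical stand-ins for the linker-provided section boundaries.
static int ctor_calls = 0;
static void ctor_a(void) { ++ctor_calls; }
static void ctor_b(void) { ++ctor_calls; }

static void (*init_array[])(void) = { ctor_a, ctor_b };

// Model of newlib's __libc_init_array: call every function pointer
// in the table, in order, before main() is entered.
static void run_init_array(void) {
    const std::size_t n = sizeof(init_array) / sizeof(init_array[0]);
    for (std::size_t i = 0; i < n; ++i)
        init_array[i]();
}
```

So the constructors run before main; the matching destructor list is only ever consumed by exit(), which this firmware never reaches.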
I compile like this:
arm-none-eabi-g++ -c -fmessage-length=0 -mcpu=cortex-m3 -mthumb -fdata-sections -ffunction-sections -fno-rtti -fno-exceptions -fno-threadsafe-statics
and link like this:
arm-none-eabi-g++ -mcpu=cortex-m3 -mthumb --specs=nosys.specs --specs=nano.specs -Wl,--gc-sections -T "${ProjDirPath}/src/Startup/STM32F100XB_FLASH.ld" -ffreestanding
I tried adding -fno-use-cxa-atexit, and also providing a dummy __cxa_atexit, and each only makes the binary a little smaller (which is even weirder than no effect at all).
Is there a way to completely disable the destruction of static objects?
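For reference, the dummy `__cxa_atexit` I mean looks roughly like this (a sketch: if registration is a no-op, no destructor list is ever built, so on the target the supporting allocator code has nothing to be referenced by):

```cpp
// Hypothetical no-op stub: swallow every destructor registration.
// Global constructors still run normally; their destructors are
// simply never recorded, so exit-time teardown never touches them.
extern "C" int __cxa_atexit(void (*destructor)(void*), void* arg, void* dso) {
    (void)destructor;
    (void)arg;
    (void)dso;
    return 0;  // report success so static initialization proceeds
}

// Example global with a non-trivial destructor: its destructor would
// normally be registered via __cxa_atexit during construction.
struct Flag {
    bool constructed = false;
    Flag() { constructed = true; }
    ~Flag() {}  // never runs once __cxa_atexit is a no-op
};
Flag g_flag;
```

This is harmless in firmware that never returns from main, since exit() is unreachable anyway.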
UPDATE:
- adding -O makes the code smaller, but there is still a difference with/without the virtual dtor (1244 vs 1040 bytes), so the question remains.
- here's the compiler output with -S: with dtor, without dtor