Strange memory behavior with std::map and shared_ptr

The code below causes strange memory behavior on my Debian machine. Even after the maps are cleared, htop shows that the program is still using a lot of memory, which makes me think there is a memory leak. The strange thing is that it only appears under certain circumstances.

#include <map>
#include <iostream>
#include <memory>
#include <cstdlib>   // atoi
#include <cstdint>   // uint64_t


int main(int argc, char** argv)
{
    if (argc != 2)
    {
        std::cout << "Usage: " << argv[0] << " <1|0> " << std::endl;
        std::cout << "1 to insert in the second map and see the problem "
                "and 0 to not insert" << std::endl;
        return 0;
    }

    bool insertion = atoi(argv[1]);
    std::map<uint64_t, std::shared_ptr<std::string> > mapStd;
    std::map<uint64_t, size_t> counterToSize;
    size_t dataSize = 1024*1024;
    uint64_t counter = 0;

    while(counter < 10000)
    {
        std::shared_ptr<std::string> stringPtr =
                std::make_shared<std::string>(dataSize, 'a');
        mapStd[counter] = stringPtr;

        if (insertion)
        {
            counterToSize[counter] = dataSize;
        }
        if (counter > 500)
        {
            mapStd.erase(mapStd.begin());
        }
        std::cout << "\rInserted chunk " << counter << std::flush;

        counter++;
    }

    std::cout << std::endl << "Press ENTER to delete the maps" << std::endl;
    char a;
    std::cin.get(a); // wait for ENTER to be pressed

    mapStd.clear();   // clear both maps
    counterToSize.clear();

    std::cout << "Press ENTER to exit the program" << std::endl;

    std::cin.get(a); // wait for ENTER to be pressed
    return 0;
}


Explanation:

The code creates two maps on the stack (but the problem is the same if they are created on the heap). Then it inserts strings held by std::shared_ptr into the first map. Each string is 1 MB in size. Once 500 strings have been inserted, the oldest entry is erased for every new insert, so the total memory held by the map stays around 500 MB. After 10,000 strings have been inserted, the program waits for the user to hit ENTER. If you run the program and pass 1 as the first argument, then for each insert into the first map an insert is also made into the second map. If the first argument is 0, the second map is not used. After ENTER is pressed the first time, both maps are cleared. The program then waits for ENTER again and exits.

Here are the facts:

  • On my 64-bit Debian 3.2.54-2, after pressing ENTER (thus after the maps are cleared) and when the program is started with 1 as the first argument (thus with an insert into the second map), htop indicates that the program STILL USES 500 MB OF MEMORY! If the program is started with 0 as the first argument, the memory is properly released.

  • This machine uses g++ 4.7.2 and libstdc++.so.6.0.17. I've tried with g++ 4.8.2 and libstdc++.so.6.0.18: same problem.

  • I have tried 64-bit Fedora 21 with g++ 4.9.2 and libstdc++.so.6.0.20: same problem.
  • I've tried 32-bit Ubuntu 14.04 with g++ 4.8.2 and libstdc++.so.6.0.19: the problem does NOT appear!
  • I have tried 32-bit Debian 3.2.54-2 with g++ 4.7.2 and libstdc++.so.6.0.17: the problem does NOT appear!
  • I tried 64-bit Windows: the problem does NOT appear!
  • On a machine where the problem exists, if you invert the lines where the maps are cleared (i.e. if you clear the std::map<uint64_t, size_t> first), the problem goes away!

Does anyone have an explanation for all this?

1 answer


I recommend looking here and then here. Basically, libc malloc uses mmap for "large" allocations (> 128k) and brk/free lists for small allocations. When one of the large allocations is freed, malloc adjusts the threshold at which it will use mmap, but only if the size is below a maximum (defined in that first link). In the 32-bit case, your strings are over that maximum, so malloc keeps using mmap/munmap for the large allocations, while the smaller map nodes are served from memory obtained from the system with sbrk. This is why you don't see the "problem" on 32-bit systems.
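
For illustration, here is a minimal sketch (assuming glibc, where mallopt() and M_MMAP_THRESHOLD are available): setting the threshold explicitly disables the dynamic adjustment described above, so every 1 MB allocation is served by mmap and handed back by munmap the moment it is freed:

#include <malloc.h>   // mallopt, M_MMAP_THRESHOLD (glibc-specific)
#include <cstdlib>

int main()
{
    // Fix the threshold at 128 KiB; an explicit mallopt() disables the
    // dynamic adjustment, so allocations above this size always go
    // through mmap/munmap.
    mallopt(M_MMAP_THRESHOLD, 128 * 1024);

    void* big = std::malloc(1024 * 1024); // above threshold: served by mmap
    std::free(big);                       // munmap'ed: RSS drops immediately
    return 0;
}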

The other part concerns fragmentation, and when malloc tries to coalesce memory and return it to the system. By default, free simply puts small blocks on a free list so they are ready for the next request. Only if a large enough free block sits at the top of the heap will it try to return memory to the system; see the comment here. The threshold is 64K.
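
As a sketch of that knob (glibc-specific again, and the 32 KiB value below is purely illustrative), the trim threshold can be lowered with mallopt() so that a smaller free block at the top of the heap already causes pages to be returned to the kernel:

#include <malloc.h>   // mallopt, M_TRIM_THRESHOLD (glibc-specific)

int main()
{
    // The default trim threshold is 128 KiB; lowering it makes smaller
    // free blocks at the top of the heap trigger a trim.
    mallopt(M_TRIM_THRESHOLD, 32 * 1024);

    // ... run the allocation-heavy code from the question here ...
    return 0;
}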

Your allocation pattern, in the case where you pass 1, probably leaves an element of the counterToSize map near the top of the heap, preventing the pages from being released when the last of the strings is freed. The frees of the individual objects inside the counterToSize map are not large enough to trigger the thresholds on their own.

If you swap the order of the .clear() calls, you will find that the memory is released as you expected. Likewise, if you allocate a large chunk of memory and free it immediately after the clears, that will trigger the release. (Large in this case only needs to be greater than 128 bytes, the maximum size served by the fastbins; frees of that size or less are simply put on a list for the next request.)
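
A sketch of that trick applied to the program from the question (the 4096-byte size is arbitrary; anything above the 128-byte fastbin limit should do), dropped in right after the two clear() calls:

    mapStd.clear();   // clear both maps
    counterToSize.clear();

    // Illustrative trigger: one allocation above the fastbin limit, freed
    // immediately. This free is big enough to take the consolidation path
    // and can hand the now-empty pages back to the system.
    void* trigger = std::malloc(4096);
    std::free(trigger);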

Hope this was clear. Basically, this is not really a problem. You hold a bunch of pages, nothing on them is in use, but the last free that could have let them go was too small to take that code path. The next time you allocate something, it will be carved out of the memory you already hold (you can run the whole loop again without the memory growing).

Oh, and you can call malloc_trim() manually to force the coalescing/cleanup if you really need the pages back at that point.
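
A minimal sketch of that (malloc_trim() is glibc-specific, declared in <malloc.h>), again placed right after the clears in the program above:

#include <malloc.h>   // at the top of the file: malloc_trim (glibc-specific)

    // ... right after mapStd.clear() and counterToSize.clear():
    // A pad of 0 asks malloc to release as much as possible; malloc_trim()
    // returns 1 if any memory was actually handed back to the system.
    if (malloc_trim(0))
    {
        std::cout << "Pages returned to the OS" << std::endl;
    }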
