Preventing memory fragmentation in a polymorphic container

This question requires some explanation, so bear with me; the actual question is summarized at the end :)

I recently read a book that describes memory fragmentation (on the heap), and it got me thinking about my own projects. For example, when using a ptr_container (from Boost) like this

boost::ptr_vector<A> arr;  // (ptr_array is fixed-size; ptr_vector supports push_back)
arr.push_back(new A());    // a1
arr.push_back(new A());    // a2
arr.push_back(new A());    // a3


it will rather quickly lead to memory fragmentation when elements are replaced. For the sake of argument, let's say that the container itself can hold all the pointers we give it. The heap will look something like this:

[a1][a2][a3][a4]...[aN]

When we start replacing objects with instances of a subtype that has a larger size, the situation changes. Let's say we replace all the objects referenced by the odd pointers (a1, a3, ...) with objects of the larger type. The heap then looks like this:

[xx][a2][xx][a4]...[aN][b1][b3]...[bN]

where [xx] stands for unused space and b1 ... bN stands for new objects.

So, I need a container that stores objects (like the STL containers do, for example) but supports polymorphic storage. I know how to implement this kind of container myself, but I would rather use an "expert" library (Boost, STL, ...). To summarize, my question is:

Is there a container that supports (dynamically allocated) polymorphic objects stored in a contiguous memory sequence?

For example, a memory layout might look like this:

[arr_array] = [ptr_lookup_table][storage]
            = [p1, p2, ..., pn][b1, a2, b3, ..., aN]


Thanks for your responses / comments!



3 answers

Understanding memory fragmentation requires some background knowledge about memory allocation, so I need to establish a few concepts first.

Memory allocation

When you use the new operator (which by default typically calls malloc behind the scenes), you are not requesting memory directly from the OS. What (usually) happens is this:

  • you call malloc for, say, 76 bytes; it checks whether it has that much memory available:
    • if not, it requests a page (usually 4 KB) from the OS and prepares it
  • then it serves up the 76 bytes you asked for

Memory fragmentation can occur at two levels:

  • You can run out of virtual address space (not so easy on 64-bit platforms).
  • You can have almost-empty pages that cannot be returned to the OS and yet cannot serve the requests you are making.

In general, since malloc requests pages 4 KB at a time (unless you ask for a bigger chunk, in which case it requests correspondingly more pages), you should never run out of address space. That could happen on a 32-bit machine (limited to 4 GB) with unusually large allocations.

On the other hand, if the malloc implementation is too naive, you can end up with fragmented blocks of memory and therefore reserve much more memory than you are actually using. This is what "memory fragmentation" usually refers to nowadays.

Typical strategy

By naive I mean, for example, your idea of allocating everything contiguously. That is a bad idea, and it is usually not what happens.

Good malloc implementations today use pools instead.

Typically, they will have pools by size:

  • 1 byte
  • 2 bytes
  • 4 bytes
  • ...
  • 512 bytes
  • ...
  • 4 KB and larger requests are handled specially (served directly)

And when you make a request, they will find a pool with the smallest size that can satisfy it, and that pool will serve you.

Since all requests served from a pool have the same size, there is no fragmentation within the pool: any free cell can be used for any incoming request.

So fragmentation?

With such an allocator, you shouldn't see fragmentation as such these days.

However, you can still observe memory holes. Suppose the pool handles objects that are 9 to 16 bytes in size and you allocate 4,000,000 of them. That requires about 16,000 4 KB pages (15,625, to be exact, at 256 objects per page). Now suppose you free all but 16,000 of them, in the unlucky pattern that leaves exactly one object alive on each page. The pages cannot be returned to the OS, since each is still in use; yet you are using only 16 bytes out of each 4 KB, so the space is (for now) almost completely wasted.

Some garbage-collected languages can handle such cases with compaction; in C++, however, moving an object in memory is not possible, because the user has direct control over object addresses.

Magic container

I do not know of such a beast, and I don't see why it would be helpful either.


Don't worry about fragmentation.

Note: "experts" may want to write their own pool allocation mechanism; I would remind them not to forget about alignment.



(Edit: sorry, misread your question, previous answer deleted.)

You can use a memory pool for your objects. Typically, you group objects of the same (or similar) size in the same pool. Since you usually have to call a custom delete function on the pool, I suggest you use shared_ptr with a custom deleter. You can then use this shared_ptr with any standard container you like.

Edit: It seems an example is needed. Warning: this is untested and off the top of my head.

#include <boost/pool/object_pool.hpp>
#include <memory>
#include <vector>

class A { public: virtual ~A() {} };
class B : public A {};  // B inherits from A

int main() {
  // could be global
  boost::object_pool<A> a_pool;
  boost::object_pool<B> b_pool;

  std::vector<std::shared_ptr<A>> v;

  v.push_back(std::shared_ptr<A>(a_pool.construct(), [&](A* p) { a_pool.destroy(p); }));
  v.push_back(std::shared_ptr<A>(a_pool.construct(), [&](A* p) { a_pool.destroy(p); }));
  v.push_back(std::shared_ptr<A>(a_pool.construct(), [&](A* p) { a_pool.destroy(p); }));

  v[2] = std::shared_ptr<B>(b_pool.construct(), [&](B* p) { b_pool.destroy(p); });

  return 0;
}

This works even if B is much larger than A. It also doesn't rely on the pool's automatic freeing, which is IMHO dangerous. The memory layout is not rigid, because the pool recycles cells, but it will not fragment, which, if I understood your question, is what you want.



The fragmentation is not caused by using a Boost container. It happens when you frequently allocate and free objects of different sizes with new and delete. ptr_array merely stores pointers to those allocated objects and probably doesn't contribute significantly to fragmentation.

If you want to counter memory fragmentation, you can overload operator new for your objects and implement your own memory management system. I suggest you read up on memory pools and free lists.


