Is contiguous memory easy to obtain in a 64-bit address space? If so, why?

A comment in this blog post says:

We know how to make a bunch of heaps, but there is an overhead of using them. We have more requests for faster storage management than we do for large heaps in a 32-bit JVM. If you really want big heaps, switch to 64-bit JVM. We still need contiguous memory, but it's much easier to get in a 64-bit address space.

The consequence of the above statement is that it is easier to obtain contiguous memory in a 64-bit address space. Is that true? If so, why?



2 answers


This is very true. A process must allocate memory from its virtual address space, in which its code and data are stored and whose size is limited by the addressing capabilities of the architecture. A 32-bit process can never address more than 2^32 bytes, bank-switching tricks aside. That is 4 gigabytes. The operating system tends to take a big chunk of that for itself; on 32-bit Windows, for example, that shrinks the usable virtual address space to 2 gigabytes.
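A minimal sketch (not part of the original answer) of the raw arithmetic behind those numbers: 2^32 addressable bytes is 4 GiB, and the Windows split described above leaves half of that for the process.

```java
// Sketch of the 32-bit address-space arithmetic described above.
public class AddressSpace32 {
    public static void main(String[] args) {
        long total = 1L << 32;                 // 2^32 addressable bytes
        System.out.println(total);             // 4294967296 bytes = 4 GiB
        long userPart = total / 2;             // 32-bit Windows keeps half for itself
        System.out.println(userPart >> 30);    // 2 GiB left for the process
    }
}
```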

Ideally, allocations are made so that they pack tightly against each other. That very rarely happens in practice. Shared libraries, or DLLs in particular, load at a preferred base address that had to be chosen, by guesswork, when the library was built.



Thus, in practice, allocations are made from the holes between existing allocations, and the largest possible contiguous allocation you can get is limited by the size of the largest hole. That is usually much smaller than the virtual address space, typically around 650 megabytes on Windows. It tends to go downhill from there as the available address space becomes fragmented by further allocations, in particular by native code, whose allocations cannot be moved by a compacting garbage collector. If you use Windows, you can get an idea of the address-space layout with the SysInternals VMMap utility.
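To make the "largest hole" idea concrete, here is a hypothetical sketch (the method name and region layout are mine, not from the answer): given a list of reserved [start, length] regions in a flat address space, the biggest contiguous allocation still possible is the largest gap between them.

```java
import java.util.Arrays;

// Sketch: fragmentation caps the largest contiguous allocation at the
// size of the largest free hole between reserved regions.
public class LargestHole {
    static long largestHole(long[][] reserved, long spaceSize) {
        long[][] r = reserved.clone();
        Arrays.sort(r, (a, b) -> Long.compare(a[0], b[0])); // by start address
        long maxGap = 0, cursor = 0;
        for (long[] region : r) {
            maxGap = Math.max(maxGap, region[0] - cursor);  // gap before this region
            cursor = Math.max(cursor, region[0] + region[1]);
        }
        return Math.max(maxGap, spaceSize - cursor);        // trailing gap
    }

    public static void main(String[] args) {
        // A 4 GiB space: 1 GiB reserved at the bottom, plus a 1 MiB DLL
        // parked at the 2 GiB mark. The DLL splits the free space, so the
        // largest hole is just under 2 GiB, not the ~3 GiB that is free.
        long[][] regions = { {0, 1L << 30}, {2L << 30, 1L << 20} };
        System.out.println(largestHole(regions, 1L << 32)); // 2146435072
    }
}
```

One badly placed library is enough to halve the largest allocation you can make, which is exactly the Windows behavior the answer describes.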

This problem disappears completely in a 64-bit process. The theoretical virtual address space is 2^64 bytes, a huge amount, so big that current processors don't implement it; they go as high as 2^48. Beyond that, you're limited by the operating system version and its willingness to keep page mapping tables for that much virtual memory; a typical limit is eight terabytes. The upshot is that the holes between allocations are huge. Your program will bog down in the paging file long before it dies from OOM.
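The same arithmetic as before, at 64-bit scale (again a sketch, not from the answer): even the 48 bits that current processors actually implement give 256 TiB of virtual addresses, 32 times the eight-terabyte operating-system limit cited above.

```java
// Sketch of the 64-bit address-space arithmetic described above.
public class AddressSpace64 {
    public static void main(String[] args) {
        long implemented = 1L << 48;                   // 48-bit virtual addresses
        System.out.println(implemented >> 40);         // 256 TiB
        long osLimit = 8L << 40;                       // ~8 TiB OS limit cited above
        System.out.println(implemented / osLimit);     // 32x headroom over even that
    }
}
```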



I can't speak to how the JVM is implemented specifically, but from a purely theoretical point of view, if you have a significantly larger virtual address space (e.g. 64-bit versus 32-bit), it should be much easier to find a large contiguous block of memory available for allocation (to take it to extremes: you have no chance of finding a contiguous free 4 GB in a 32-bit address space, but a very good chance of finding it in a full 64-bit address space).



It should be noted that regardless of the size of the virtual address space, the allocation will still be backed by (possibly) non-contiguous pages of physical memory, especially if the required allocation is large. A larger virtual address space simply means that more contiguous virtual addresses are likely to be available.
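That split between contiguous virtual addresses and scattered physical pages can be sketched as a toy page table (all names here are mine, for illustration only): consecutive virtual page numbers each map to whatever physical frame happens to be free.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Toy sketch: a contiguous virtual range backed page-by-page by
// arbitrary, non-contiguous physical frames.
public class PageMapSketch {
    static Map<Integer, Integer> mapRange(int pages, long seed) {
        Map<Integer, Integer> pageTable = new LinkedHashMap<>();
        Random rng = new Random(seed);            // stands in for the OS frame allocator
        for (int vpn = 0; vpn < pages; vpn++) {
            pageTable.put(vpn, rng.nextInt(1 << 20)); // any free frame will do
        }
        return pageTable;
    }

    public static void main(String[] args) {
        // Virtual pages 0..7 are contiguous; the frames backing them are not.
        mapRange(8, 42).forEach((vpn, frame) ->
                System.out.println("vpn " + vpn + " -> frame " + frame));
    }
}
```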







