Gradual increase in resident memory usage by JBoss (Java) process

We are facing an issue where the resident memory (RSS) of the Java process is gradually increasing. We have -Xmx set to 4096 MB and -XX:MaxPermSize=1536m. There are ~1500 active threads with -Xss256k defined.

When the application server (JBoss 6.1) starts, resident memory is ~5.6 GB (monitored with the top command); it grows gradually (0.3 to 0.5 GB per day) until it reaches ~7.4 GB, at which point the kernel OOM killer kills the process due to insufficient RAM (the server has 9 GB).

We monitor thread dumps regularly and see no sign of a thread leak. We still can't figure out where this extra memory comes from.

The pmap output shows several anon blocks (besides the usual stack and heap blocks), mostly 64 MB arenas, which are not accounted for by heap, perm gen, or stack memory usage.

In the heap dump, we also looked for DirectByteBuffer and sun.misc.Unsafe objects, which are commonly used for off-heap memory allocation, but both the object counts and the memory sizes seem nominal. Is it possible that native memory remains unreleased even after these objects are GCed? Are there other classes that could lead to non-heap memory usage?
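For context, direct buffers reserve their backing storage outside the Java heap, so a heap dump only shows the small DirectByteBuffer wrapper objects while the actual storage is native memory. A minimal sketch (class name and sizes are illustrative):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the Java heap; a heap
        // dump shows only a tiny wrapper object, while the 64 MB of
        // backing storage counts toward RSS as native memory.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        System.out.println(buf.isDirect());  // direct (off-heap) buffer
        System.out.println(buf.capacity());  // capacity in bytes
    }
}
```

The native memory is only released after the wrapper object is garbage collected, so direct buffers that stay reachable (or are GCed late) keep their off-heap storage pinned.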

Our application does not make native (JNI) calls of its own, but it is possible that some third-party libraries do.

Any ideas on what might be causing this? Any other details or tools that might help debug such an increase? Any known issues we should look out for? Platform: JBoss 6.1 running on CentOS 5.6.

2 answers


This is a known issue with Java and glibc >= 2.10 (which includes Ubuntu >= 10.04 and RHEL >= 6).

The cure is to set this environment variable: export MALLOC_ARENA_MAX=4

There is an IBM article on setting MALLOC_ARENA_MAX: https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en

This blog post states:

    Resident memory has been known to creep in a manner similar to a memory leak or memory fragmentation.

Search for MALLOC_ARENA_MAX on Google or SO for more links.

You may also want to tune other malloc options to optimize for low fragmentation of allocated memory:

# tune glibc memory allocation, optimize for low fragmentation
# limit the number of arenas
export MALLOC_ARENA_MAX=2
# disable dynamic mmap threshold, see M_MMAP_THRESHOLD in "man mallopt"
export MALLOC_MMAP_THRESHOLD_=131072
export MALLOC_TRIM_THRESHOLD_=131072
export MALLOC_TOP_PAD_=131072
export MALLOC_MMAP_MAX_=65536



The increase in RSS usage can be caused by a native memory leak. A common problem is a native memory leak caused by not closing a ZipInputStream / GZIPInputStream.

A typical way a ZipInputStream is opened is by calling Class.getResource / ClassLoader.getResource and then openConnection().getInputStream() on the returned java.net.URL instance, or by calling Class.getResourceAsStream / ClassLoader.getResourceAsStream. You need to ensure that these streams are always closed.
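A defensive pattern is try-with-resources, which closes the stream (and frees the Inflater's native zlib buffers) deterministically instead of waiting for finalization. The gunzip helper below is an illustrative sketch, not code from the original answer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GzipSafeRead {
    // try-with-resources guarantees the GZIPInputStream is closed on
    // every path, releasing its native decompression buffers promptly.
    static byte[] gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream in =
                 new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}
```

The same pattern applies to streams obtained via getResourceAsStream: wrap them in try-with-resources so a missed close() cannot leak native memory.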



You can use jemalloc to debug native memory leaks by enabling malloc profiling through options in the MALLOC_CONF environment variable. Detailed instructions on using jemalloc to debug native memory leaks in Java applications are available in this blog post: http://www.evanjones.ca/java-native-leak-bug.html

The same blog also covers another native memory leak related to ByteBuffers.
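A typical setup might look like the following sketch; the library path, startup script, and profiling parameters are assumptions that vary by system, so check the jemalloc documentation for your build:

```shell
# Preload jemalloc so it replaces glibc malloc for the JVM process
# (path is an example; locate your distro's libjemalloc.so first).
export LD_PRELOAD=/usr/lib/libjemalloc.so
# prof:true enables allocation profiling; lg_prof_interval:30 dumps a
# profile roughly every 2^30 bytes (~1 GB) allocated; lg_prof_sample:17
# samples about one allocation per 128 KB.
export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
# Start JBoss as usual; jeprof.*.heap dump files appear in the
# working directory and can be inspected with jemalloc's jeprof tool:
#   jeprof --show_bytes $(which java) jeprof.*.heap
./bin/run.sh
```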
