OpenJDK Client VM - cannot allocate memory

I am running a Hadoop MapReduce job on a downsized cluster and I am getting this error:

OpenJDK Client VM warning: INFO: os::commit_memory(0x79f20000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (malloc) failed to allocate 104861696 bytes for committing reserved memory.

What should I do?



3 answers


Make sure your machine has swap space enabled:

ubuntu@VM-ubuntu:~$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        928         65          0          1         48
-/+ buffers/cache:        878        115
Swap:         4095       1086       3009

Pay attention to the Swap line.

I just ran into this issue on an EC2 (Elastic Compute) instance: swap space is not enabled there by default.
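
If swap turns out to be missing, here is a minimal sketch for adding a swap file on Ubuntu; the 4G size and the /swapfile path are example choices, not requirements:

sudo fallocate -l 4G /swapfile   # create a 4 GB swap file
sudo chmod 600 /swapfile         # restrict access to root
sudo mkswap /swapfile            # format it as swap space
sudo swapon /swapfile            # enable it immediately
# to keep it across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0

Afterwards, free -m should show a non-zero Swap total.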



You can increase the amount of memory allocated to the JVM by passing these runtime flags.

For example:



java -Xms1024M -Xmx2048M -jar application.jar

  • -Xmx sets the maximum heap size
  • -Xms sets the initial (minimum) heap size
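
Since the error here comes from a Hadoop job rather than a standalone jar, the same idea applies to each task JVM. A sketch assuming Hadoop 2.x and a driver that parses generic options via ToolRunner (the jar name, class name, and paths are placeholders):

# raise the heap of every map and reduce task JVM
hadoop jar myjob.jar com.example.MyDriver \
  -D mapreduce.map.java.opts=-Xmx1024m \
  -D mapreduce.reduce.java.opts=-Xmx1024m \
  input/ output/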


The YARN container may be overrunning its memory limit with the JVM options you are using.

Check that the attributes:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.minimum-allocation-mb
yarn.scheduler.maximum-allocation-mb

in yarn-site.xml match the desired values.
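
For reference, a minimal yarn-site.xml sketch; the values below are placeholders and must be sized to the RAM actually available on your nodes:

<!-- yarn-site.xml (example values only) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>   <!-- total RAM YARN may allocate on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>    <!-- smallest container the scheduler will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>   <!-- largest container a single request may get -->
</property>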

For more information on memory settings, read:

HortonWorks Memory Link

A similar problem

Note: this applies to Hadoop 2.x. If you are using Hadoop 1.x, check the Task properties instead (e.g. mapred.child.java.opts).







