Hadoop cannot allocate memory java.io.IOException: error = 12
I am getting the following error in Greenplum Hadoop:
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Cannot run program "ln": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:488)
at java.lang.Runtime.exec(Runtime.java:610)
at java.lang.Runtime.exec(Runtime.java:448)
at java.lang.Runtime.exec(Runtime.java:386)
at org.apache.hadoop.fs.FileUtil.symLink(FileUtil.java:567)
at org.apache.hadoop.mapred.TaskLog.createTaskAttemptLogDir(TaskLog.java:109)
at org.apache.hadoop.mapred.DefaultTaskController.createLogDir(DefaultTaskController.java:71)
at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:316)
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
at java.lang.ProcessImpl.start(ProcessImpl.java:81)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:470)
... 8 more
The server has 7 GB of RAM and 1 GB of swap.
The heap size is 1024m and mapred.child.opts is set to 512m.
Any ideas?
I reduced the child task memory to 256MB and limited the TaskTracker to 1 task per node; anything larger caused child errors and made the MapReduce job take longer to complete.
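For reference, a minimal sketch of what those settings can look like in mapred-site.xml, assuming Hadoop 1.x property names; the values are illustrative and should be tuned to your cluster:

<!-- mapred-site.xml: illustrative values only -->
<configuration>
  <!-- Cap each child task JVM at 256 MB -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx256m</value>
  </property>
  <!-- Run at most one map and one reduce task per TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
</configuration>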
Whatever memory settings you come up with, Hadoop will likely run into this anyway. The problem is that for simple filesystem tasks such as creating symlinks or checking free disk space, Hadoop forks a process from the TaskTracker, and that child process initially requires as much virtual memory as its parent.
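To make that concrete, here is a minimal sketch of the pattern the stack trace shows: FileUtil.symLink shells out to the external ln binary, so the JVM has to fork a child first. The class name and paths below are made up for illustration.

import java.io.IOException;

public class SymlinkViaExec {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical paths, standing in for the task attempt log directories
        String target = "/tmp/real_attempt_dir";
        String link = "/tmp/attempt_link";
        // Same pattern as Hadoop's FileUtil.symLink: exec the external "ln" command.
        // The fork behind Runtime.exec briefly needs as much virtual memory as the
        // whole parent JVM, which is where "error=12, Cannot allocate memory" comes
        // from when the heap is large and the kernel refuses to overcommit.
        Process p = Runtime.getRuntime().exec(new String[] {"ln", "-s", target, link});
        int exitCode = p.waitFor();
        System.out.println("ln exited with " + exitCode);
    }
}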
Typical ways to prevent this problem are to leave as much physical memory unallocated as is allocated to the TaskTracker, to add some swap to the host for these short-lived forks, or to allow memory overcommit.
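As a sketch of those last two workarounds on a Linux host (the swap size and file path are examples, not recommendations):

# Allow the kernel to overcommit virtual memory so fork() from a large JVM succeeds
sysctl -w vm.overcommit_memory=1

# Or add a swap file so the fork has somewhere to fall back to
dd if=/dev/zero of=/swapfile bs=1M count=2048
mkswap /swapfile
swapon /swapfile

With vm.overcommit_memory=1 the kernel accepts the fork without reserving a full copy of the parent's address space, which is usually enough for these short-lived ln/df children.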