Hadoop 2.5.2 gets stuck at "Running job" when I try to run the pi example on YARN

I am running Hadoop 2.5.2 on 3 machines with Ubuntu Server 14.04.

One of them is the namenode and resourcemanager, with IP 192.168.3.1. The other two are slaves running the datanode and nodemanager, with IPs 192.168.3.102 and 192.168.3.104 respectively.

I can run start-dfs.sh and start-yarn.sh without any errors. The HDFS and YARN web UIs work well; I can visit both in my browser and see the status of the two slaves.

But when I try to run the MapReduce example under ~/hadoop/share/hadoop/mapreduce

via yarn jar hadoop-mapreduce-examples-2.5.2.jar pi 14 1000

the process gets stuck at INFO mapreduce.Job: Running job: ...

The YARN web UI shows that there is one container on the slave and that the state of the application is ACCEPTED.

When I type 'jps' on the slave, I get:

20265 MRAppMaster
20351 Jps
19206 DataNode
20019 NodeManager


The syslog file on the slave shows:

INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...


It seems that the slave is using the default RM address (0.0.0.0) instead of the real one at 192.168.3.1.
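To narrow this down, one quick check (a sketch; `check_port` is a hypothetical helper built on bash's /dev/tcp pseudo-device) is to verify from the slave that the real scheduler address is actually reachable:

```shell
# Hypothetical helper: succeeds only if host:port accepts a TCP connection.
# Uses bash's /dev/tcp pseudo-device; run this from the slave.
check_port() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

# The real scheduler address should be reachable from the slave:
check_port 192.168.3.1 8030 && echo "scheduler reachable" || echo "scheduler unreachable"
```

If this prints "scheduler unreachable", the problem is network/binding on the master; if it is reachable, the AM is simply not picking up the configured address.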

Here is my config on the slaves: yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.3.1</value>
</property> 
<property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.3.1:8032</value>
</property>  
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.3.1:8030</value>
</property>

<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.3.1:8031</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.3.1:8088</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.3.1:8033</value>
</property> 
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>


hdfs-site.xml

<configuration>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hduser/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>

<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
</configuration>


core-site.xml

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.3.1:8020</value>
    <description>NameNode URI</description>
</property>
</configuration>


mapred-site.xml

<configuration>

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Use YARN</description>
</property>
</configuration>


The configuration on the master is almost the same, except for yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
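One thing worth checking (an assumption, not something confirmed in this thread): when a job is submitted, the client serializes its own configuration into the job, and the MRAppMaster reads the RM addresses from that serialized configuration rather than only from the local files on the slave. If the master's yarn-site.xml does not set the resourcemanager properties, the AM falls back to the default 0.0.0.0:8030, which matches the syslog on the slave. A sketch of the property to declare on the master as well:

```xml
<!-- Candidate fix (assumption): declare the RM hostname on the master too,
     so the client-side configuration shipped with the job carries it. -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.3.1</value>
</property>
```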


And in yarn-env.sh I set: export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/etc/hadoop}"

I have not changed /etc/hosts.

Does anyone know how I can fix this? Thanks in advance.

If you need other information, just tell me and I will update.



2 answers


Finally, I figured it out myself.

I downloaded the source code of the newer Hadoop 2.6.0 and built it on my own machine.



The configuration was the same as version 2.5.2, but it just works!

I think it is best to build from source instead of using the prebuilt binaries.



INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...


It is trying to connect to the ResourceManager, and the connection is failing.



Check that the ResourceManager service is running.
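A minimal way to do that (a sketch, assuming `jps` from the JDK is on the PATH of the master node):

```shell
# On the master (192.168.3.1): is the ResourceManager JVM actually running?
# If it is down, also look at the RM log under $HADOOP_HOME/logs for the reason.
if jps 2>/dev/null | grep -q ResourceManager; then
  echo "ResourceManager up"
else
  echo "ResourceManager down"
fi
```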







