Hadoop: Protocol message tag had invalid wire type

I set up a Hadoop 2.6 cluster using two nodes with 8 cores each on Ubuntu 12.04. Both sbin/start-dfs.sh and sbin/start-yarn.sh succeed, and jps on the master node shows:

22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager

The output of jps on the slave node is:

19693 DataNode
19966 NodeManager

Then I run the Pi example:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100

This gives me the following error:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)

The issue seems to be with HDFS itself, since even a simple bin/hdfs dfs -mkdir /user fails with a similar exception:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;

where xxx.ww.y.zz is the IP address of Master-R5-Node.

I have checked and followed all of the ConnectionRefused recommendations from the Apache wiki and on this site.
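Those recommendations boil down to checks along the following lines; 54310 is the NameNode port from my configuration, and the commands are just the usual ways of confirming that the daemon is listening and reachable:

    # What URI do HDFS clients actually resolve for the NameNode?
    bin/hdfs getconf -confKey fs.defaultFS

    # Is the NameNode listening on the expected port on the master?
    sudo netstat -tlnp | grep 54310

    # Is that port reachable from the slave node?
    telnet Master-R5-Node 54310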

Despite weeks of effort, I cannot fix this.

Thanks.

1 answer


There are many possible reasons for the problem I was facing, but I finally managed to fix it with the following steps.

  • Make sure you have the required permissions on the /hadoop installation files and on the HDFS temporary directory (you have to figure out where these are for your particular case); a permission-check sketch follows this list.
  • Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml; without an explicit port, clients fall back to the default NameNode RPC port (8020). It should look like this:
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://my.master.ip.address/</value>
            <description>NameNode URI</description>
        </property>
    </configuration>

  • Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:
    <property>
        <name>dfs.datanode.use.datanode.hostname</name>
        <value>false</value>
    </property>

    <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
    </property>
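For the first bullet, the check looks roughly like this; /hadoop and /app/hadoop/tmp are only example locations for the installation and for hadoop.tmp.dir, and hduser:hadoop is an assumed owner, so substitute your own paths and user:

    # Inspect ownership and permissions of the install and temp directories
    ls -ld /hadoop /app/hadoop/tmp

    # Hand them to the user that runs the Hadoop daemons (assumed to be hduser:hadoop)
    sudo chown -R hduser:hadoop /hadoop /app/hadoop/tmp
    sudo chmod -R 750 /app/hadoop/tmp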


Voila! Now everything should work.
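To confirm the change took effect, a minimal re-test is to restart the daemons and re-run the exact commands from the question (after copying the updated *-site.xml files to the slave node as well):

    # Restart HDFS and YARN so the new configuration is picked up
    sbin/stop-yarn.sh && sbin/stop-dfs.sh
    sbin/start-dfs.sh && sbin/start-yarn.sh

    # The commands that previously failed should now succeed
    bin/hdfs dfs -mkdir -p /user
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100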
