Hadoop datanode cannot start: "Does not contain a valid host:port authority"

I am currently using Hadoop 1.2.1 (my spatial processing software only supports this version). I am trying to deploy a multi-node cluster with one master and three slaves.

I have confirmed that I can SSH between the master and all slaves without a password (including each node to itself). The hostname on each node is also correct, and every node uses the same hosts file:

192.168.56.101 master
192.168.56.102 slave1
192.168.56.103 slave2
192.168.56.104 slave3
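
For example, I check that passwordless SSH works from the master with something like this ("BatchMode=yes" makes ssh fail instead of prompting for a password):

# Each node should print its hostname without a password prompt.
for node in master slave1 slave2 slave3; do
    ssh -o BatchMode=yes "$node" hostname
done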


The datanodes on the slave nodes fail to start. The error log is as follows:

2015-05-21 23:39:16,841 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:181)


In core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>


In mapred-site.xml:

<configuration>
    <property>
        <name>mapred.job.tracter</name>
        <value>master:8012</value>  
    </property>
</configuration>


In hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>


+3




3 answers


Try changing "master" to your actual IP address in all configuration files.



+1




There might be a problem with your node hostnames. Make sure they do not contain characters such as "_" (underscore): valid hostnames may contain only letters, digits, and hyphens. See the hostname restrictions on Wikipedia (RFC 952 / RFC 1123).
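
A quick way to check, assuming the node names from the question (a minimal sketch):

# Valid hostnames contain only letters, digits, and hyphens,
# and may not begin or end with a hyphen.
for h in master slave1 slave2 slave3; do
    if echo "$h" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'; then
        echo "$h: ok"
    else
        echo "$h: invalid hostname"
    fi
done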



0




You have configured everything OK. You need to run "$HADOOP_HOME/bin/hadoop namenode -format" first, and then start HDFS with "$HADOOP_HOME/bin/start-dfs.sh".
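
For reference, the full sequence on Hadoop 1.x (note that formatting erases any existing HDFS metadata, so only do it on a fresh cluster):

# On the master: format HDFS once, then start the NameNode and all DataNodes.
$HADOOP_HOME/bin/hadoop namenode -format
$HADOOP_HOME/bin/start-dfs.sh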

0








