Call From <hostname>/<ip> to <hostname>:9000 failed on connection exception: java.net.ConnectException: Connection refused
I am deploying a Hadoop cluster test environment. All the startup logs look correct, but I can't run any hadoop command, and I found that port 9000 is not listening.
Running a hadoop command (all commands fail with the same error):
hadoop-2.5.0/bin$ ./hdfs dfs -ls /
14/08/15 10:19:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From master-hadoop/172.17.65.225 to master-hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
The NameNode is not listening on port 9000:
hadoop-2.5.0/bin$ sudo netstat -ntap | grep 9000
The terminal prints nothing.
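The same check can be scripted without netstat. Below is a minimal sketch (plain Python, no Hadoop dependency; the hostname and ports are the ones from this question, so adjust them for your cluster) of what "Connection refused" means at the socket level: nothing is accepting TCP connections on that address.

```python
import socket

def port_is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, and DNS failures
        return False

# Host/ports from this question; on the cluster above, 9000 would report False.
for port in (9000, 9001):
    print("master-hadoop:%d listening: %s" % (port, port_is_listening("master-hadoop", port)))
```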
Hadoop config:
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-hadoop:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>master-hadoop:9001</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondary-hadoop:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.data.dir</name>
    <value>file:/home/hadoop/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
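One thing to note in this file: dfs.namenode.rpc-address is set to master-hadoop:9001. In Hadoop, when dfs.namenode.rpc-address is present it determines the NameNode RPC endpoint, overriding the port in fs.defaultFS; this matches the namenode log below ("RPC server is binding to master-hadoop:9001") and would explain why nothing listens on 9000. A small illustrative sketch of that precedence (plain Python, not Hadoop's actual resolution code):

```python
from urllib.parse import urlparse

def effective_rpc_address(conf):
    """dfs.namenode.rpc-address, when set, wins over the authority in fs.defaultFS."""
    if "dfs.namenode.rpc-address" in conf:
        return conf["dfs.namenode.rpc-address"]
    parsed = urlparse(conf["fs.defaultFS"])
    return "%s:%d" % (parsed.hostname, parsed.port)

# Values copied from the core-site.xml and hdfs-site.xml in this question.
conf = {
    "fs.defaultFS": "hdfs://master-hadoop:9000",
    "dfs.namenode.rpc-address": "master-hadoop:9001",
}
print(effective_rpc_address(conf))  # master-hadoop:9001 -- clients still dial 9000 and get refused
```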
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master-hadoop:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master-hadoop:19888</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce-shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master-hadoop:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master-hadoop:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master-hadoop:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master-hadoop:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master-hadoop:8088</value>
  </property>
</configuration>
Namenode /etc/hosts:
172.17.65.225 master-hadoop
127.0.0.1 master-hadoop
::1 master-hadoop localhost
172.17.65.151 slave1-hadoop
172.17.65.14 slave2-hadoop
172.17.65.117 secondary-hadoop
Namenode format log:
hadoop-2.5.0/bin$ ./hdfs namenode -format
14/08/15 10:16:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master-hadoop/172.17.65.225
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.5.0
STARTUP_MSG: classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar: ... :/home/hadoop/hadoop/hadoop-2.5.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG: java = 1.7.0_21
************************************************************/
14/08/15 10:16:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/15 10:16:16 INFO namenode.NameNode: createNameNode [-format]
14/08/15 10:16:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-4d27991c-4852-407c-9c6b-70df76994d13
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsLock is fair:true
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:16:16
14/08/15 10:16:16 INFO util.GSet: Computing capacity for map BlocksMap
14/08/15 10:16:16 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:16 INFO util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
14/08/15 10:16:16 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: defaultReplication = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplication = 512
14/08/15 10:16:16 INFO blockmanagement.BlockManager: minReplication = 1
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
14/08/15 10:16:16 INFO namenode.FSNamesystem: supergroup = supergroup
14/08/15 10:16:16 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/15 10:16:16 INFO namenode.FSNamesystem: HA Enabled: false
14/08/15 10:16:16 INFO namenode.FSNamesystem: Append Enabled: true
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map INodeMap
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/08/15 10:16:17 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map cachedBlocks
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^19 = 524288 entries
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/08/15 10:16:17 INFO util.GSet: VM type = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
14/08/15 10:16:17 INFO util.GSet: capacity = 2^16 = 65536 entries
14/08/15 10:16:17 INFO namenode.NNConf: ACLs enabled? false
14/08/15 10:16:17 INFO namenode.NNConf: XAttrs enabled? true
14/08/15 10:16:17 INFO namenode.NNConf: Maximum size of an xattr: 16384
14/08/15 10:16:17 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1935486596-172.17.65.225-1408068977173
14/08/15 10:16:17 INFO common.Storage: Storage directory /home/hadoop/hadoop/dfs/name has been successfully formatted.
14/08/15 10:16:17 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/15 10:16:17 INFO util.ExitUtil: Exiting with status 0
14/08/15 10:16:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master-hadoop/172.17.65.225
************************************************************/
hadoop-hadoop-namenode-master-hadoop.log
2014-08-15 10:17:48,855 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master-hadoop/172.17.65.225
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.5.0
STARTUP_MSG: classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar: ... ... :/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG: java = 1.7.0_21
************************************************************/
2014-08-15 10:17:48,870 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-08-15 10:17:48,880 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2014-08-15 10:17:49,117 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master-hadoop:9000
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master-hadoop:9000 to access this namenode/service.
2014-08-15 10:17:49,389 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-15 10:17:54,555 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2014-08-15 10:17:54,556 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2014-08-15 10:17:54,605 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-08-15 10:17:54,609 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2014-08-15 10:17:54,620 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-08-15 10:17:54,623 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-08-15 10:17:54,653 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2014-08-15 10:17:54,655 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-08-15 10:17:54,676 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2014-08-15 10:17:54,676 INFO org.mortbay.log: jetty-6.1.26
2014-08-15 10:17:54,883 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2014-08-15 10:17:54,948 INFO org.mortbay.log: Started ...@0.0.0.0:50070
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:18:00,023 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2014-08-15 10:18:00,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2014-08-15 10:18:00,066 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:18:00
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-08-15 10:18:00,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2014-08-15 10:18:00,297 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2014-08-15 10:18:00,317 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2014-08-15 10:18:00,355 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/hadoop/dfs/name/in_use.lock acquired by nodename [email protected]
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hadoop/hadoop/dfs/name/current
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2014-08-15 10:18:00,488 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /home/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000
2014-08-15 10:18:00,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2014-08-15 10:18:00,543 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 372 msecs
2014-08-15 10:18:00,902 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master-hadoop:9001
2014-08-15 10:18:00,909 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-08-15 10:18:00,923 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9001
2014-08-15 10:18:00,954 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2014-08-15 10:18:00,964 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2014-08-15 10:18:00,982 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 31 msec
2014-08-15 10:18:01,010 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-15 10:18:01,011 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2014-08-15 10:18:01,131 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master-hadoop/172.17.65.225:9001
2014-08-15 10:18:01,132 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
2014-08-15 10:18:01,147 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 5 millisecond(s).
2014-08-15 10:18:02,566 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage e3b6ade5-3534-4f5f-99fa-959bbbd9dce9
2014-08-15 10:18:02,571 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.14:50010
2014-08-15 10:18:02,648 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-8dee85fa-82b5-40a6-98e4-db44cca23371 for DN 172.17.65.14:50010
2014-08-15 10:18:02,698 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-8dee85fa-82b5-40a6-98e4-db44cca23371,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:02,698 INFO BlockStateChange: BLOCK* processReport: from storage DS-8dee85fa-82b5-40a6-98e4-db44cca23371 node DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 3 msecs
2014-08-15 10:18:05,783 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:09,235 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,099 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage 43bc6f34-b8ad-4355-9fe4-9951f40e982a
2014-08-15 10:18:14,578 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.151:50010
2014-08-15 10:18:14,628 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 for DN 172.17.65.151:50010
2014-08-15 10:18:14,660 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:14,660 INFO BlockStateChange: BLOCK* processReport: from storage DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 node DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 1 msecs
2014-08-15 10:18:19,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:18:31,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:19:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.17.65.117
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2014-08-15 10:19:01,522 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 55
2014-08-15 10:19:01,536 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 69
2014-08-15 10:19:01,538 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop/dfs/name/current/edits_inprogress_0000000000000000001 -> /home/hadoop/hadoop/dfs/name/current/edits_0000000000000000001-0000000000000000002
2014-08-15 10:19:01,542 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2014-08-15 10:19:02,094 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.05s at 0.00 KB/s
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000002 size 353 bytes.
2014-08-15 10:19:03,014 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2014-08-15 10:19:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2014-08-15 10:19:31,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:20:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:20:01,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
Namenode jps:
/hadoop-2.5.0/logs$ jps
21145 NameNode
21409 ResourceManager
Secondary jps:
hadoop-2.5.0$ jps
15534 SecondaryNameNode
Datanode1 jps:
/hadoop-2.5.0$ jps
7350 DataNode
Datanode2 jps:
/hadoop-2.5.0$ jps
11784 DataNode
I ran into the same problem.

When you run hdfs namenode -format, check which host name appears in the SHUTDOWN_MSG block at the end of the output ("Shutting down NameNode at ...").

I solved the problem by changing the value of fs.defaultFS in core-site.xml from the hostname to the IP address shown there.

But I don't think this is a good solution; there is probably a better way.
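An alternative worth trying, based on the namenode log above: it reports "RPC server is binding to master-hadoop:9001" while clients "are to use master-hadoop:9000", i.e. the dfs.namenode.rpc-address port (9001) overrides the fs.defaultFS port (9000). Making the two agree should get port 9000 listening. A sketch of the hdfs-site.xml change (not tested on this cluster; alternatively, remove the property entirely so the RPC address is derived from fs.defaultFS):

```xml
<!-- hdfs-site.xml: align the NameNode RPC port with fs.defaultFS (hdfs://master-hadoop:9000) -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>master-hadoop:9000</value>
</property>
```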