HBase master fails to start: connection to Hadoop refused

Hi, I am using Hadoop and HBase. When I start Hadoop everything comes up fine, but when I start HBase it logs an exception: the connection to Hadoop on localhost port 54310 is refused. The logs are listed below:

Mon Apr  9 12:28:15 PKT 2012 Starting master on hbase
ulimit -n 1024
2012-04-09 12:28:17,685 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2012-04-09 12:28:18,180 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-04-09 12:28:18,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
2012-04-09 12:28:18,197 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
2012-04-09 12:28:18,200 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
2012-04-09 12:28:18,202 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
2012-04-09 12:28:18,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
2012-04-09 12:28:18,210 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
2012-04-09 12:28:18,278 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
2012-04-09 12:28:18,279 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
2012-04-09 12:28:18,284 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
2012-04-09 12:28:18,369 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hbase.com.com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_20
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/com/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-openjdk/lib/tools.jar:/opt/com/hbase-0.90.4/bin/..:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4.jar:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/opt/com/hbase-0.90.4/bin/../lib/activation-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/asm-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-configuration-1.6.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/opt/com/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/guava-r06.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-core-0.20.205.0.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/opt/com/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/opt/com/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/opt/com/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/opt/com/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/opt/com/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client:/usr/lib/jvm/java-6-openjdk/jre/lib/i386:/usr/lib/jvm/java-6-openjdk/jre/../lib/i386:/usr/java/packages/lib/i386:/usr/lib/jni:/lib:/usr/lib
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=i386
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-40-generic
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/com
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/com/hbase-0.90.4/bin
2012-04-09 12:28:18,372 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=master:60000
2012-04-09 12:28:18,436 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181
2012-04-09 12:28:18,484 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
2012-04-09 12:28:18,676 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1369600cac10000, negotiated timeout = 180000
2012-04-09 12:28:18,740 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hbase.com.com:60000
2012-04-09 12:28:18,803 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-04-09 12:28:18,940 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hbase.com.com:60000
2012-04-09 12:28:21,342 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 0 time(s).
2012-04-09 12:28:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 1 time(s).
2012-04-09 12:28:23,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 2 time(s).
2012-04-09 12:28:24,345 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 3 time(s).
2012-04-09 12:28:25,346 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 4 time(s).
2012-04-09 12:28:26,347 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 5 time(s).
2012-04-09 12:28:27,348 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 6 time(s).
2012-04-09 12:28:28,349 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 7 time(s).
2012-04-09 12:28:29,350 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 8 time(s).
2012-04-09 12:28:30,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 9 time(s).
2012-04-09 12:28:30,356 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to hbase/192.168.15.20:54310 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
    at org.apache.hadoop.ipc.Client.call(Client.java:1071)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy6.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:604)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
    at org.apache.hadoop.ipc.Client.call(Client.java:1046)
    ... 17 more
2012-04-09 12:28:30,361 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2012-04-09 12:28:30,361 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2012-04-09 12:28:30,361 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2012-04-09 12:28:30,369 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1369600cac10000 closed
2012-04-09 12:28:30,450 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
Mon Apr  9 12:28:40 PKT 2012 Stopping hbase (via master)

(hadoop conf) core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>


(hadoop conf) hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>


mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>


(hbase conf) hbase-site.xml

<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:54310/hbase</value>
  </property>
  <!-- added -->
  <property>
    <name>hbase.master</name>
    <value>127.0.0.1:60000</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>
</configuration>


5 answers


Try this:

Comment out the 127.0.1.1 line in your /etc/hosts file with a #, then add your IP and hostname on a new line. If you want to use localhost, make sure your hosts file has a 127.0.0.1 localhost entry, then replace every occurrence of the IP in your config files with localhost.

If you want to use the IP instead of localhost, make sure the IP and the corresponding hostname are in your hosts file, and replace every occurrence of localhost in your config files with your IP.

NameNode connection problems like this are usually caused by an incorrect host or IP configuration.
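
For reference, a minimal /etc/hosts sketch for this kind of pseudo-distributed setup. The hbase / hbase.com.com names and the 192.168.15.20 address are taken from the logs above; adjust them to your own machine:

    127.0.0.1       localhost
    # 127.0.1.1     hbase.com.com hbase    <- commented out, as described above
    192.168.15.20   hbase.com.com hbase

After editing the hosts file, restart Hadoop and HBase so the new mapping is picked up, and make sure fs.default.name and hbase.rootdir use the same name you chose (either the IP/hostname or localhost, not a mix).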



Check your /etc/hosts file and/or map localhost to 127.0.0.1. In your log, HBase connects to 192.168.15.20:54310 and not to 127.0.0.1:54310.
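
A quick way to see which address the NameNode is actually bound to (this assumes a Linux box with net-tools installed; you may need sudo for the process column):

    # Show what is listening on the NameNode port from core-site.xml
    netstat -tlnp | grep 54310

If this reports 127.0.0.1:54310, the NameNode is only reachable via localhost, so HBase's attempt to reach hbase/192.168.15.20:54310 will be refused; make the hosts file and fs.default.name agree on one name.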





First, check that the hbase.rootdir property in hbase-site.xml points to the same host and port that fs.default.name defines in Hadoop's core-site.xml.

Is hbase.rootdir set to some /tmp/hadoop location? (That causes problems.) Change it to point to where your HDFS actually lives.

Also, open http://localhost:50070 and look for something like "NameNode: <IP>:<port>". Tell us which port it shows.
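
For illustration, "matching" here means the two files point at the same hdfs:// authority, e.g. with the localhost:54310 value already used in the question's own configs:

    <!-- Hadoop core-site.xml -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
    </property>

    <!-- HBase hbase-site.xml -->
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:54310/hbase</value>
    </property>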



Take a look at java.io.FileNotFoundException: /hadoop/tmp/dfs/name/current/VERSION (Permission denied).

So first of all, look at what you set as hbase.rootdir to see whether it really points to HDFS or to the local filesystem. My example (with localhost, for pseudo-distributed mode):

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:54310/hbase</value>
        </property>
        <property>
            <name>hbase.master</name>
            <value>127.0.0.1:60000</value>
        </property>
    </configuration>


Then, looking at your log, it seems most likely that you are using the local filesystem and do not have read/write access to the directory where HBase stores its data. Check it with:

mcbatyuk:/ bam$ ls -l / | grep hadoop
drwxr-xr-x   3 bam   wheel       102 Feb 29 21:34 hadoop


If your hbase.rootdir is on HDFS, the permissions appear to be broken, so you will need to change them with:

# hadoop fs -chmod -R MODE /hadoop/


or change the dfs.permissions property to false in your $HADOOP_HOME/conf/hdfs-site.xml file.
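
As a concrete, hedged illustration of the check and fix: the /hbase path matches the hbase.rootdir shown above, but the 755 mode is only an assumed example, not a value from the question:

    # Inspect ownership and permissions of the HBase root directory on HDFS
    hadoop fs -ls /
    hadoop fs -ls /hbase

    # Example fix: recursively relax the permissions (755 is just an example mode)
    hadoop fs -chmod -R 755 /hbase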



Instead of using the temp dir, configure "dfs.name.dir" in hdfs-site.xml to point to a directory where you have read/write permission. Then format the namenode (with the hadoop namenode -format command) and start it. Once that's done, try running HBase.
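
A minimal sketch of that change. The /hadoop/dfs/name path is only an assumed example (any directory writable by the Hadoop user works), and the start scripts assume the standard bin/ layout of Hadoop 0.20.x and HBase 0.90.x:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.name.dir</name>
      <value>/hadoop/dfs/name</value>
    </property>

Then format the NameNode and bring everything back up (note that formatting erases any existing HDFS metadata):

    hadoop namenode -format
    start-all.sh        # or start-dfs.sh followed by start-mapred.sh
    start-hbase.sh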
