Spark/MapReduce job stops working with HBase after a few runs

I am writing a Spark program that uses HBase for input. It runs fine the first 1-3 times, but after that (in most cases) it throws an exception. See the stack trace below.
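For context, the read from HBase is set up roughly like this (a simplified sketch; the class and variable names here are illustrative, not my exact code - the real class is com.my.HbaseNaiveBayes, and line 116 is the count() call):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseInputSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("HbaseNaiveBayes"));

        // Point TableInputFormat at the input table. getSplits() on this
        // input format is where the NullPointerException is thrown.
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "Input-data");

        JavaPairRDD<ImmutableBytesWritable, Result> rows =
                sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                        ImmutableBytesWritable.class, Result.class);

        // This count() corresponds to HbaseNaiveBayes.java:116
        // in the stack trace.
        System.out.println("rows: " + rows.count());
        sc.stop();
    }
}
```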

    14/09/23 19:18:45 INFO util.RegionSizeCalculator: Calculating region sizes for table "Input-data".
    Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.net.DNS.reverseDns(DNS.java:92)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:228)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:191)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:94)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.count(RDD.scala:904)
        at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:368)
        at org.apache.spark.api.java.JavaRDD.count(JavaRDD.scala:32)
        at com.my.HbaseNaiveBayes.main(HbaseNaiveBayes.java:116)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
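The NullPointerException is thrown from Hadoop's DNS.reverseDns while TableInputFormatBase computes splits, so it appears related to reverse DNS resolution of the region server addresses. To check reverse lookup independently of Spark and HBase, I can run a stand-alone snippet (plain java.net, illustrative only; it mirrors the kind of lookup DNS.reverseDns performs):

```java
import java.net.InetAddress;

public class ReverseDnsCheck {
    public static void main(String[] args) throws Exception {
        // Resolve an address and attempt a reverse lookup, similar to what
        // Hadoop does for each region server host when computing splits.
        // getCanonicalHostName() falls back to the literal address if the
        // reverse lookup fails, so this always prints something.
        InetAddress addr = InetAddress.getByName("127.0.0.1");
        String host = addr.getCanonicalHostName();
        System.out.println("reverse lookup for 127.0.0.1 -> " + host);
    }
}
```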


Connecting with the hbase shell, on the other hand, works fine: the scan and count commands both run and give accurate results.
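For reference, this is what I run to verify the table (table name taken from the log line above):

```shell
hbase shell
# inside the shell:
scan 'Input-data'
count 'Input-data'
```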

Please help me solve this problem.

hbase mapreduce hadoop apache-zookeeper apache-spark


