EOFException while connecting to HDFS in Hadoop

In the attached test program, I am trying to copy a file from a local drive to HDFS. The code looks like this:

package foo.foo1.foo2.test;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestTestTest {

    public static void main(String[] args) {

    String srcLocation = "foo";
    String destination = "hdfs:///tmp/";

    FileSystem hdfs = null;

    Configuration configuration = new Configuration();
    configuration.set("fs.default.name", "hdfs://namenode:54310/");

    try {
        hdfs = FileSystem.get(configuration);
    } catch (IOException e2) {
        e2.printStackTrace();
        return;
    }

    Path srcpath = new Path(srcLocation);
    Path dstpath = new Path(destination);

    try {
        hdfs.copyFromLocalFile(srcpath, dstpath);
    } catch (IOException e) {
        e.printStackTrace();
    }
    }
}

This fails with the following exception:

java.io.IOException: Call to namenode/ failed on local exception:     java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
    at org.apache.hadoop.ipc.Client.call(Client.java:743)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
    at foo.foo1.foo2.test.TestTestTest.main(TestTestTest.java:22)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)


My question is deceptively simple: what is causing this, and how can I get this program to work? From what little information I could find, I understand that the problem is with HDFS and that it has something to do with the fs.default.name property in the configuration. Below is the relevant section of my core-site.xml file:




Perhaps of particular interest is the fact that if I concatenate all the jars on my classpath into one big jar and run this program with the hadoop command, it works fine. So what am I doing wrong?



4 answers

Make sure you are compiling against the same version of Hadoop that you are running on your cluster.
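
One quick way to check the client side (a minimal sketch, assuming the org.apache.hadoop.util.VersionInfo class that ships with Hadoop; compare its output with what the hadoop version command prints on the cluster):

import org.apache.hadoop.util.VersionInfo;

public class PrintClientVersion {

    public static void main(String[] args) {
        // Version of the Hadoop jars actually on the client classpath;
        // this should match the version the namenode is running.
        System.out.println("Client Hadoop version: " + VersionInfo.getVersion());
        System.out.println("Built from revision:   " + VersionInfo.getRevision());
    }
}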



An EOFException like this usually points to a serialization mismatch on the RPC interface between client and server, so read the exception carefully and make sure you are executing the correct version of Hadoop.



The problem may also be in the code itself: check that the source file path "foo" actually exists and is correct.
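
As a sanity check (a sketch, not from the original post), you can verify the local source path before attempting the copy:

import java.io.File;

public class CheckSourcePath {

    public static void main(String[] args) {
        // "foo" is resolved relative to the JVM's working directory,
        // which can differ depending on how the program is launched.
        File src = new File("foo");
        if (!src.exists()) {
            System.err.println("Local source path not found: " + src.getAbsolutePath());
        }
    }
}

Note that the stack trace in the question fails inside FileSystem.get(), before the copy is attempted, so a bad source path alone would not produce that exception, but it is still worth verifying.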



I have faced a similar issue in the past, but the problem was mine: I had two different versions of Hadoop installed. I had started the daemons from an earlier version while my .bash_profile pointed to the newer one, and this problem occurred. So make sure you don't have a version mismatch.
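
If you suspect two installations are in play, it can help to print which jar the client actually loads its Hadoop classes from and which installation the shell environment points at (a minimal sketch; HADOOP_HOME is whatever your .bash_profile exports, if anything):

import org.apache.hadoop.fs.FileSystem;

public class WhichHadoop {

    public static void main(String[] args) {
        // Jar the running JVM resolved the Hadoop FileSystem class from.
        System.out.println("FileSystem loaded from: "
                + FileSystem.class.getProtectionDomain().getCodeSource().getLocation());
        // Installation the hadoop shell scripts (and the daemons) would use.
        System.out.println("HADOOP_HOME = " + System.getenv("HADOOP_HOME"));
    }
}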


