Hadoop Hive cannot move source to destination

I am trying to use Hive 1.2.0 on top of Hadoop 2.6.0. I created a table employee. However, when I run the following query:

hive> load data local inpath '/home/abc/employeedetails' into table employee;

      

I am getting the following error:

Failed with exception Unable to move source file:/home/abc/employeedetails to destination hdfs://localhost:9000/user/hive/warehouse/employee/employeedetails_copy_1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

      

What am I doing wrong here? Are there any specific permissions I need to set? Thanks in advance!
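(The question does not include the table definition; a table created roughly like the one below would reproduce the setup, but the columns are purely illustrative.)

CREATE TABLE employee (id INT, name STRING, salary DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';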

+4




5 answers


As Rio mentioned, the issue is related to a lack of permissions to load data into the Hive table. I found that the following command solves the problem:



hadoop fs -chmod g+w /user/hive/warehouse
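To confirm the change took effect, you can list the warehouse directory itself and check that group write is set (the mode shown below is what a default setup would typically report):

hdfs dfs -ls -d /user/hive/warehouse

The permissions column should read something like drwxrwxr-x, and the user running Hive must belong to that group.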

      

+4




Check the permissions on the HDFS directory:

hdfs dfs -ls /user/hive/warehouse/employee/employeedetails_copy_1

      



It looks like you may not have permission to load data into the Hive table.
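If the listing shows the directory is owned by another account, one possible fix (just a sketch, assuming the user running the Hive CLI belongs to the directory's group) is to grant group write access on the table directory:

hdfs dfs -chmod -R g+w /user/hive/warehouse/employee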

+3




The error can also be caused by permissions on the local file system.

Change the permissions on the local file:

sudo chmod -R 777 /home/abc/employeedetails

      

Now run:

hive> load data local inpath '/home/abc/employeedetails' into table employee;
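chmod 777 is the bluntest option; a narrower alternative (assuming the account that runs the Hive CLI only needs to read the file) is:

chmod o+r /home/abc/employeedetails

The directories above it must also be traversable (o+x) by that account.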

      

0




If we face the same error after running the above command in distributed mode, we can try the command below on all nodes to add the yarn user to the hdfs superuser group:

sudo usermod -a -G hdfs yarn

Note: we got this error after restarting all the YARN services (in Ambari), and this resolved my problem. Use this administrative command with care.
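To confirm the group membership took effect (a quick check on a standard Linux setup), run:

id yarn

The hdfs group should appear in the list; the affected services may need a restart to pick up the change.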

0




I was facing the same problem and spent two days looking into it. Finally I found the reason: the DataNode starts for a moment and then shuts down.

Steps to solve it:

  1. hadoop fs -chmod -R 777 /home/abc/employeedetails

  2. hadoop fs -chmod -R 777 /user/hive/warehouse/employee/employeedetails_copy_1

  3. vi hdfs-site.xml

    and add the following property:

    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>

  4. hdfs --daemon start datanode

  5. vi hdfs-site.xml

    Check the locations of 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir'. If they point to the same place, you have to change one of them; that was the reason my DataNode could not start.

  6. Go into the current directory under 'dfs.datanode.data.dir', edit the VERSION file there, and copy in the clusterID from the VERSION file in the current directory under 'dfs.namenode.name.dir' so that the two match (see the check sketched after this list).

  7. start-all.sh

  8. If the above does not resolve it, be careful with the steps below because they delete data, but they are what finally solved the problem for me.

  9. stop-all.sh

  10. Delete the data folders pointed to by 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir', as well as the tmp folder.

  11. hdfs namenode -format

  12. start-all.sh

  13. The problem should now be solved.
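For step 6, a quick way to compare the two clusterIDs (the two paths below are placeholders for whatever 'dfs.namenode.name.dir' and 'dfs.datanode.data.dir' point to on your machine) is:

grep clusterID /path/to/namenode/dir/current/VERSION /path/to/datanode/dir/current/VERSION

Both lines should show the same clusterID; if they differ, copy the NameNode's value into the DataNode's VERSION file as described in step 6.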

You may also run into another problem like this:

Problem:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /opt/hive/tmp/root/1be8676a-56ac-47aa-ab1c-aa63b21ce1fc. Name node is in safe mode

Solution: hdfs dfsadmin -safemode leave
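To check whether the NameNode is still in safe mode before and after running that, you can use:

hdfs dfsadmin -safemode get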

0








