Hadoop Hive cannot move source to destination
I am trying to use Hive 1.2.0 on top of Hadoop 2.6.0. I created a table called employee. However, when I run the following statement:
hive> load data local inpath '/home/abc/employeedetails' into table employee;
I am getting the following error:
Failed with exception Unable to move source file:/home/abc/employeedetails to destination hdfs://localhost:9000/user/hive/warehouse/employee/employeedetails_copy_1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
What am I doing wrong here? Are there any specific permissions I need to set? Thanks in advance!
As Rio mentioned, the issue is a lack of permissions to load data into the Hive table. I found that the following command solves the problem:
hadoop fs -chmod g+w /user/hive/warehouse
Check the permissions of the HDFS directory:
hdfs dfs -ls /user/hive/warehouse/employee/employeedetails_copy_1
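If the listing shows that the directory is owned by another user and the group has no write access, a minimal sketch of fixing it might look like the following (this assumes the default warehouse location /user/hive/warehouse and that the HDFS superuser is named hdfs; adjust both to your setup):
# check ownership and permissions of the warehouse and the table directory
hdfs dfs -ls /user/hive/warehouse
# grant group write on the warehouse, run as the HDFS superuser
sudo -u hdfs hdfs dfs -chmod g+w /user/hive/warehouse
# alternatively, hand the table directory to the user running the load (here: abc)
sudo -u hdfs hdfs dfs -chown -R abc /user/hive/warehouse/employee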
It looks like you may not have permission to load data into the Hive table. The error can also occur due to insufficient permissions on the local file system. Change the permissions on the local file system:
sudo chmod -R 777 /home/abc/employeedetails
Now run:
hive> load data local inpath '/home/abc/employeedetails' into table employee;
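Note that chmod 777 works, but the real requirement is that the user reading the local file can traverse every parent directory (execute bit) and read the file itself. With the hive CLI that is your own user; with HiveServer2 it is the service user, often called hive. A quick check under that assumption:
# verify the hive user can reach and read the local file
sudo -u hive ls -l /home/abc/employeedetails
sudo -u hive head -1 /home/abc/employeedetails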
If you face the same error after running the above command in distributed mode, try the command below as a superuser on all nodes:
sudo usermod -a -G hdfs yarn
Note: I got this error after restarting all YARN services (in Ambari), and the command above resolved it. This is an administrative command, so take care when running it.
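To confirm that the group change actually took effect on a node before restarting the YARN services, you can list the groups of the yarn user; a minimal check might be:
# hdfs should now appear among the groups of the yarn user
id yarn
# or equivalently
groups yarn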
I faced the same problem and spent two days investigating it. Finally I found the reason: the DataNode starts for a moment and then shuts down.
Steps to solve it:
1. hadoop fs -chmod -R 777 /home/abc/employeedetails
2. hadoop fs -chmod -R 777 /user/hive/warehouse/employee/employeedetails_copy_1
3. vi hdfs-site.xml and add the property dfs.permissions.enabled with the value false (see the sketch after this list).
4. hdfs --daemon start datanode
5. vi hdfs-site.xml again and check the locations of 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir'. If they point to the same directory, you have to change one of them; this was the reason my DataNode could not start. Then go to 'dfs.datanode.data.dir'/data/current, edit the VERSION file, and copy the clusterID so that it matches the clusterID in the VERSION file under 'dfs.namenode.name.dir'/data/current (the grep check after this list shows how to compare them).
6. start-all.sh
If the above does not resolve it, follow the steps below with caution, since they delete data; in my case these are the steps that finally solved the problem.
7. stop-all.sh
8. Delete the data directories pointed to by 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir', and also the tmp directory.
9. hdfs namenode -format
10. start-all.sh
This solves the problem.
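For step 3 above, the property has to go inside the <configuration> element of hdfs-site.xml. A minimal sketch of the entry (keep in mind that disabling HDFS permission checking is a blunt workaround rather than a recommended setting on a shared cluster):
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
For step 5, you can compare the two clusterIDs before editing anything. The paths below are only examples; substitute the directories configured in your hdfs-site.xml:
grep clusterID /hadoop/dfs/name/current/VERSION /hadoop/dfs/data/current/VERSION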
You may also run into another problem like this:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /opt/hive/tmp/root/1be8676a-56ac-47aa-ab1c-aa63b21ce1fc. Name node is in safe mode.
Solution:
hdfs dfsadmin -safemode leave
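Before forcing the NameNode out of safe mode, it is worth checking whether it is simply still starting up or waiting for blocks to be reported, in which case it leaves safe mode on its own. A quick check first, then the manual override:
# report the current safe mode status
hdfs dfsadmin -safemode get
# if it never leaves on its own, force it
hdfs dfsadmin -safemode leave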