Why do I get "Too many failed attempts" every other day?

From one TaskTracker or another, I get this error whenever we run two big Pig jobs that compress about 400 GB of data. We found that after killing the jobs and leaving the cluster idle for a while, everything is fine again. Can anyone suggest what the real problem might be?



1 answer


Solution: change the datanode's /etc/hosts file. The hosts file format, in short: each line has three parts, the first is the network IP address, the second is the hostname or domain name, the third is the host alias. The detailed steps are as follows. 1. First check the hostname:

cat /proc/sys/kernel/hostname

You will see the HOSTNAME value; change it to the corresponding IP and then exit. 2. Use the command:

hostname *.*.*.*



The asterisks are replaced with the corresponding IP address.
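For illustration, step 2 on the example datanode used later in this answer would look like the line below (10.200.187.77 is only the sample address from the hosts lines in step 3; substitute your own datanode's IP):

hostname 10.200.187.77

Note that a hostname set this way typically does not survive a reboot; the permanent value usually lives in a distribution-specific file such as /etc/hostname or /etc/sysconfig/network.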

3. Change your hosts configuration as follows:

127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
10.200.187.77   hadoop-datanode

If the configured IP address now resolves and displays the hostname, the change was successful; if there is still a problem, keep adjusting the hosts file.
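As a quick way to check the result (a sketch; hadoop-datanode and 10.200.187.77 are the example values from the hosts lines above, substitute your own):

getent hosts hadoop-datanode

ping -c 1 hadoop-datanode

Both should report 10.200.187.77; if they do not, the /etc/hosts entry on that datanode still needs fixing.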
