HttpFS error: Operation category READ is not supported in state standby
I am working on Apache Hadoop 2.7.1 and I have a cluster consisting of 3 nodes:
nn1
nn2
dn1
nn1 is the dfs.default.name, so it is the master node.
I installed HttpFS and started it, after restarting all the services of course. When nn1 is active and nn2 is standby, I can send this request
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
from my browser and an open/save dialog appears for that file. But when I kill the NameNode running on nn1 and start it again as usual, then due to high availability nn1 becomes standby and nn2 becomes active.
HttpFS should keep working even while nn1 is in standby, but sending the same request now
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
gives this error:
{"RemoteException":{"message":"Operation category READ is not supported in state standby","exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}
Shouldn't HttpFS overcome nn1's standby state and fetch the file? Is that due to a misconfiguration, or is there some other reason?
My core-site.xml:
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
It looks like HttpFS is not highly available by itself. This may be due to missing configuration needed to connect clients to the currently active NameNode.
Make sure the property fs.defaultFS in core-site.xml is configured with the correct nameservice ID.
If you have the following in hdfs-site.xml
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
then in core-site.xml it should be
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
Also configure the Java class name that the DFS client will use to determine which NameNode is currently active and serving client requests.
Add this property to hdfs-site.xml:
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
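For the failover proxy provider to have something to fail over to, the client also needs the nameservice's NameNode IDs and their RPC addresses in hdfs-site.xml. A minimal sketch of those properties, assuming NameNode IDs nn1 and nn2 and the default RPC port 8020 (adjust hosts and ports to your cluster):

<!-- assumption: NameNode IDs nn1/nn2 and RPC port 8020 -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2:8020</value>
</property>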
Restart the NameNodes and HttpFS after adding the properties on all nodes.
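Once everything is restarted, you can check that failover is now transparent to HttpFS; a quick check, assuming your NameNode IDs are nn1 and nn2:

# check which NameNode is currently active (assumes NameNode IDs nn1/nn2)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# the HttpFS request should now succeed whichever NameNode is active
curl -L "http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root"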