Status_Code 403 while setting up Spark Component on the HDP 2.3 cluster


Recently I was trying to set up the Spark component on a Hortonworks HDP 2.3 cluster. During installation I hit an error like this: "http://<serverName>:50070/webhdfs/v1/user/spark?op=MKDIRS&" returned status_code=403

The complete stack trace from the Ambari server looks like this:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SPARK/", line 90, in <module>
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 218, in execute
File "/var/lib/ambari-agent/cache/common-services/SPARK/", line 54, in start
File "/var/lib/ambari-agent/cache/common-services/SPARK/", line 48, in configure
setup_spark(env, 'server', action = 'config')
File "/var/lib/ambari-agent/cache/common-services/SPARK/", line 44, in setup_spark
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 157, in __init__
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 118, in run_action
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 390, in action_create_on_execute
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 387, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 246, in action_delayed
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 256, in _create_resource
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 280, in _create_directory
self.util.run_command(target, 'MKDIRS', method='PUT')
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/", line 201, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT 'http://serverName:50070/webhdfs/v1/user/spark?op=MKDIRS&' returned status_code=403.

Well, this issue is relatively simple to resolve.

Check your NameNode status first: most of the time this error occurs because the NameNode is in safe mode.

You can either wait for the NameNode to leave safe mode on its own or force it out. When the NameNode is configured in High Availability mode, it can take a while to come out of safe mode after a cluster restart, because block reports for all the existing blocks must be sent to both the active and standby NameNodes.
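A minimal way to check and, if needed, force the NameNode out of safe mode from the command line (this assumes the hdfs client is on your PATH and that you can run commands as the hdfs superuser; these commands need a live cluster):

```shell
# Check whether the NameNode is currently in safe mode;
# prints "Safe mode is ON" or "Safe mode is OFF"
hdfs dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
hdfs dfsadmin -safemode wait

# Or force it out immediately (only do this if you are sure
# the block reports have been received and the cluster is healthy)
sudo -u hdfs hdfs dfsadmin -safemode leave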

Once the NameNode is out of safe mode, restart the install process for the Spark component; it should go through smoothly.
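Before restarting the install, you can verify that WebHDFS accepts writes again by re-running by hand the same call Ambari makes. The host name and the user.name value below are placeholders for your own cluster; this needs a running NameNode:

```shell
# Retry the MKDIRS call that failed with 403; a trailing 200 means
# the NameNode is writable again (user.name must be an HDFS superuser
# or a user allowed to create directories under /user)
curl -sS -L -w '%{http_code}' -X PUT \
  "http://serverName:50070/webhdfs/v1/user/spark?op=MKDIRS&user.name=hdfs"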


