Status_Code 403 while setting up Spark Component on the HDP 2.3 cluster

Hi,

Recently I was trying to set up the Spark component on a Hortonworks HDP 2.3 cluster. During the installation I ran into an error like this: http://<serverName>:50070/webhdfs/v1/user/spark?op=MKDIRS&user.name=hdfs returned status_code=403

The complete stack trace from the Ambari server looks like this:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/job_history_server.py", line 90, in <module>
JobHistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 218, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/job_history_server.py", line 54, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/job_history_server.py", line 48, in configure
setup_spark(env, 'server', action = 'config')
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/setup_spark.py", line 44, in setup_spark
mode=0775
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 390, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 387, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 246, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 256, in _create_resource
self._create_directory(self.main_resource.resource.target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 280, in _create_directory
self.util.run_command(target, 'MKDIRS', method='PUT')
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 201, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT 'http://serverName:50070/webhdfs/v1/user/spark?op=MKDIRS&user.name=hdfs'' returned status_code=403.

Well, this issue is relatively simple to fix.

Check your NameNode status: most of the time this error occurs because the NameNode is in safe mode. While the NameNode is in safe mode HDFS is read-only, so the WebHDFS MKDIRS call that Ambari issues to create /user/spark is rejected with a 403.
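
You can confirm this from any node that has the HDFS client installed; this is the standard HDFS CLI, run here as the hdfs user:

# Check whether the NameNode is currently in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get
# Prints "Safe mode is ON" while the problem exists,
# "Safe mode is OFF" once it is resolved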

You can either wait for the NameNode to leave safe mode on its own, or force it out. Generally, when the NameNode is configured in High Availability mode it takes longer to come out of safe mode after a cluster restart, because of the number of existing blocks and because the block reports need to be sent to both the active and standby NameNodes.
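
Both options are available through the same dfsadmin command (note that forcing with leave skips the usual block-threshold check, so use it with care):

# Option 1: block until the NameNode leaves safe mode on its own
sudo -u hdfs hdfs dfsadmin -safemode wait

# Option 2: force the NameNode out of safe mode immediately
sudo -u hdfs hdfs dfsadmin -safemode leave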

Once the NameNode is out of safe mode, retry the install process for the Spark component; it should go through smoothly.
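
Normally you would just hit Retry on the failed task in the Ambari UI, but if you prefer the command line, a request like this against the Ambari REST API should re-trigger the service start (and with it the failed configure step). This is a sketch assuming the default admin:admin credentials; replace <ambariServer> and <clusterName> with your own values:

# Ask Ambari to bring the SPARK service to the STARTED state
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Spark via REST"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://<ambariServer>:8080/api/v1/clusters/<clusterName>/services/SPARK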

Cheers.
