Category Archives: Hadoop

Full authentication is required to access this resource while deleting a service/component from Ambari

Hi,

We had set up a component/service which was no longer required by the customer, so we decided to get rid of it. I checked the Ambari documentation and tried a few trial-and-error approaches, but nothing helped much.

I tried many permutations and combinations, went back to the Ambari documentation on deleting a service, and searched Stack Overflow, but finally got stuck with:

HTTP ERROR: 403

Problem accessing /api/v1/hostname.

Reason: Full authentication is required to access this resource

Finally, after spending quite some time on it, I managed to fix it. Sharing the RESTful command with the community to save others' time 🙂

curl -v -u admin:@dminPassword -H "X-Requested-By: ambari" -X DELETE "http://<Ambari_Host>/api/v1/clusters/<ClusterName>/services/<ServiceName>"
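
If the DELETE still fails, the service may simply not have been stopped first; as far as I can tell, Ambari expects a service to be in the INSTALLED (stopped) state before it will delete it. A quick GET on the same endpoint shows the current state:

curl -u admin:@dminPassword -H "X-Requested-By: ambari" -X GET "http://<Ambari_Host>/api/v1/clusters/<ClusterName>/services/<ServiceName>"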

I recently learnt that this removes the service only from the Ambari GUI; it does not delete it from the Ambari database.

So you need to log in to the Ambari database (PostgreSQL by default) and delete the leftover rows from the relevant tables. For example, for the Knox service I executed the following DMLs:

psql ambari ambari
Password for user ambari:    # the default password is "bigdata"
delete from servicedesiredstate where service_name like '%KNOX%';
delete from clusterservices where service_name like 'KNOX';
delete from hostcomponentstate where component_name like '%KNOX%';
delete from hostcomponentdesiredstate where component_name like '%KNOX%';
delete from servicecomponentdesiredstate where component_name like '%KNOX%';
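
To double-check that nothing was left behind, the same tables can be queried from the shell; the psql flags below are standard, and in my experience you also need to restart ambari-server for the UI to pick up the change:

psql -U ambari -d ambari -c "select service_name from clusterservices where service_name like '%KNOX%';"
ambari-server restart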

Note: if you copy and paste the commands above from a blog post, make sure the hyphens and double quotes come through as plain ASCII characters, since WordPress tends to encode them as dashes and smart quotes that break the commands.

Namenode doesn’t start after upgrading Hadoop version

I have copied all the files correctly and all my jars are in place. I have set all the important properties correctly (namenode address, etc.) and formatted HDFS. Now I am trying to start the cluster.

If you are running a cluster on version 0.20 or earlier and upgrade it to 1.0.4 or above, then when you restart the cluster with start-all.sh, all the daemons should come up on their respective machines (masters and slaves). But my namenode is not starting…

This is due to changes in the HDFS file system layout itself. If you check the namenode log you will see:

File system image contains an old layout version -18. An upgrade to version -32 is required.
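
If you are not sure where this message shows up, it is in the namenode log. The path below assumes a default tarball install under $HADOOP_HOME, so adjust it to your layout:

grep -i "layout version" $HADOOP_HOME/logs/hadoop-*-namenode-*.log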

Solution:

Very simple:

There is no need to stop the other daemons (you can stop them if you want, but it is not required).

Run start-dfs.sh -upgrade (the -upgrade flag is mandatory here).

Using jps you can see that the namenode is now running as well!
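
Once the namenode is back up, you can watch the upgrade and later finalize it with dfsadmin. These commands are from the Hadoop 1.x tool set, so double-check them against your version:

hadoop dfsadmin -upgradeProgress status   # shows whether the upgrade is still in progress
hadoop dfsadmin -finalizeUpgrade          # run only once you are satisfied; this discards the pre-upgrade backup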

Common problem while copying data from source to HDFS using Flume

Flume: java.lang.ClassNotFoundException: org.apache.hadoop.io.SequenceFile$CompressionType

Scenario: I wanted to copy logs from the source to HDFS. The HDFS daemons are up and running on the cluster, and I have pointed the Flume sink to HDFS, but when I try to start the agent it does not start. On checking the log files I see a stack trace like this:

Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.SequenceFile$CompressionType
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)

It is very clear that Flume is not able to find the expected class on its classpath, hence the solution:
Copy your hadoop-core-xyz.jar to the $FLUME_HOME/lib directory.
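
For reference, this is roughly what the fix looks like on the Flume machine. The jar name and the $HADOOP_HOME/$FLUME_HOME paths are assumptions based on a plain tarball install, so adjust them to your environment:

ls $HADOOP_HOME/hadoop-core-*.jar     # confirm which Hadoop core jar you have
cp $HADOOP_HOME/hadoop-core-*.jar $FLUME_HOME/lib/
# then restart the Flume agent so it picks up the jar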

Note: if you are running your Hadoop cluster on a 0.20 version, copying this jar will fix the ClassNotFoundException, but you will end up getting authentication errors instead. Try using the stable 1.0.x versions.