I have copied all the files correctly and all my jars are in place. I have set all the important properties (e.g. the namenode address) and formatted HDFS. Now I am trying to start the cluster.
If you were running a cluster at version 0.20 or earlier and you upgrade to 1.0.4 or above, then when you restart the cluster with start-all.sh, all the daemons should start on their respective machines (masters and slaves). But my namenode is not starting…
This is due to file system layout changes in HDFS itself. If you check the log, you will see:
File system image contains an old layout version -18. An upgrade to version -32 is required.
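To find this message yourself, you can grep the namenode log. A quick sketch, assuming the default log location (`$HADOOP_HOME/logs`) and the default log file naming (`hadoop-<user>-namenode-<hostname>.log`); adjust the path if your installation logs elsewhere:

```shell
# Search the namenode log for the layout-version error.
# Path and file name are assumptions based on Hadoop's defaults.
grep "layout version" $HADOOP_HOME/logs/hadoop-*-namenode-*.log
```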
The fix is very simple:
There is no need to stop the other daemons (you can stop them if you want, but it is not required).
Use start-dfs.sh -upgrade (the -upgrade flag is mandatory here).
Using jps, you can see that the namenode is now running as well!
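The steps above can be sketched as the following commands, run on the namenode machine. The finalize step at the end is an addition of mine, not part of the original fix: Hadoop keeps the pre-upgrade checkpoint around until you finalize, so only run it once you are sure you will not need to roll back:

```shell
# Restart HDFS with the upgrade flag so the namenode converts the
# on-disk filesystem image from the old layout version to the new one.
start-dfs.sh -upgrade

# Verify that the NameNode process (among others) is now running.
jps

# Optional: once the cluster is confirmed healthy, finalize the upgrade
# so the previous checkpoint is discarded. After this, rollback to the
# old version is no longer possible.
hadoop dfsadmin -finalizeUpgrade
```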
Tagged: troubleshooting hadoop namenode