Use the following steps to prepare your system for the upgrade.
If you are upgrading Ambari as well as the stack, you must know the location of the Nagios servers for that process. Use the Services -> Nagios -> Summary panel to locate the hosts on which they are running.
Use the Services view in the Ambari Web UI to stop all services, including all clients, that run on HDFS. Do not stop HDFS itself yet.
Finalize any prior upgrade if you have not done so already.
su $HDFS_USER
hadoop namenode -finalize
Create the following logs and other files. Because the upgrade to 2.0.6 includes a version upgrade of HDFS, creating these logs allows you to check the integrity of the file system after the upgrade.
Run fsck with the following flags and send the results to a log. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.

su $HDFS_USER
hadoop fsck / -files -blocks -locations > /tmp/dfs-old-fsck-1.log

where $HDFS_USER is the HDFS Service user (by default, hdfs).

Capture the complete namespace of the file system. (The following command does a recursive listing of the root file system.)
su $HDFS_USER
hadoop dfs -lsr / > /tmp/dfs-old-lsr-1.log

where $HDFS_USER is the HDFS Service user (by default, hdfs).

Create a list of all the DataNodes in the cluster.
su $HDFS_USER
hadoop dfsadmin -report > /tmp/dfs-old-report-1.log

where $HDFS_USER is the HDFS Service user (by default, hdfs).

Optional: Copy all data stored in HDFS, or only the data that would be unrecoverable if lost, to a local file system or to a backup instance of HDFS.
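For example, a minimal sketch of copying one HDFS directory to the local file system; the paths /user/important-data and /backup/hdfs-data are placeholders for your own data and backup locations:

su $HDFS_USER
hadoop dfs -copyToLocal /user/important-data /backup/hdfs-data

For large data sets, hadoop distcp can copy to a backup HDFS instance instead.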
Optional: Create the logs again and compare them with the first set to make sure the results are identical.
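One way to make that comparison, assuming you send the second fsck run to /tmp/dfs-old-fsck-2.log (a hypothetical file name following the pattern above):

su $HDFS_USER
hadoop fsck / -files -blocks -locations > /tmp/dfs-old-fsck-2.log
diff /tmp/dfs-old-fsck-1.log /tmp/dfs-old-fsck-2.log

diff produces no output when the two logs are identical.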
Save the namespace. You must be the HDFS service user to do this, and you must put the cluster in Safe Mode first.
Important: This is a critical step. If you do not save the namespace before you upgrade, the NameNode will not start afterwards.
su $HDFS_USER
hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace
Copy the following checkpoint files into a backup directory. The files are on your NameNode host. To find the directory that holds them, use the Services view in the UI: select the HDFS service, select the Configs tab, and in the NameNode section look up the property NameNode Directories. A sketch of the copy follows the list.
dfs.name.dir/edits
dfs.name.dir/image/fsimage
dfs.name.dir/current/fsimage
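A minimal sketch of the copy, assuming the NameNode directory is /hadoop/hdfs/namenode (substitute the value of NameNode Directories on your cluster) and /backup/namenode is a hypothetical backup directory:

mkdir -p /backup/namenode
cp /hadoop/hdfs/namenode/edits /backup/namenode/
cp /hadoop/hdfs/namenode/image/fsimage /backup/namenode/image-fsimage
cp /hadoop/hdfs/namenode/current/fsimage /backup/namenode/current-fsimage

The two fsimage files are renamed in the backup so they do not overwrite each other.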
On the JobTracker host, copy /etc/hadoop/conf to a backup directory (see the sketch after the notes below).

Note: If you have deployed a custom version of capacity-scheduler.xml and mapred-queue-acls.xml, after the upgrade you will need to use Ambari Web to edit the default Capacity Scheduler. Select Services view -> YARN -> Configs -> Scheduler -> Capacity Scheduler.

Important: Fair Scheduler is not supported for use with HDP 2.x.
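A minimal sketch of the configuration backup, with /backup/jobtracker-conf as a hypothetical backup location:

mkdir -p /backup/jobtracker-conf
cp -r /etc/hadoop/conf /backup/jobtracker-conf/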
Store the layoutVersion for the NameNode. Make a copy of the file at $dfs.name.dir/current/VERSION, where $dfs.name.dir is the value of the config parameter NameNode directories. This file will be used later to verify that the layout version is upgraded.
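For example, assuming the same hypothetical paths as in the earlier sketch:

cp /hadoop/hdfs/namenode/current/VERSION /backup/namenode/VERSION.pre-upgrade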
Stop HDFS. Make sure all services in the cluster are completely stopped.
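One way to confirm on a host that the HDFS daemons have actually exited is to list the running Java processes; NameNode, DataNode, and SecondaryNameNode should no longer appear:

su $HDFS_USER
jps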
If you are upgrading Hive, back up the Hive database.
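How you back up the database depends on your Metastore. As one sketch, if the Hive Metastore runs on MySQL with a database named hive (both the database name and the user here are placeholders for your own values):

mysqldump -u hive -p hive > /backup/hive-metastore.sql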
Stop Ambari Server. On the Server host:
ambari-server stop
Stop Ambari Agents. On each host:
ambari-agent stop