Stop all services (including MapReduce) and client applications deployed on HDFS using the instructions provided here.
Run the fsck command as instructed below and fix any errors. (The resulting file will contain a complete block map of the file system.)

su $HDFS_USER
hadoop fsck / -files -blocks -locations > dfs-old-fsck-1.log

where $HDFS_USER is the HDFS Service user. For example, hdfs.

Use the following instructions to compare the status before and after the upgrade:
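Before proceeding, it can help to confirm that the captured fsck log reports a healthy filesystem. A minimal sketch — the helper name and sample file are illustrative, and the "is HEALTHY" marker is the phrase hadoop fsck prints for a clean filesystem (wording may vary by Hadoop version):

```shell
# Sketch: check a captured fsck log for the HEALTHY marker.
# Assumption: "is HEALTHY" matches the wording of your Hadoop version.
check_fsck_log() {
  grep -q "is HEALTHY" "$1"
}

# Illustrative run against a sample log line (stand-in for dfs-old-fsck-1.log):
printf "The filesystem under path '/' is HEALTHY\n" > /tmp/sample-fsck.log
if check_fsck_log /tmp/sample-fsck.log; then
  echo "fsck reported a healthy filesystem"
else
  echo "fsck reported problems - fix them before upgrading" >&2
fi
```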
Note: The following commands must be executed by the user running the HDFS service (by default, the user is hdfs).

Capture the complete namespace of the file system. Run a recursive listing of the root file system:

su $HDFS_USER
hadoop dfs -lsr / > dfs-old-lsr-1.log

where $HDFS_USER is the HDFS Service user. For example, hdfs.

Run the report command to create a list of DataNodes in the cluster:
su $HDFS_USER
hadoop dfsadmin -report > dfs-old-report-1.log
where $HDFS_USER is the HDFS Service user. For example, hdfs.

Copy all data (or, at minimum, any unrecoverable data) stored in HDFS to a local file system or to a backup instance of HDFS.
Optionally, repeat steps 3 (a) through 3 (c) and compare the results with the previous run to verify that the state of the file system is unchanged.
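The before/after comparison can be scripted with diff. A minimal sketch, assuming the post-upgrade captures are written to correspondingly named dfs-new-*.log files (those names, and the stand-in files below, are illustrations, not part of the procedure):

```shell
# Sketch: compare a pre-upgrade capture log with its post-upgrade
# counterpart; prints "match" or "differ" for the pair.
compare_logs() {
  if diff -q "$1" "$2" >/dev/null 2>&1; then
    echo "match: $1 vs $2"
  else
    echo "differ: $1 vs $2"
  fi
}

# Illustrative run with stand-in files:
printf "/user/a\n/user/b\n" > /tmp/dfs-old-lsr-1.log
printf "/user/a\n/user/b\n" > /tmp/dfs-new-lsr-1.log
compare_logs /tmp/dfs-old-lsr-1.log /tmp/dfs-new-lsr-1.log
# prints "match: /tmp/dfs-old-lsr-1.log vs /tmp/dfs-new-lsr-1.log"
```

Note that the report output can legitimately differ in fields such as remaining capacity, so a line-by-line diff of dfs-old-report-1.log is best read manually rather than required to match exactly.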
As the HDFS Service user, execute the following commands to enter safe mode and save the namespace:

su $HDFS_USER
hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace

where $HDFS_USER is the HDFS Service user. For example, hdfs.

Copy the following checkpoint files into a backup directory:

dfs.name.dir/edits
dfs.name.dir/image/fsimage
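The checkpoint backup can be scripted along these lines. The directory paths are placeholders — substitute your configured dfs.name.dir and a safe backup location; the demo fixture exists only so the sketch runs standalone:

```shell
# Placeholders: substitute your real dfs.name.dir and backup location.
NAME_DIR=/tmp/demo-name-dir
BACKUP_DIR=/tmp/demo-namenode-backup

# Demo fixture only - on a real NameNode these files already exist.
mkdir -p "$NAME_DIR/image"
touch "$NAME_DIR/edits" "$NAME_DIR/image/fsimage"

# Copy the checkpoint files, preserving timestamps and permissions.
mkdir -p "$BACKUP_DIR/image"
cp -p "$NAME_DIR/edits" "$BACKUP_DIR/edits"
cp -p "$NAME_DIR/image/fsimage" "$BACKUP_DIR/image/fsimage"
```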
Stop the HDFS service using the instructions provided here. Verify that all the HDP services in the cluster are stopped.
If you are upgrading Hive, back up the Hive database.
Configure the local repositories.
The standard HDP install fetches the software from a remote yum repository over the Internet. To use this option, you must set up access to the remote repository and have an available Internet connection for each of your hosts.
Note If your cluster does not have access to the Internet, or you are creating a large cluster and you want to conserve bandwidth, you can instead provide a local copy of the HDP repository that your hosts can access. For more information, see Deployment Strategies for Data Centers with Firewalls, a separate document in this set.
The file you download is named hdp.repo. To function properly in the system, it must be named HDP.repo. Once you have completed the mv of the new repo file to the repos.d folder, make sure there is no file named hdp.repo anywhere in your repos.d folder.

Upgrade the HDP repository on all hosts and replace the old repo file with the new file.
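The rename-and-verify step might look like the following sketch, where /tmp/demo-repos.d stands in for /etc/yum.repos.d so the example is self-contained:

```shell
# Stand-in for /etc/yum.repos.d (illustration only).
REPOS_D=/tmp/demo-repos.d
mkdir -p "$REPOS_D"
touch "$REPOS_D/hdp.repo"            # the freshly downloaded file

# Rename to the required HDP.repo.
mv "$REPOS_D/hdp.repo" "$REPOS_D/HDP.repo"

# Verify that no file named hdp.repo remains anywhere under the
# folder; this find should print nothing.
find "$REPOS_D" -name hdp.repo
```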
From a terminal window, type:
For RHEL and CentOS 5
wget http://docs.hortonworks.com/HDP/centos5/1.x/GA/hdp.repo -O /etc/yum.repos.d/hdp.repo
For RHEL and CentOS 6
wget http://docs.hortonworks.com/HDP/centos6/1.x/GA/hdp.repo -O /etc/yum.repos.d/hdp.repo
For SLES 11
wget http://docs.hortonworks.com/HDP/suse11/1.x/GA/hdp.repo -O /etc/zypp/repos.d/hdp.repo
Confirm that the HDP repository is configured by checking the repo list.
For RHEL/CentOS:
yum repolist
For SLES:
zypper repos
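To script the confirmation, you can grep the repo list for an HDP entry. A sketch that parses captured output — the sample text below is an assumption about what yum repolist might print; in practice you would pipe the live output of yum repolist (or zypper repos on SLES) into the check:

```shell
# Sketch: report whether an HDP repo id appears in saved repolist output.
has_hdp_repo() {
  grep -qi "HDP" "$1"
}

# Illustrative sample of repolist output (assumption, for the demo only):
cat > /tmp/sample-repolist.txt <<'EOF'
repo id          repo name                          status
HDP-1.x          Hortonworks Data Platform 1.x      enabled
EOF

if has_hdp_repo /tmp/sample-repolist.txt; then
  echo "HDP repository is configured"
else
  echo "HDP repository is missing" >&2
fi
```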