Update the stack version in the Server database, depending on whether you are using a local repository:
ambari-server upgradestack HDP-1.3.3
Upgrade the HDP repository on all hosts and replace the old repo file with the new file:
Important: The file you download is named hdp.repo. To function properly in the system, it must be named HDP.repo. After you have completed the "mv" of the new repo file to the repos.d folder, make sure there is no file named hdp.repo anywhere in your repos.d folder (you can verify this with the check shown after the commands below).
For RHEL/CentOS/Oracle Linux 5
wget http://public-repo-1.hortonworks.com/HDP/centos5/1.x/updates/1.3.3.0/hdp.repo
mv hdp.repo /etc/yum.repos.d/HDP.repo
For RHEL/CentOS/Oracle Linux 6
wget http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/hdp.repo
mv hdp.repo /etc/yum.repos.d/HDP.repo
For SLES 11
wget http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/hdp.repo
mv hdp.repo /etc/zypp/repos.d/HDP.repo
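After moving the file, you can optionally confirm that no lowercase hdp.repo file remains; a minimal check (use /etc/zypp/repos.d on SLES):
ls /etc/yum.repos.d | grep -i hdp
The output should include HDP.repo and must not include a file named hdp.repo.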
Upgrade the stack on all Agent hosts. Skip any components your installation does not use:
For RHEL/CentOS/Oracle Linux
Upgrade the following components:
yum upgrade "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hbase*" "hive*" "hcatalog*" "webhcat-tar*" "oozie*" hdp_mon_nagios_addons
Check to see if those components have been upgraded:
yum list installed | grep HDP-$old-stack-version-number
The only non-upgraded component you may see in this list is extjs, which does not need to be upgraded.
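For example, assuming your previous stack version was 1.3.2 (substitute your own value), the check would be:
yum list installed | grep HDP-1.3.2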
For SLES
Upgrade the following components:
zypper up collectd gccxml* pig* hadoop* sqoop* hive* hcatalog* webhcat-tar* zookeeper* oozie* hbase* hdp_mon_nagios_addons*
yast --update hadoop hcatalog hive
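No equivalent check is shown here for SLES; as an optional sketch, you can list installed packages with rpm and look for the old stack version string (substitute your previous version for 1.3.2):
rpm -qa | grep 1.3.2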
Start the Ambari Server. On the Server host:
ambari-server start
Start each Ambari Agent. On all Agent hosts:
ambari-agent start
Because the file system version has now changed, you must start the NameNode manually. On the NameNode host:
sudo su -l $HDFS_USER -c "/usr/lib/hadoop/bin/hadoop-daemon.sh start namenode -upgrade"
Depending on the size of your system, this step may take up to 10 minutes.
Track the status of the upgrade:
hadoop dfsadmin -upgradeProgress status
Continue tracking until you see:
Upgrade for version -44 has been completed. Upgrade is not finalized.
Note: You finalize the upgrade in a later step.
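If you prefer not to re-run the status command by hand, a small optional polling loop such as the following sketch waits for that message (run it as the same user that runs the dfsadmin command):
while ! hadoop dfsadmin -upgradeProgress status | grep -q "has been completed"; do sleep 30; done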
Open the Ambari Web GUI. If you have continued to run the Ambari Web GUI, do a hard reset on your browser. Use Services View to start the HDFS service. This starts the SecondaryNameNode and the DataNodes.
After the DataNodes are started, HDFS exits safemode. To monitor the status:
hadoop dfsadmin -safemode get
Depending on the size of your system, this may take up to 10 minutes. When HDFS exits safemode, the following is displayed in response to the command:
Safe mode is OFF
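Alternatively, dfsadmin can block until safemode is exited, which avoids re-running the get command by hand:
hadoop dfsadmin -safemode wait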
Make sure that the HDFS upgrade succeeded. Go through steps 2 and 3 in Section 9.1 to create new versions of the logs and reports. Substitute "new" for "old" in the file names as necessary. Compare the old and new versions of the following files:
dfs-old-fsck-1.log versus dfs-new-fsck-1.log. The files should be identical unless the hadoop fsck reporting format has changed in the new version.
dfs-old-lsr-1.log versus dfs-new-lsr-1.log. The files should be identical unless the format of hadoop fs -lsr reporting or the data structures have changed in the new version.
dfs-old-report-1.log versus dfs-new-report-1.log. Make sure all DataNodes previously belonging to the cluster are up and running.
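One way to compare each pair of logs is with diff; a minimal sketch, assuming the files are in the working directory used for Section 9.1:
diff dfs-old-fsck-1.log dfs-new-fsck-1.log
diff dfs-old-lsr-1.log dfs-new-lsr-1.log
The diffs should produce no output unless the reporting formats changed between versions; review the two report files by eye to confirm the DataNode counts.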
Use the Ambari Web Services View -> Services Navigation -> Start All to start services back up.
The upgrade is now fully functional but not yet finalized. Using the finalize command removes the previous version of the NameNode and DataNodes' storage directories.
Important: Once the upgrade is finalized, the system cannot be rolled back. Usually this step is not taken until thorough testing of the upgrade has been performed.
The upgrade must be finalized, however, before another upgrade can be performed.
Note: Directories used by Hadoop 1 services set in /etc/hadoop/conf/taskcontroller.cfg are not automatically deleted after upgrade. Administrators can choose to delete these directories after the upgrade.
To finalize the upgrade:
sudo su -l $HDFS_USER -c "hadoop dfsadmin -finalizeUpgrade"
where $HDFS_USER is the HDFS Service user (by default, hdfs).
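Once the upgrade has been finalized, re-running the status check should no longer report a pending upgrade (the exact message varies by Hadoop version):
sudo su -l $HDFS_USER -c "hadoop dfsadmin -upgradeProgress status"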