Stop Ambari Server. On the Ambari Server host:
ambari-server stop
Update the stack version in the Server database. Use the command appropriate for a remote or local repository, as described in this step.
Important: Make sure you delete the old MapReduce version before you run upgradestack.
ambari-server upgradestack HDP-2.1
Note: When upgrading the Stack, you may use a local repository. To upgrade the Stack using a local repository, provide the local HDP repository URL and the target OS type as parameters in the upgradestack command, as shown in the following example:
$ ambari-server upgradestack HDP-2.1 {HDP.Base.URL} {$os}
For more information about upgrading from a local repository, see Setting Up a Local Repository.
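For instance, with a hypothetical local mirror URL and CentOS 6 hosts (both values below are placeholders; substitute your own repository URL and OS type):
ambari-server upgradestack HDP-2.1 http://mirror.example.com/hdp/HDP/centos6/2.x/GA/2.1-latest centos6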
Upgrade the HDP repository on all hosts and replace the old repo file with the new file:
Important: The file you download is named hdp.repo. To function properly in the system, it must be named HDP.repo. Once you have completed the "mv" of the new repo file to the repos.d folder, make sure there is no file named hdp.repo anywhere in your repos.d folder.
For RHEL/CentOS/Oracle Linux 5
wget -nv http://public-repo-1.hortonworks.com/HDP/centos5/2.x/GA/2.1-latest/hdp.repo -O /etc/yum.repos.d/HDP.repo
For RHEL/CentOS/Oracle Linux 6
wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.1-latest/hdp.repo -O /etc/yum.repos.d/HDP.repo
For SLES 11
wget -nv http://public-repo-1.hortonworks.com/HDP/suse11/2.x/GA/2.1-latest/hdp.repo -O /etc/zypp/repos.d/HDP.repo
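To double-check the naming requirement from the Important note above, you can list the repo directory on each host (a quick sanity check; use /etc/zypp/repos.d for SLES):
ls /etc/yum.repos.d/ | grep -i hdp
Only HDP.repo should appear in the output.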
Back up the files in the following directories to a tmp folder:
/etc/webhcat/conf
/etc/oozie/conf
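For example, you might copy them as follows (the /tmp/upgrade_backup location is only a suggestion; any scratch directory works):
mkdir -p /tmp/upgrade_backup
cp -R /etc/webhcat/conf /tmp/upgrade_backup/webhcat-conf
cp -R /etc/oozie/conf /tmp/upgrade_backup/oozie-conf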
Remove the old Oozie configuration directories:
rm -rf /etc/oozie/conf
rm -rf /var/lib/oozie/conf
rm -rf /var/lib/oozie/oozie-server/conf
Upgrade the stack on all Agent hosts.
Note: Identify the HDP components installed on each host. Use Ambari Web, as described here, to view the components on each host in your cluster. Based on the HDP components installed, tailor the following upgrade commands for each host so that you upgrade only the components residing on that host. For example, if you know that a host has no HBase service or client packages installed, you can adapt the command to exclude HBase, as follows:
yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hive*"
For RHEL/CentOS/Oracle Linux
Remove remaining MapReduce components on all hosts:
yum erase hadoop-pipes hadoop-sbin hadoop-native
yum erase "webhcat*" "hcatalog*" "oozie*"
Upgrade the following components:
yum upgrade "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
yum install webhcat-tar-hive webhcat-tar-pig
yum install "hive*"
yum install oozie oozie-client
rpm -e --nodeps bigtop-jsvc
yum install bigtop-jsvc
Verify that the components were upgraded:
yum list installed | grep HDP-$old-stack-version-number
No components from the old stack version should appear in the returned list.
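For example, if the previous stack version was HDP 2.0.6 (substitute your actual previous version number):
yum list installed | grep HDP-2.0.6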
For SLES
Remove remaining MapReduce components on all hosts:
zypper remove hadoop-pipes hadoop-sbin hadoop-native
zypper remove webhcat\* hcatalog\* oozie\*
Upgrade the following components:
zypper up "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
zypper install webhcat-tar-hive webhcat-tar-pig
zypper up -r HDP-2.1.1.0
zypper install hive\*
zypper install oozie oozie-client
Verify that the components were upgraded:
rpm -qa | grep hadoop
rpm -qa | grep hive
rpm -qa | grep hcatalog
If components were not upgraded, upgrade them as follows:
yast --update hadoop hcatalog hive