Stop Ambari Server. On the Ambari Server host:
ambari-server stop
Update the stack version in the Ambari Server database. Use the command appropriate for a remote or local repository, as described in this step.
Important: Make sure you delete the old MapReduce version before you run upgradestack.
ambari-server upgradestack HDP-2.1
Upgrade the HDP repository on all hosts and replace the old repo file with the new file:
Important: The file you download is named hdp.repo. To function properly in the system, it must be named HDP.repo. Once you have completed the "mv" of the new repo file to the repos.d folder, make sure there is no file named hdp.repo anywhere in your repos.d folder.
For RHEL/CentOS/Oracle Linux 5
wget http://public-repo-1.hortonworks.com/HDP/centos5/2.x/updates/2.1.5.0/hdp.repo -O /etc/yum.repos.d/HDP.repo
For RHEL/CentOS/Oracle Linux 6
wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.5.0/hdp.repo -O /etc/yum.repos.d/HDP.repo
For SLES 11 SP1
wget http://public-repo-1.hortonworks.com/HDP/sles11sp1/2.x/updates/2.1.5.0/hdp.repo -O /etc/zypp/repos.d/HDP.repo
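The naming requirement above can be verified with a short check. The sketch below uses a scratch directory for demonstration; on a real host, point repo_dir at /etc/yum.repos.d (RHEL/CentOS/Oracle Linux) or /etc/zypp/repos.d (SLES).

```shell
# Sketch of the repo-file name check. repo_dir here is a scratch directory
# for demonstration; on a real host use /etc/yum.repos.d or /etc/zypp/repos.d.
repo_dir=$(mktemp -d)
touch "$repo_dir/HDP.repo"    # the correctly named file
if ls "$repo_dir" | grep -qx "hdp.repo"; then
    echo "found lowercase hdp.repo -- rename it to HDP.repo"
else
    echo "ok: no lowercase hdp.repo present"
fi
```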
Back up the files in the following directories on the Oozie server host, and make sure that all files, including *site.xml files, are copied.
mkdir oozie-conf-bak
cp -R /etc/oozie/conf/* oozie-conf-bak
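To confirm the backup captured everything, you can diff the backup against the live directory. The sketch below demonstrates the copy-and-verify pattern with scratch directories; on the Oozie server host, the source is /etc/oozie/conf and the backup is oozie-conf-bak, as above.

```shell
# Demonstration of the backup-and-verify pattern using scratch directories;
# on a real host the source is /etc/oozie/conf and the backup is oozie-conf-bak.
src=$(mktemp -d)
bak=$(mktemp -d)
echo "<configuration/>" > "$src/oozie-site.xml"    # stand-in *site.xml file
cp -R "$src/." "$bak/"
# diff -r exits 0 only if every file, including *site.xml, was copied
diff -r "$src" "$bak" && echo "backup verified"
```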
Remove the old Oozie directories on all Oozie server and client hosts:
rm -rf /etc/oozie/conf
rm -rf /usr/lib/oozie/
rm -rf /var/lib/oozie/
Upgrade the stack on all Agent hosts.
Note: For each host, identify the HDP components installed on it. Use Ambari Web, as described here, to view the components on each host in your cluster. Based on the HDP components installed, tailor the following upgrade commands for each host so that you upgrade only the components residing on that host. For example, if you know that a host has no HBase service or client packages installed, you can adapt the command to exclude HBase, as follows:
yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hive*"
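One way to tailor the command per host is to build it from the list of components actually installed there. A minimal sketch; the component list below is an example for a host without HBase, so substitute whatever Ambari Web reports for your host.

```shell
# Build a yum upgrade command from the components present on this host.
# The list below is an example; substitute the components Ambari Web
# reports for your host.
components="collectd gccxml pig hadoop sqoop zookeeper hive"
cmd="yum upgrade"
for c in $components; do
    cmd="$cmd \"$c*\""
done
echo "$cmd"    # review the command before running it
```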
For RHEL/CentOS/Oracle Linux
Remove the remaining MapReduce, WebHCat, HCatalog, and Oozie components on all hosts:
yum erase hadoop-pipes hadoop-sbin hadoop-native
yum erase "webhcat*" "hcatalog*" "oozie*"
Upgrade the following components:
yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
yum install webhcat-tar-hive webhcat-tar-pig
yum install hive*
yum install oozie oozie-client
rpm -e --nodeps bigtop-jsvc
yum install bigtop-jsvc
Verify that the components were upgraded:
yum list installed | grep HDP-$old-stack-version-number
No HDP components from the old stack version should appear in the returned list.
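This check can be scripted. The sketch below filters a package listing for the old version string; the sample listing and the version "2.0.6" are placeholders, and in practice you would pipe the output of `yum list installed` and use your previous stack version.

```shell
# Sketch: search an installed-package listing for leftovers from the old
# stack. "2.0.6" and the sample listing are placeholders; on a real host,
# pipe the output of `yum list installed` instead.
old_version="2.0.6"
installed='hadoop.x86_64 2.4.0.2.1.5.0
hive.x86_64 0.13.0.2.1.5.0
zookeeper.x86_64 3.4.5.2.1.5.0'
leftovers=$(printf '%s\n' "$installed" | grep "$old_version" || true)
if [ -z "$leftovers" ]; then
    echo "clean: no packages from HDP $old_version remain"
else
    echo "WARNING: old packages remain:"
    echo "$leftovers"
fi
```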
For SLES
Remove the remaining MapReduce, WebHCat, HCatalog, and Oozie components on all hosts:
zypper remove hadoop-pipes hadoop-sbin hadoop-native
zypper remove webhcat\* hcatalog\* oozie\*
Upgrade the following components:
zypper up "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "sqoop*" "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
zypper install webhcat-tar-hive webhcat-tar-pig
zypper up -r HDP-2.1.2.0
zypper install hive\*
zypper install oozie oozie-client
Verify that the components were upgraded:
rpm -qa | grep hadoop
rpm -qa | grep hive
rpm -qa | grep hcatalog
If components were not upgraded, upgrade them as follows:
yast --update hadoop hcatalog hive
On the Hive metastore host, stop the Hive Metastore service, if you have not done so already.
Note: Make sure that the Hive metastore database is running.
Upgrade the Hive metastore database schema.
$HIVE_HOME/bin/schematool -upgradeSchema -dbType <$databaseType> -userName <$connectionUserName> -passWord <$connectionPassWord>
The value for $databaseType can be derby, mysql, oracle, or postgres.
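For example, assuming a MySQL metastore database (the HIVE_HOME default, user name, and password below are placeholders), the invocation would look like the following. This sketch only prints the command so you can review it before running it against your metastore:

```shell
# Example schematool invocation assuming a MySQL metastore database.
# HIVE_HOME default, user name, and password are placeholders.
HIVE_HOME=${HIVE_HOME:-/usr/lib/hive}
databaseType="mysql"                 # derby, mysql, oracle, or postgres
connectionUserName="hive"            # placeholder
connectionPassWord="hivepassword"    # placeholder
cmd="$HIVE_HOME/bin/schematool -upgradeSchema -dbType $databaseType -userName $connectionUserName -passWord $connectionPassWord"
echo "$cmd"    # review before running
```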