2. Upgrade the 2.0 Stack to 2.2

  1. Upgrade the HDP repository on all hosts and replace the old repository file with the new file:

    [Important]Important

    Be sure to replace GA/2.2.x.x in the following instructions with the appropriate maintenance version, such as GA/2.2.0.0 for the HDP 2.2 GA release, or updates/2.2.4.2 for an HDP 2.2 maintenance release.

    • For RHEL/CentOS/Oracle Linux 6:

      wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.x.x/hdp.repo -O /etc/yum.repos.d/HDP.repo 
    • For SLES 11 SP3:

      wget -nv http://public-repo-1.hortonworks.com/HDP/suse11sp3/2.x/GA/2.2.x.x/hdp.repo -O /etc/zypp/repos.d/HDP.repo
    • For SLES 11 SP1:

      wget -nv http://public-repo-1.hortonworks.com/HDP/sles11sp1/2.x/GA/2.2.x.x/hdp.repo -O /etc/zypp/repos.d/HDP.repo
    • For UBUNTU12:

      wget -nv http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/GA/2.2.x.x/hdp.list -O /etc/apt/sources.list.d/HDP.list
    • For RHEL/CentOS/Oracle Linux 5:

      wget -nv http://public-repo-1.hortonworks.com/HDP/centos5/2.x/GA/2.2.x.x/hdp.repo -O /etc/yum.repos.d/HDP.repo 
    [Important]Important

    Make sure to download the HDP.repo file under /etc/yum.repos.d on ALL hosts.
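
    If you manage many hosts, the repository file can be fetched on all of them from a single shell loop. The following is a minimal sketch that assumes passwordless SSH as root and a hypothetical hosts.txt file listing one hostname per line; adjust the URL for your operating system:

    # Sketch only: hosts.txt is a hypothetical host list, one hostname per line.
    while read -r host; do
      ssh root@"$host" 'wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.x.x/hdp.repo -O /etc/yum.repos.d/HDP.repo'
    done < hosts.txt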

  2. Update the Stack version in the Ambari Server database. On the Ambari Server host, use the following command to update the Stack version to HDP-2.2:

    ambari-server upgradestack HDP-2.2
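
    To confirm that the Stack version was updated, one option is to query the Ambari REST API. This sketch assumes default admin credentials and port 8080, run on the Ambari Server host; substitute your own values:

    # The stack version appears in the cluster resource returned by the API.
    curl -u admin:admin http://localhost:8080/api/v1/clusters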

  3. Back up the files in the following directory on the Oozie server host and make sure that all files, including the *site.xml files, are copied.

    mkdir oozie-conf-bak
    cp -R /etc/oozie/conf/* oozie-conf-bak
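
    To confirm that the backup is complete, one quick check is a recursive comparison of the two directories:

    # No output means the backup matches the source configuration directory.
    diff -r /etc/oozie/conf oozie-conf-bak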

  4. Remove the old Oozie directories on all Oozie server and client hosts (a sketch for running this across multiple hosts follows the list).

    • rm -rf /etc/oozie/conf

    • rm -rf /usr/lib/oozie/

    • rm -rf /var/lib/oozie/
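
    If you have many Oozie client hosts, the removal can be run in parallel with pdsh (also referenced later in this guide). This is a sketch only; the host names are placeholders for your Oozie server and client hosts:

    # Sketch only: replace the host list with your Oozie server and client hosts.
    pdsh -w oozieserver.example.com,client[01-10].example.com "rm -rf /etc/oozie/conf /usr/lib/oozie /var/lib/oozie"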

  5. Upgrade the Stack on all Ambari Agent hosts.

    [Note]Note

    Identify the HDP components installed on each host. Use Ambari Web to view the components on each host in your cluster.

    Based on the HDP components installed, tailor the following upgrade commands for each host to upgrade only components residing on that host. For example, if you know that a host has no HBase service or client packages installed, then you can adapt the command to not include HBase, as follows:

    yum install "collectd*""gccxml*""pig*""hadoop*""sqoop*""zookeeper*""hive*"

    [Important]Important

    If you are writing to multiple systems using a script, do not use quotation marks in the run command. You can use quotation marks with pdsh -y.
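
    One way to see which HDP packages are installed on a given host, so that you can tailor the commands below, is to query the package manager directly. For example, on an RPM-based system:

    # List installed packages for the common HDP components on this host.
    rpm -qa | grep -E "hadoop|hive|hbase|oozie|pig|sqoop|zookeeper|hue|flume"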

    • For RHEL/CentOS/Oracle Linux:

      • On all hosts, clean the yum repository.

        yum clean all

      • Remove all components that you want to upgrade; at a minimum, remove the WebHCat, HCatalog, and Oozie components. This command uninstalls the HDP 2.0 component bits. It leaves the user data and metadata, but removes your configurations.

        yum erase "hadoop*""webhcat*""hcatalog*""oozie*""pig*""hdfs*""sqoop*"
        "zookeeper*""hbase*""hive*""phoenix*""accumulo*""mahout*""hue*""flume*"
        "hdp_mon_nagios_addons"
      • Install the following components:

        yum install "hadoop_2_2_x_0_*""oozie_2_2_x_0_*""pig_2_2_x_0_*""sqoop_2_2_x_0_*"
        "zookeeper_2_2_x_0_*""hbase_2_2_x_0_*""hive_2_2_x_0_*""flume_2_2_x_0_*"
        "phoenix_2_2_x_0_*""accumulo_2_2_x_0_*""mahout_2_2_x_0_*"
        
        rpm -e --nodeps hue-shell 
        yum install hue hue-common hue-beeswax hue-hcatalog hue-pig hue-oozie
      • Verify that the components were upgraded.

        yum list installed | grep HDP-<old-stack-version-number>

        Nothing should appear in the returned list.

    • For SLES:

      • On all hosts, clean the zypper repository.

        zypper clean --all

      • Remove WebHCat, HCatalog, and Oozie components. This command uninstalls the HDP 2.0 component bits. It leaves the user data and metadata, but removes your configurations.

        zypper remove "hadoop*""webhcat*""hcatalog*""oozie*""pig*""hdfs*""sqoop*"
        "zookeeper*""hbase*""hive*""phoenix*""accumulo*""mahout*""hue*""flume*"
        "hdp_mon_nagios_addons"
      • Install the following components:

        zypper install "hadoop\_2_2_x_0_*""oozie\_2_2_x_0_*""pig\_2_2_x_0_*""sqoop\_2_2_x_0_*"
        "zookeeper\_2_2_x_0_*""hbase\_2_2_x_0_*""hive\_2_2_x_0_*""flume\_2_2_x_0_*"
        "phoenix\_2_2_x_0_*""accumulo\_2_2_x_0_*""mahout\_2_2_x_0_*" 
        rpm -e --nodeps hue-shell 
        zypper install hue hue-common hue-beeswax hue-hcatalog hue-pig hue-oozie
      • Verify that the components were upgraded.

        rpm -qa | grep hadoop && rpm -qa | grep hive && rpm -qa | grep hcatalog

        No 2.0 components should appear in the returned list.

      • If components were not upgraded, upgrade them as follows:

        yast --update hadoop hcatalog hive

  6. Symlink directories, using hdp-select.

    [Warning]Warning

    To prevent version-specific directory issues for your scripts and updates, Hortonworks provides hdp-select, a script that symlinks directories to /usr/hdp/current and modifies paths for configuration directories.

    Check that the hdp-select package is installed:

    rpm -qa | grep hdp-select

    You should see: hdp-select-2.2.4.2-2.el6.noarch

    If not, then run:

    yum install hdp-select

    Run hdp-select as root, on every node. From /usr/bin, run:

    hdp-select set all 2.2.x.x-<$version>

    where <$version> is the build number. For the HDP 2.2.4.2 release, <$version> = 2.
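
    Because hdp-select must run on every node, you may prefer to run it in parallel. This is a sketch using pdsh; the host list is a placeholder, and the version string must match your build (2.2.4.2-2 in the example above):

    # Sketch only: replace the host list and the version string with your values.
    pdsh -w node[01-20].example.com "hdp-select set all 2.2.4.2-2"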

  7. Verify that all components are on the new version. The output of this command should be empty:

    hdp-select status | grep -v 2\.2\.x\.x | grep -v None
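
    You can also spot-check an individual component, assuming the component name used by hdp-select (hadoop-client is shown here as an example):

    # Print the version currently selected for a single component.
    hdp-select status hadoop-client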

  8. If you are using Hue, you must upgrade Hue manually. For more information, see Configure and Start Hue.

  9. On the Hive Metastore database host, stop the Hive Metastore service, if you have not done so already. Make sure that the Hive Metastore database is running.
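
    For example, if your Metastore uses MySQL (substitute the equivalent check for Oracle or PostgreSQL), you can confirm that the database service is up before proceeding:

    # Example check for a MySQL-backed Metastore on RHEL/CentOS.
    service mysqld status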

  10. Upgrade the Hive metastore database schema from v12 to v14, using the following instructions:

    • Set java home:

      export JAVA_HOME=/path/to/java

    • Copy (overwrite) the old Hive configurations to the new configuration directory:

      cp -R /etc/hive/conf.server/* /etc/hive/conf/

    • Copy the JDBC connector to /usr/hdp/2.2.x.x-<$version>/hive/lib, if it is not there yet.

    • <HIVE_HOME>/bin/schematool -upgradeSchema -dbType <databaseType>

      where <HIVE_HOME> is the Hive installation directory.

      For example, on the Hive Metastore host:

      /usr/hdp/2.2.x.x-<$version>/hive/bin/schematool -upgradeSchema -dbType <databaseType>

      where <$version> is the 2.2.x build number and <databaseType> is derby, mysql, oracle, or postgres.
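
      As a concrete sketch, the following assumes a MySQL-backed Metastore and the HDP 2.2.4.2 build (2.2.4.2-2); the JAVA_HOME path is an example, so substitute your own JDK location, build number, and database type:

      # Assumed values: JDK location, build number 2.2.4.2-2, and a MySQL Metastore.
      export JAVA_HOME=/usr/java/default
      /usr/hdp/2.2.4.2-2/hive/bin/schematool -upgradeSchema -dbType mysql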