Command Line Upgrade

Getting Ready to Upgrade

An HDP Stack upgrade involves upgrading from HDP 2.3 to HDP 2.5.5 and adding the new HDP 2.5.5 services. These instructions change your configurations.

[Note]Note

You must run kinit before running the commands as any particular user, so that the user has valid Kerberos credentials.
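A minimal sketch for the hdfs Service user, assuming the default HDP keytab location and a hypothetical EXAMPLE.COM realm; substitute your own keytab path and principal:

su - hdfs -c "kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM"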

Hardware recommendations

Although there is no single hardware requirement for installing HDP, there are some basic guidelines. The HDP packages for a complete installation of HDP 2.5 consume about 6.5 GB of disk space.

The first step is to ensure you keep a backup copy of your HDP 2.3 configurations.

[Note]Note

The su commands in this section use keywords to represent the Service user. For example, "hdfs" is used to represent the HDFS Service user. If you are using another name for your Service users, you need to substitute your Service user name in each of the su commands.

  1. Back up the HDP directories for any Hadoop components you have installed.

    The following is a list of all HDP configuration directories (a sample backup loop appears after the list):

    • /etc/hadoop/conf

    • /etc/hbase/conf

    • /etc/hive-hcatalog/conf

    • /etc/hive-webhcat/conf

    • /etc/accumulo/conf

    • /etc/hive/conf

    • /etc/pig/conf

    • /etc/sqoop/conf

    • /etc/flume/conf

    • /etc/mahout/conf

    • /etc/oozie/conf

    • /etc/hue/conf

    • /etc/knox/conf

    • /etc/zookeeper/conf

    • /etc/tez/conf

    • /etc/storm/conf

    • /etc/falcon/conf

    • /etc/slider/conf/

    • /etc/ranger/admin/conf, /etc/ranger/usersync/conf (If Ranger is installed, also back up install.properties for Ranger Admin, Ranger Usersync, and all of the plugins.)

    • Optional: Back up your userlogs directories, ${mapred.local.dir}/userlogs.
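    A minimal backup sketch, assuming a hypothetical ~/hdp23-backup target directory; extend the list to cover every component you have installed:

    mkdir -p ~/hdp23-backup
    for conf_dir in /etc/hadoop/conf /etc/hbase/conf /etc/hive/conf /etc/oozie/conf /etc/zookeeper/conf; do
      # -L dereferences the symlinks HDP typically uses for conf directories
      [ -d "$conf_dir" ] && cp -RL "$conf_dir" ~/hdp23-backup/$(echo "$conf_dir" | tr '/' '_')
    done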

  2. Oozie runs a periodic purge on the shared library directory. The purge can delete libraries that are needed by jobs that started before the upgrade began and that finish after the upgrade. To minimize the chance of job failures, you should extend the oozie.service.ShareLibService.purge.interval and oozie.service.ShareLibService.temp.sharelib.retention.days settings.

    Add the following content to the oozie-site.xml file prior to performing the upgrade:

    <property>
      <name>oozie.service.ShareLibService.purge.interval</name>
      <value>1000</value>
      <description>
        How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.
      </description>
    </property>

    <property>
      <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name>
      <value>1000</value>
      <description>
        ShareLib retention time in days.
      </description>
    </property>
  3. Stop all long-running applications deployed using Slider:

    su - yarn -c "/usr/hdp/current/slider-client/bin/slider list"

    For each application returned by the previous command, run:

    su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop <app_name>"
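    Alternatively, a hedged loop over all listed applications, assuming slider list prints one application name at the start of each data line (verify your Slider version's output format before scripting):

    su - yarn -c "/usr/hdp/current/slider-client/bin/slider list" | awk 'NR>1 {print $1}' | \
    while read app; do
      su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop $app"
    done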

  4. Stop all services (including MapReduce) except HDFS, ZooKeeper, and Ranger, and client applications deployed on HDFS.

    See Stopping HDP Services for more information.


    Accumulo

    /usr/hdp/current/accumulo-client/bin/stop-all.sh

    Knox

    su - knox -c "cd $GATEWAY_HOME && bin/gateway.sh stop"

    Falcon

    su - falcon "/usr/hdp/current/falcon-server/bin/falcon-stop"

    Oozie

    su - oozie -c "/usr/hdp/current/oozie-server/bin/oozie-stop.sh"

    WebHCat

    su - webhcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh stop"

    Hive

    Run this command on the Hive Metastore and Hive Server2 host machine:

    ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1

    Or you can use the following:

    killall -u hive -s 15 java

    HBase RegionServers

    su - hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /etc/hbase/conf stop regionserver"

    HBase Master host machine

    su - hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /etc/hbase/conf stop master"

    YARN & Mapred History

    Run this command on all NodeManagers:

    su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop nodemanager"

    Run this command on the History Server host machine:

    su - mapred -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh --config /etc/hadoop/conf stop historyserver"

    Run this command on the ResourceManager host machine(s):

    su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop resourcemanager"

    Run this command on the YARN Timeline Server host machine:

    su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver"

    Storm

    Kill each running topology:

    storm kill <topology-name>

    Then stop the supervisor service:

    sudo service supervisord stop

    Spark (History server)

    su - spark -c "/usr/hdp/current/spark-client/sbin/stop-history-server.sh"

  5. If you have the Hive component installed, back up the Hive Metastore database.

    The following instructions are provided for your convenience. For the latest backup instructions, see your database documentation.

    Table 2.1. Hive Metastore Database Backup and Restore

    MySQL

    Backup:

    mysqldump $dbname > $outputfilename.sql

    For example:

    mysqldump hive > /tmp/mydir/backup_hive.sql

    Restore:

    mysql $dbname < $inputfilename.sql

    For example:

    mysql hive < /tmp/mydir/backup_hive.sql

    PostgreSQL

    Backup:

    sudo -u $username pg_dump $databasename > $outputfilename.sql

    For example:

    sudo -u postgres pg_dump hive > /tmp/mydir/backup_hive.sql

    Restore:

    sudo -u $username psql $databasename < $inputfilename.sql

    For example:

    sudo -u postgres psql hive < /tmp/mydir/backup_hive.sql

    Oracle

    Backup (export the database):

    exp username/password@database full=yes file=output_file.dmp

    Restore (import the database):

    imp username/password@database file=input_file.dmp


  6. If you have the Oozie component installed, back up the Oozie metastore database.

    These instructions are provided for your convenience. Check your database documentation for the latest backup instructions.

    Table 2.2. Oozie Metastore Database Backup and Restore

    MySQL

    Backup:

    mysqldump $dbname > $outputfilename.sql

    For example:

    mysqldump oozie > /tmp/mydir/backup_oozie.sql

    Restore:

    mysql $dbname < $inputfilename.sql

    For example:

    mysql oozie < /tmp/mydir/backup_oozie.sql

    PostgreSQL

    Backup:

    sudo -u $username pg_dump $databasename > $outputfilename.sql

    For example:

    sudo -u postgres pg_dump oozie > /tmp/mydir/backup_oozie.sql

    Restore:

    sudo -u $username psql $databasename < $inputfilename.sql

    For example:

    sudo -u postgres psql oozie < /tmp/mydir/backup_oozie.sql

    Oracle

    Backup (export the database):

    exp username/password@database full=yes file=output_file.dmp

    Restore (import the database):

    imp username/password@database file=input_file.dmp


  7. Optional: Back up the Hue database.

    The following instructions are provided for your convenience. For the latest backup instructions, please see your database documentation. For database types that are not listed below, follow your vendor-specific instructions.

    Table 2.3. Hue Database Backup and Restore

    MySQL

    Backup:

    mysqldump $dbname > $outputfilename.sql

    For example:

    mysqldump hue > /tmp/mydir/backup_hue.sql

    Restore:

    mysql $dbname < $inputfilename.sql

    For example:

    mysql hue < /tmp/mydir/backup_hue.sql

    PostgreSQL

    Backup:

    sudo -u $username pg_dump $databasename > $outputfilename.sql

    For example:

    sudo -u postgres pg_dump hue > /tmp/mydir/backup_hue.sql

    Restore:

    sudo -u $username psql $databasename < $inputfilename.sql

    For example:

    sudo -u postgres psql hue < /tmp/mydir/backup_hue.sql

    Oracle

    Backup: connect to the Oracle database using sqlplus, then export the database. For example:

    exp username/password@database full=yes file=output_file.dmp

    Restore: import the database. For example:

    imp username/password@database file=input_file.dmp

    SQLite

    Backup:

    /etc/init.d/hue stop

    su $HUE_USER

    mkdir ~/hue_backup

    sqlite3 desktop.db .dump > ~/hue_backup/desktop.bak

    /etc/init.d/hue start

    Restore:

    /etc/init.d/hue stop

    cd /var/lib/hue

    mv desktop.db desktop.db.old

    sqlite3 desktop.db < ~/hue_backup/desktop.bak

    /etc/init.d/hue start


  8. Back up the Knox data/security directory.

    cp -RL /etc/knox/data/security ~/knox_backup

  9. Save the namespace by executing the following commands:

    su - hdfs

    hdfs dfsadmin -safemode enter

    hdfs dfsadmin -saveNamespace

    [Note]Note

    In secure mode, you must have Kerberos credentials for the hdfs user.

  10. Run the fsck command as the HDFS Service user and fix any errors. (The resulting file contains a complete block map of the file system.)

    su - hdfs -c "hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log"

    [Note]Note

    In secure mode, you must have Kerberos credentials for the hdfs user.
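    As a hedged spot-check (exact fsck report wording can vary by HDFS version), scan the log for problem indicators; no matches suggests a healthy file system:

    grep -iE "corrupt|missing|under replicated" dfs-old-fsck-1.log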

  11. Use the following instructions to compare status before and after the upgrade.

    The following commands must be executed by the user running the HDFS service (by default, the user is hdfs).

    1. Capture the complete namespace of the file system. (The following command does a recursive listing of the root file system.)

      [Important]Important

      Make sure the namenode is started.

      su - hdfs -c "hdfs dfs -ls -R / > dfs-old-lsr-1.log"

      [Note]Note

      In secure mode you must have Kerberos credentials for the hdfs user.

    2. Run the report command to create a list of DataNodes in the cluster.

      su - hdfs -c "hdfs dfsadmin -report > dfs-old-report-1.log"

    3. Optional: Copy all data, or at least any data that would be unrecoverable if lost, from HDFS to a local file system or to a backup instance of HDFS.

    4. Optional: Repeat substeps 1 and 2 after the upgrade and compare the results with this run to ensure that the state of the file system remained unchanged (a sample comparison appears after this list).
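    One hedged way to make that comparison, assuming you produce matching dfs-new-*.log files with the same commands after the upgrade:

    diff dfs-old-lsr-1.log dfs-new-lsr-1.log
    diff dfs-old-report-1.log dfs-new-report-1.log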

  12. Finalize any prior HDFS upgrade, if you have not done so already.

    su - hdfs -c "hdfs dfsadmin -finalizeUpgrade"

    [Note]Note

    In secure mode, you must have Kerberos credentials for the hdfs user.

  13. Stop remaining services (HDFS, ZooKeeper, and Ranger).

    See Stopping HDP Services for more information.


    HDFS

    On all DataNodes:

    If you are running a secure cluster, run the following command as root:

    /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode

    Otherwise:

    su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"

    If you are not running a highly available HDFS cluster, stop the Secondary NameNode by executing this command on the Secondary NameNode host machine:

    su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop secondarynamenode"

    On the NameNode host machine(s):

    su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop namenode"

    If you are running NameNode HA, stop the ZooKeeper Failover Controllers (ZKFC) by executing this command on the NameNode host machine:

    su - hdfs -c "/usr/hdp/current/hadoop-clinent/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop zkfc"

    If you are running NameNode HA, stop the JournalNodes by executing this command on the JournalNode host machines:

    su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop journalnode"

    ZooKeeper Host machines

    su - zookeeper -c "/usr/hdp/current/zookeeper-server/bin/zookeeper-server stop"

    Ranger (XA Secure)

    service ranger-admin stop

    service ranger-usersync stop

  14. Back up your NameNode metadata.

    [Note]Note

    We recommend that you back up the full /hadoop/hdfs/namenode path.

    1. Copy the following checkpoint files into a backup directory.

      The NameNode metadata is stored in a directory specified in the hdfs-site.xml configuration file under the configuration value "dfs.namenode.name.dir".

      For example, if the configuration value is:

      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop/hdfs/namenode</value>
      </property>

      Then, the NameNode metadata files are all housed inside the directory /hadoop/hdfs/namenode.
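      A minimal copy sketch, assuming the example directory above and a hypothetical ~/namenode-backup target:

      mkdir -p ~/namenode-backup
      cp -r /hadoop/hdfs/namenode/current ~/namenode-backup/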

    2. Store the layoutVersion of the NameNode.

      ${dfs.namenode.name.dir}/current/VERSION
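      For example, a hedged one-liner that records the layoutVersion alongside the backup, assuming the example metadata directory and the hypothetical ~/namenode-backup target from the previous step:

      grep layoutVersion /hadoop/hdfs/namenode/current/VERSION > ~/namenode-backup/layoutVersion.txt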

  15. Verify that edit logs in ${dfs.namenode.name.dir}/current/edits* are empty.

    1. Run: hdfs oev -i ${dfs.namenode.name.dir}/current/edits_inprogress_* -o edits.out

    2. Verify the edits.out file. It should contain only an OP_START_LOG_SEGMENT transaction. For example:

      <?xml version="1.0" encoding="UTF-8"?>
      <EDITS>
        <EDITS_VERSION>-56</EDITS_VERSION>
        <RECORD>
          <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
          <DATA>
            <TXID>5749</TXID>
          </DATA>
        </RECORD>
      </EDITS>
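      One hedged way to check for other transaction types without reading the XML by eye (no output means only OP_START_LOG_SEGMENT records are present):

      grep "<OPCODE>" edits.out | grep -v OP_START_LOG_SEGMENT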
    3. If edits.out has transactions other than OP_START_LOG_SEGMENT, run the following steps and then verify edit logs are empty.

      • Start the existing version NameNode.

      • Ensure there is a new FS image file.

      • Shut the NameNode down:

        hdfs dfsadmin -saveNamespace

  16. Rename or delete any paths that are reserved in the new version of HDFS.

    When upgrading to a new version of HDFS, it is necessary to rename or delete any paths that are reserved in the new version of HDFS. If the NameNode encounters a reserved path during upgrade, it prints an error such as the following:

    /.reserved is a reserved path and .snapshot is a reserved path 
    component in this version of HDFS. 
    
    Please rollback and delete or rename this path, or upgrade with the 
    -renameReserved key-value pairs option to automatically rename these 
    paths during upgrade.

    Specifying -upgrade -renameReserved [optional key-value pairs] causes the NameNode to automatically rename any reserved paths found during startup.

    For example, to rename all paths named .snapshot to .my-snapshot and change paths named .reserved to .my-reserved, specify -upgrade -renameReserved .snapshot=.my-snapshot,.reserved=.my-reserved.

    If no key-value pairs are specified with -renameReserved, the NameNode then suffixes reserved paths with:

    .<LAYOUT-VERSION>.UPGRADE_RENAMED

    For example: .snapshot.-51.UPGRADE_RENAMED.
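    For example, a hedged sketch of invoking the upgrade with renaming enabled; the exact startup mechanism varies by installation, and this assumes you can launch the NameNode directly as the hdfs user:

    su - hdfs -c "hdfs namenode -upgrade -renameReserved .snapshot=.my-snapshot,.reserved=.my-reserved"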

    [Note]Note

    We recommend that you perform a -saveNamespace before renaming paths (running -saveNamespace appears in a previous step in this procedure), because a data inconsistency can result if an edit log operation refers to the destination of an automatically renamed file.

    Also note that running -renameReserved renames all applicable existing files in the cluster. This may impact cluster applications.

  17. Upgrade the JDK on all nodes to JDK 7 or JDK 8 before upgrading HDP.

  18. Optional: If you plan to use the Falcon service, you must install the Berkeley DB JAR file on all Falcon server hosts on the cluster, prior to upgrading to HDP 2.5.3 or later.

    1. Log in to the Falcon server as user falcon.

      su - falcon

    2. Download the required Berkeley DB implementation file.

      wget -O je-5.0.73.jar http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar

    3. Copy the file to the Falcon library folder.

      cp je-5.0.73.jar /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib

    4. Set permissions on the file to owner=read/write, group=read, other=read.

      chmod 644 /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib/je-5.0.73.jar
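      Optionally, a quick check that the JAR is in place with the expected permissions (substitute your actual HDP version for <version>):

      ls -l /usr/hdp/<version>/falcon/webapp/falcon/WEB-INF/lib/je-5.0.73.jar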