1. Prepare the Stack for Upgrade

Perform steps 1 through 8 on the NameNode host. In an HA NameNode configuration, run these steps on the primary NameNode, which is the first NameNode configured in hdfs-site.xml.
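
If you are not sure which NameNode is listed first, you can query the configuration directly with hdfs getconf. A minimal sketch, assuming a nameservice ID of mycluster and NameNode IDs nn1,nn2 (substitute your own values):

    hdfs getconf -confKey dfs.ha.namenodes.mycluster
    # prints the NameNode IDs in configured order, for example: nn1,nn2
    hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn1
    # prints the host and port of the first (primary) NameNode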

  1. If the Oozie service is installed in your cluster, list all current jobs.

    oozie jobs -oozie http://localhost:11000/oozie -len 100 -filter status=RUNNING
  2. Stop all jobs in a RUNNING or SUSPENDED state on your Oozie server host. For example:

    oozie job -oozie <your.oozie.server.host>:11000/oozie -kill <oozie.job.id>
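
    To kill every RUNNING or SUSPENDED job in one pass, a minimal shell sketch; the awk pattern assumes the default Oozie job-ID format (IDs contain "-oozie-"), and -len 100 assumes at most 100 jobs per state:

      for s in RUNNING SUSPENDED; do
        oozie jobs -oozie http://<your.oozie.server.host>:11000/oozie -len 100 -filter status=$s |
          awk '$1 ~ /-oozie-/ {print $1}' |
          while read id; do
            oozie job -oozie http://<your.oozie.server.host>:11000/oozie -kill "$id"
          done
      done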
  3. Use the Services view on the Ambari Web UI to stop all services except HDFS and ZooKeeper. Also stop any client programs that access HDFS.

  4. Finalize any prior HDFS upgrade, if you have not done so already.

    su -l <HDFS_USER>
    hdfs dfsadmin -finalizeUpgrade

    where <HDFS_USER> is the HDFS Service user. For example, hdfs.

  5. Check the NameNode directory to ensure that there is no snapshot of any prior HDFS upgrade.

    Specifically, examine the $dfs.namenode.name.dir (or $dfs.name.dir) directory on the NameNode host. Make sure that only a "current" directory exists, and that there is no "previous" directory, on the NameNode host.
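
    A quick way to check, assuming a single local name directory (the property may hold a comma-separated list of file:// URIs, so adjust accordingly):

      NAME_DIR=$(hdfs getconf -confKey dfs.namenode.name.dir | cut -d, -f1 | sed 's|^file://||')
      ls "$NAME_DIR"
      # expect a "current" entry and no "previous" entry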

  6. Create the following logs and other files that let you check the integrity of the file system, post-upgrade.

    su -l <HDFS_USER>

    where <HDFS_USER> is the HDFS Service user. For example, hdfs.

    1. Run fsck with the following flags and send the results to a log. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.

      hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log 
    2. Optional: Capture the complete namespace of the file system. The following command does a recursive listing of the root file system.

      hdfs dfs -ls -R / > dfs-old-lsr-1.log 
    3. Create a list of all the DataNodes in the cluster.

      hdfs dfsadmin -report > dfs-old-report-1.log
    4. Optional: Copy all unrecoverable data stored in HDFS to a local file system or to a backup instance of HDFS, as sketched below.
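
      A minimal sketch, assuming hypothetical paths /critical/data and /backups/pre-upgrade and a hypothetical backup cluster at backup-namenode:8020:

        hdfs dfs -copyToLocal /critical/data /backups/pre-upgrade
        # or, to a backup HDFS instance:
        hadoop distcp /critical/data hdfs://backup-namenode:8020/pre-upgrade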

  7. Save the namespace. You must be the HDFS service user to do this and you must put the cluster in Safe Mode.

    hdfs dfsadmin -safemode enter
    hdfs dfsadmin -saveNamespace
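
    To confirm that Safe Mode actually took effect before saving the namespace, you can query it:

      hdfs dfsadmin -safemode get
      # Safe mode is ON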
    Note:

    In an HA NameNode configuration, the command hdfs dfsadmin -saveNamespace performs a checkpoint on the first NameNode specified in the configuration, in dfs.ha.namenodes.[nameservice ID]. You can also use the dfsadmin -fs option to specify which NameNode to connect to. For example, to force a checkpoint on NameNode 2:

    hdfs dfsadmin -fs hdfs://namenode2-hostname:namenode2-port -saveNamespace
  8. Copy the following checkpoint files into a backup directory. You can find the directory by using the Services view in Ambari Web: select the HDFS service, open the Configs tab, and in the NameNode section look up the NameNode Directories property. The directory will be on your primary NameNode host.

    $dfs.name.dir/current
    Note:

    In an HA NameNode configuration, the location of the checkpoint depends on where the saveNamespace command is sent, as defined in the preceding step.
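
    A minimal backup sketch; /hadoop/hdfs/namenode is a hypothetical value of the NameNode Directories property (check your own), and /backups/pre-upgrade a hypothetical destination:

      cp -r /hadoop/hdfs/namenode/current /backups/pre-upgrade/namenode-current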

  9. Store the layoutVersion of the NameNode. Make a copy of the file at $dfs.name.dir/current/VERSION, where $dfs.name.dir is the value of the NameNode Directories configuration parameter. You will use this file later to verify that the layout version has been upgraded.
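
    For example, using the same hypothetical paths as the preceding step:

      cp /hadoop/hdfs/namenode/current/VERSION /backups/pre-upgrade/VERSION.pre-upgrade
      grep layoutVersion /backups/pre-upgrade/VERSION.pre-upgrade
      # prints a line such as layoutVersion=-56 (the exact value depends on your HDFS version)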

  10. Stop HDFS. Make sure all services in the cluster are completely stopped.

  11. On the Hive metastore host, stop the Hive metastore service, if you have not done so already.

    Note:

    Make sure that the Hive metastore database is running. For more information about administering the Hive metastore database, see the Hive Metastore Administrator documentation.
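
    For example, if your metastore runs on MySQL, a quick liveness check might be (mysqladmin ships with the standard MySQL client tools; use your own credentials):

      mysqladmin -u root -p status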

  12. If you are upgrading Hive and Oozie, back up the Hive and Oozie metastore databases on the Hive and Oozie database host machines, respectively.

    1. Optional: Back up the Hive Metastore database.

      Note:

      These instructions are provided for your convenience. Please check your database documentation for the latest backup instructions.

      Table 2.1. Hive Metastore Database Backup and Restore

      MySQL
        Backup:  mysqldump $dbname > $outputfilename.sql
                 For example: mysqldump hive > /tmp/mydir/backup_hive.sql
        Restore: mysql $dbname < $inputfilename.sql
                 For example: mysql hive < /tmp/mydir/backup_hive.sql

      Postgres
        Backup:  sudo -u $username pg_dump $databasename > $outputfilename.sql
                 For example: sudo -u postgres pg_dump hive > /tmp/mydir/backup_hive.sql
        Restore: sudo -u $username psql $databasename < $inputfilename.sql
                 For example: sudo -u postgres psql hive < /tmp/mydir/backup_hive.sql

      Oracle
        Backup:  Connect to the Oracle database using sqlplus, then export the database:
                 exp username/password@database full=yes file=output_file.dmp
        Restore: Import the database:
                 imp username/password@database file=input_file.dmp

    2. Optional: Back up the Oozie Metastore database.

      Note:

      These instructions are provided for your convenience. Please check your database documentation for the latest backup instructions.

      Table 2.2. Oozie Metastore Database Backup and Restore

      MySQL
        Backup:  mysqldump $dbname > $outputfilename.sql
                 For example: mysqldump oozie > /tmp/mydir/backup_oozie.sql
        Restore: mysql $dbname < $inputfilename.sql
                 For example: mysql oozie < /tmp/mydir/backup_oozie.sql

      Postgres
        Backup:  sudo -u $username pg_dump $databasename > $outputfilename.sql
                 For example: sudo -u postgres pg_dump oozie > /tmp/mydir/backup_oozie.sql
        Restore: sudo -u $username psql $databasename < $inputfilename.sql
                 For example: sudo -u postgres psql oozie < /tmp/mydir/backup_oozie.sql

  13. On the Ambari Server host, stop Ambari Server and confirm that it is stopped.

    ambari-server stop
    ambari-server status
  14. Stop all Ambari Agents. On every host in your cluster known to Ambari:

    ambari-agent stop
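
    If Ambari manages many hosts, a minimal sketch for stopping the agent everywhere over SSH; hosts.txt is a hypothetical file listing one hostname per line:

      while read host; do
        ssh "$host" ambari-agent stop
      done < hosts.txt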