Apache Ambari Upgrade

Prepare to Upgrade

Make sure that you have reviewed and completed all prerequisites described in previous chapters.

It is strongly recommended that you perform backups of your databases before beginning upgrade.

  • Ambari database

  • Hive Metastore database

  • Oozie Server database

  • Ranger Admin database

  • Ranger Audit database

[Important]Important

If you use MySQL 5.6, you must use the InnoDB engine.

If your current MySQL engine is MyISAM, you must migrate to InnoDB before upgrading to Ambari 2.5.

  • Export your current MySQL (MyISAM) data.

  • Drop the MySQL (MyISAM) database schema.

  • Create a MySQL (InnoDB) database.

  • Import your data into the MySQL (InnoDB) database instance.

For example:

mysqldump -u ambari -p ambari > /tmp/ambari.original.mysql
cp /tmp/ambari.original.mysql /tmp/ambari.innodb.mysql
sed -i 's/MyISAM/INNODB/g' /tmp/ambari.innodb.mysql
mysql -u ambari -p ambari
DROP DATABASE ambari;
CREATE DATABASE ambari;
mysql -u ambari "-pbigdata" --force ambari < /tmp/ambari.innodb.mysql

Please contact Hortonworks customer support if you have issues running your cluster using MySQL 5.6.

Checkpoint HDFS

  1. Perform the following steps on the NameNode host. If you are configured for NameNode HA, perform the following steps on the Active NameNode. You can locate the Active NameNode from Ambari Web > Services > HDFS in the Summary area.

  2. Check the NameNode directory to ensure that there is no snapshot of any prior HDFS upgrade. Specifically, using Ambari Web, browse to Services > HDFS > Configs, and examine the dfs.namenode.name.dir in the NameNode Directories property. Make sure that only a /current directory and no /previous directory exists on the NameNode host.
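This check can be sketched from the shell; the NameNode directory path below is a placeholder, so substitute your cluster's dfs.namenode.name.dir value:

```shell
# Placeholder path: substitute the value of dfs.namenode.name.dir
# from Ambari Web > Services > HDFS > Configs.
NN_DIR=/hadoop/hdfs/namenode

# A /previous directory indicates an unfinalized prior upgrade.
if [ -d "${NN_DIR}/previous" ]; then
  echo "WARNING: ${NN_DIR}/previous exists; finalize the prior upgrade first."
else
  echo "OK: no /previous directory found."
fi
```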

  3. Create the following log and other files. Creating these logs allows you to check the integrity of the file system after the Stack upgrade.

    As the HDFS user (su -l <HDFS_USER>, where <HDFS_USER> is the HDFS service user, for example, hdfs), run the following:

    • Run fsck with the following flags and send the results to a log. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.

      hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log
    • Create a list of all the DataNodes in the cluster.

      hdfs dfsadmin -report > dfs-old-report-1.log
    • Optional: Capture the complete namespace of the file system. The following command does a recursive listing of the root file system:

      hdfs dfs -ls -R / > dfs-old-lsr-1.log
    • Optional: Copy all unrecoverable data stored in HDFS to a local file system or to a backup instance of HDFS.

  4. Save the namespace. As the HDFS user (su -l <HDFS_USER>), put the cluster in Safe Mode and save the namespace:

    hdfs dfsadmin -safemode enter
    hdfs dfsadmin -saveNamespace
    [Note]Note

    In a highly-available NameNode configuration, the command

    hdfs dfsadmin -saveNamespace

    sets a checkpoint in the first NameNode specified in the configuration, in dfs.ha.namenodes.[nameserviceID].

    You can also use the

    dfsadmin -fs

    option to specify the NameNode to which to connect. For example, to force a checkpoint in NameNode2:

    hdfs dfsadmin -fs hdfs://namenode2-hostname:namenode2-port -saveNamespace
  5. Copy the checkpoint files located in ${dfs.namenode.name.dir}/current into a backup directory.

    [Note]Note

    In a highly-available NameNode configuration, the location of the checkpoint depends on where the

    saveNamespace

    command is sent, as defined in the preceding step.
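    The copy in this step can be sketched as follows; both paths are placeholders, so substitute your dfs.namenode.name.dir value and a backup location with sufficient free space:

```shell
# Placeholder paths: substitute your own values.
NN_DIR=/hadoop/hdfs/namenode
BACKUP_DIR=/tmp/hdfs-namenode-backup

# Copy the checkpoint files only if the source directory exists.
if [ -d "${NN_DIR}/current" ]; then
  mkdir -p "${BACKUP_DIR}"
  cp -r "${NN_DIR}/current" "${BACKUP_DIR}/"
fi
```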

  6. Store the layoutVersion for the NameNode in a backup directory. The layoutVersion is recorded in ${dfs.namenode.name.dir}/current/VERSION, where ${dfs.namenode.name.dir} is the value of the NameNode directories configuration property.

    This file is used later to verify that the layout version has been upgraded.
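    Backing up the VERSION file can be sketched as follows; the paths are placeholders, so substitute your dfs.namenode.name.dir value and backup location:

```shell
# Placeholder paths: substitute your own values.
NN_DIR=/hadoop/hdfs/namenode
BACKUP_DIR=/tmp/hdfs-namenode-backup

if [ -f "${NN_DIR}/current/VERSION" ]; then
  mkdir -p "${BACKUP_DIR}"
  cp "${NN_DIR}/current/VERSION" "${BACKUP_DIR}/VERSION.pre-upgrade"
  # Print the layoutVersion line for later comparison.
  grep '^layoutVersion' "${NN_DIR}/current/VERSION"
fi
```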

  7. As the HDFS user (su -l <HDFS_USER>), take the NameNode out of Safe Mode.

    hdfs dfsadmin -safemode leave
  8. Finalize any prior HDFS upgrade, if you have not done so already. As the HDFS user (su -l <HDFS_USER>), run the following:

    hdfs dfsadmin -finalizeUpgrade
[Important]Important

In HDP-2.5, Audit to DB is no longer supported in Apache Ranger. If you are upgrading from HDP-2.3 or HDP-2.4 to HDP-2.5, you should use Apache Solr for Ranger audits. You should also migrate your audit logs from DB to Solr.

Next Steps

Register and Install Target Version.

More Information

Getting Ready to Upgrade Ambari and HDP

Using Apache Solr for Ranger audits

Migrate your audit logs from DB to Solr