Rolling Back a CDH 5 to CDH 6 Upgrade

You can roll back an upgrade from CDH 5 to CDH 6. The rollback restores your CDH cluster to the state it was in before the upgrade, including Kerberos and TLS/SSL configurations.

In a typical upgrade, you first upgrade Cloudera Manager from version 5.x to version 6.x, and then you use the upgraded version of Cloudera Manager 6 to upgrade CDH 5 to CDH 6. (See Upgrading the CDH Cluster.) If you want to roll back this upgrade, follow these steps to roll back your cluster to its state prior to the upgrade.

You can roll back to CDH 5 after upgrading to CDH 6 only if the HDFS upgrade has not been finalized.

Review Limitations

The rollback procedure has the following limitations:
  • HDFS – If you have finalized the HDFS upgrade, you cannot roll back your cluster.
  • Configuration changes, including the addition of new services or roles after the upgrade, are not retained after rolling back Cloudera Manager.

    Cloudera recommends that you not make configuration changes or add new services and roles until you have finalized the HDFS upgrade and no longer require the option to roll back your upgrade.

  • HBase – If your cluster is configured to use HBase replication, data written to HBase after the upgrade might not be replicated to peers when you start your rollback. This topic does not describe how to determine which, if any, peers have the replicated data and how to roll back that data. For more information about HBase replication, see HBase Replication.
  • Sqoop 1 – Because of the changes introduced in Sqoop metastore logic, the metastore database that is created by the CDH 6.x version of Sqoop cannot be used by earlier versions.
  • Sqoop 2 – As described in the upgrade process, Sqoop 2 had to be stopped and deleted before the upgrade and therefore is not available after the rollback.
  • Kafka – Once the Kafka log format and protocol version configurations (the inter.broker.protocol.version and log.message.format.version properties) are set to the new version (or left blank, which means to use the latest version), Kafka rollback is not possible.
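The Kafka constraint above can be pre-checked before you commit to a rollback. The following is a minimal sketch (the helper name and the example version strings are illustrative; read the actual values from the Kafka Broker configuration in Cloudera Manager):

```shell
# Returns 0 (rollback possible) only when both properties are still pinned
# to the old pre-upgrade version; a blank value means "use the latest",
# which blocks rollback.
kafka_rollback_possible() {
  local protocol_version="$1" log_format_version="$2" old_version="$3"
  [ -n "$protocol_version" ] && [ -n "$log_format_version" ] \
    && [ "$protocol_version" = "$old_version" ] \
    && [ "$log_format_version" = "$old_version" ]
}

kafka_rollback_possible "0.10.2" "0.10.2" "0.10.2" && echo "rollback possible"
kafka_rollback_possible "" "0.10.2" "0.10.2" || echo "rollback NOT possible"
```

If either property is blank, the brokers are already writing the new log format and the Kafka portion of the rollback is no longer possible.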

Stop the Cluster

  1. On the Home > Status tab, click to the right of the cluster name and select Stop.
  2. Click Stop in the confirmation screen. The Command Details window shows the progress of stopping services.

    When All services successfully stopped appears, the task is complete and you can close the Command Details window.
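The same stop action can also be scripted against the Cloudera Manager REST API. The sketch below only builds the request URL; the port (7180), API version (v19), and host/cluster names are assumptions you should adjust for your deployment:

```shell
# Build the CM API URL that issues a cluster stop command.
# Port 7180 and API version v19 are assumptions for illustration.
cm_stop_cluster_url() {
  local cm_host="$1" cluster="$2"
  echo "http://${cm_host}:7180/api/v19/clusters/${cluster}/commands/stop"
}

# Example invocation (requires admin credentials):
#   curl -u admin:admin -X POST "$(cm_stop_cluster_url cm01.example.com Cluster1)"
cm_stop_cluster_url cm01.example.com Cluster1
```

Cluster names containing spaces must be URL-encoded before being placed in the path.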

(Parcels) Downgrade the Software

Follow these steps only if your cluster was upgraded using Cloudera parcels.

  1. Log in to the Cloudera Manager Admin Console.
  2. Select Hosts > Parcels.

    A list of parcels displays.

  3. Locate the CDH 5 parcel and click Activate. (This automatically deactivates the CDH 6 parcel.) See Activating a Parcel for more information. If the parcel is not available, use the Download button to download the parcel.
  4. If you include any additional components in your cluster, such as Search or Impala, click Activate for those parcels.

Stop Cloudera Manager

  1. Stop the Cloudera Management Service.
    1. Log in to the Cloudera Manager Admin Console.
    2. Select Clusters > Cloudera Management Service.
    3. Select Actions > Stop.
  2. Stop the Cloudera Manager Server.
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl stop cloudera-scm-server
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-server stop
  3. Hard stop the Cloudera Manager agents. Run the following command on all hosts:
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl stop supervisord
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-agent hard_stop
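Because every stop step above forks on the init system, a small dispatcher can keep the commands in one place. This is a sketch that prints the appropriate command rather than executing it, so you can review it before running it with sudo:

```shell
# Print the stop command appropriate to this host's init system.
# Pass "server" or "agent"; detection is via `command -v systemctl`.
cm_stop_cmd() {
  local role="$1"
  if command -v systemctl >/dev/null 2>&1; then
    case "$role" in
      server) echo "systemctl stop cloudera-scm-server" ;;
      agent)  echo "systemctl stop supervisord" ;;   # hard stop of the agent
    esac
  else
    case "$role" in
      server) echo "service cloudera-scm-server stop" ;;
      agent)  echo "service cloudera-scm-agent hard_stop" ;;
    esac
  fi
}

# Review the output, then execute with: sudo $(cm_stop_cmd server)
cm_stop_cmd server
cm_stop_cmd agent
```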

(Packages) Downgrade the Software

Follow these steps only if your cluster was upgraded using packages.

Run Package Commands

  1. Log in as a privileged user to all hosts in your cluster.
  2. Back up the repository directory. You can create a top-level backup directory and an environment variable to reference the directory using the following commands. You can also substitute another directory path in the backup commands below:
    export CM_BACKUP_DIR="`date +%F`-CM"
    mkdir -p $CM_BACKUP_DIR
    RHEL / CentOS
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/yum.repos.d
    SLES
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/zypp/repos.d
    Debian / Ubuntu
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/apt/sources.list.d
  3. Restore the CDH 5 repository directory from the backup taken before upgrading to CDH 6. For example:
    tar -xf CM6CDH5/repository.tar -C CM6CDH5/
    RHEL
    rm -rf /etc/yum.repos.d/*
    cp -rp CM6CDH5/etc/yum.repos.d/* /etc/yum.repos.d/
    SLES
    rm -rf /etc/zypp/repos.d/*
    cp -rp CM6CDH5/etc/zypp/repos.d/* /etc/zypp/repos.d/
    Debian / Ubuntu
    rm -rf /etc/apt/sources.list.d/*
    cp -rp CM6CDH5/etc/apt/sources.list.d/* /etc/apt/sources.list.d/
  4. Run the following command to uninstall CDH 6 and reinstall the CDH 5 packages:
    RHEL / CentOS
    sudo yum clean all
    sudo yum remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
    sudo yum -y install --setopt=timeout=180 bigtop-utils solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
    SLES
    sudo zypper clean --all
    sudo zypper remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
    sudo zypper install solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
    Debian / Ubuntu
    sudo apt-get update
    sudo apt-get remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
    sudo apt-get update
    sudo apt-get install solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
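Before removing any packages, it is prudent to confirm that the repository backup from step 2 really contains your repo files. A sketch against a throwaway directory (the file name cloudera-cdh5.repo is illustrative):

```shell
# Create a scratch "repo" directory, archive it the same way as the backup
# step, then list the tarball to confirm the expected file is inside.
scratch=$(mktemp -d)
mkdir -p "$scratch/yum.repos.d"
echo "[cloudera-cdh5]" > "$scratch/yum.repos.d/cloudera-cdh5.repo"

tar -cf "$scratch/repository.tar" -C "$scratch" yum.repos.d
listing=$(tar -tf "$scratch/repository.tar")
printf '%s\n' "$listing" | grep -q 'cloudera-cdh5.repo' && echo "backup contains the repo file"
rm -rf "$scratch"
```

Run the same `tar -tf` check against the real `$CM_BACKUP_DIR/repository.tar` before proceeding.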

Restore Cloudera Manager Databases

Restore Cloudera Manager Server

Use the backup of CDH that was taken before the upgrade to restore Cloudera Manager Server files and directories. Substitute the path to your backup directory for cm6_cdh5 in the following steps:

  1. On the host where the Event Server role is configured to run, restore the Event Server directory from the CM 6/CDH 5 backup.
    rm -rf /var/lib/cloudera-scm-eventserver/*
    cp -rp /var/lib/cloudera-scm-eventserver_cm6_cdh5/* /var/lib/cloudera-scm-eventserver/
  2. Remove the Agent runtime state. Run the following command on all hosts:
    rm -rf /var/run/cloudera-scm-agent /var/lib/cloudera-scm-agent/response.avro

    This command may return a message similar to: rm: cannot remove ‘/var/run/cloudera-scm-agent/process’: Device or resource busy. You can ignore this message.

  3. On the host where the Service Monitor is running, restore the Service Monitor directory:
    rm -rf /var/lib/cloudera-service-monitor/*
    cp -rp /var/lib/cloudera-service-monitor_cm6_cdh5/* /var/lib/cloudera-service-monitor/
  4. On the host where the Host Monitor is running, restore the Host Monitor directory:
    rm -rf /var/lib/cloudera-host-monitor/*
    cp -rp /var/lib/cloudera-host-monitor_cm6_cdh5/* /var/lib/cloudera-host-monitor/
  5. Restore the Cloudera Navigator storage directory from the CM 6/CDH 5 backup.
    rm -rf /var/lib/cloudera-scm-navigator/*
    cp -rp /var/lib/cloudera-scm-navigator_cm6_cdh5/* /var/lib/cloudera-scm-navigator/
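Steps 1, 3, 4, and 5 all follow the same wipe-and-copy pattern, which can be factored into a single helper. A sketch (the `_cm6_cdh5` suffix matches the backup naming used in the steps above):

```shell
# Wipe the live directory and repopulate it from the backup, preserving
# ownership and timestamps (-p). ${target:?} aborts if the target variable
# is unset or empty, guarding against an accidental `rm -rf /*`.
restore_dir() {
  local backup="$1" target="$2"
  rm -rf "${target:?}"/*
  cp -rp "$backup"/. "$target"/
}

# Example (run as root on the relevant host):
#   restore_dir /var/lib/cloudera-scm-eventserver_cm6_cdh5 /var/lib/cloudera-scm-eventserver
```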

Start Cloudera Manager

  1. Log in to the Cloudera Manager Server host.
    ssh my_cloudera_manager_server_host
  2. Start the Cloudera Manager Server.
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl start cloudera-scm-server
    If the Cloudera Manager Server starts without errors, no response displays.
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-server start
    You should see the following:
    Starting cloudera-scm-server: [ OK ]
  3. Start the Cloudera Manager Agent.

    Run the following commands on all cluster hosts:

    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl start cloudera-scm-agent
    If the agent starts without errors, no response displays.
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-agent start
    You should see the following:
    Starting cloudera-scm-agent: [ OK ]
  4. Start the Cloudera Management Service.
    1. Log in to the Cloudera Manager Admin Console.
    2. Select Clusters > Cloudera Management Service.
    3. Select Actions > Start.

Roll Back ZooKeeper

  1. Using the backup of ZooKeeper that you created when backing up your CDH 5.x cluster, restore the contents of the dataDir on each ZooKeeper server. These files are located in a directory specified by the dataDir property in the ZooKeeper configuration. The default location is /var/lib/zookeeper. For example:
    rm -rf /var/lib/zookeeper/*
    cp -rp /var/lib/zookeeper_cm6_cdh5/* /var/lib/zookeeper/
  2. Make sure that the permissions of all the directories and files are as they were before the upgrade.
  3. Start ZooKeeper using Cloudera Manager.
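Step 2's permission check can be done mechanically by comparing mode/owner/group listings of the backup and the restored dataDir. A sketch using GNU find (available on the supported Linux distributions); `%P` prints paths relative to the starting directory so the two listings line up:

```shell
# Emit "mode user group relative-path" for every entry, sorted.
perm_listing() {
  find "$1" -printf '%M %u %g %P\n' | sort
}

# Diff the backup against the restored directory; no output and a zero
# exit status mean the permissions match.
compare_perms() {
  perm_listing "$1" > "${TMPDIR:-/tmp}/perms_a.$$"
  perm_listing "$2" > "${TMPDIR:-/tmp}/perms_b.$$"
  diff "${TMPDIR:-/tmp}/perms_a.$$" "${TMPDIR:-/tmp}/perms_b.$$"
  rc=$?
  rm -f "${TMPDIR:-/tmp}/perms_a.$$" "${TMPDIR:-/tmp}/perms_b.$$"
  return $rc
}

# Example: compare_perms /var/lib/zookeeper_cm6_cdh5 /var/lib/zookeeper
```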

Roll Back HDFS

You cannot roll back HDFS while high availability is enabled; the rollback procedure in this topic creates a temporary configuration without high availability. Follow the steps in this section regardless of whether high availability is enabled.

  1. Roll back all of the JournalNodes. (Only required for clusters where high availability is enabled for HDFS.) Use the JournalNode backup you created when you backed up HDFS before upgrading to CDH 6.
    1. Log in to each Journal Node host and run the following commands:
      rm -rf /dfs/jn/ns1/current/*
      cp -rp /dfs/jn/ns1/previous/* /dfs/jn/ns1/current/
    2. Start the JournalNodes using Cloudera Manager:
      1. Go to the HDFS service.
      2. Select the Instances tab.
      3. Select all JournalNode roles from the list.
      4. Click Actions for Selected > Start.
  2. Roll back all of the NameNodes. Use the NameNode backup directory you created before upgrading CDH (/etc/hadoop/conf.rollback.namenode) to perform the following steps on all NameNode hosts:
    1. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file on all NameNode hosts (located in the temporary rollback directory) and update the keystore passwords with the actual cleartext passwords.
      The passwords will have values that look like this:
      <property>
          <name>ssl.server.keystore.password</name>
          <value>********</value>
        </property>
        <property>
          <name>ssl.server.keystore.keypassword</name>
          <value>********</value>
        </property>
      
    2. (TLS only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
  3. Edit the /etc/hadoop/conf.rollback.namenode/hdfs-site.xml file on all NameNode hosts and make the following changes:
    1. Delete the cloudera.navigator.client.config property.
    2. Delete the dfs.namenode.audit.loggers property.
    3. Change the path in the dfs.hosts property to the value shown in the example below. The file name, dfs_all_hosts.txt, may have been changed by a user. If so, substitute the correct file name.
      # Original version of the dfs.hosts property:
      <property>
      <name>dfs.hosts</name>
      <value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/dfs_all_hosts.txt</value>
      </property>
      # New version of the dfs.hosts property:
      <property>
      <name>dfs.hosts</name>
      <value>/etc/hadoop/conf.rollback.namenode/dfs_all_hosts.txt</value>
      </property>
    4. Edit the /etc/hadoop/conf.rollback.namenode/core-site.xml file and change the value of the net.topology.script.file.name property to /etc/hadoop/conf.rollback.namenode/topology.py. For example:
      # Original property
      <property>
      <name>net.topology.script.file.name</name>
      <value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/topology.py</value>
      </property>
      # New property
      <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/conf.rollback.namenode/topology.py</value>
      </property>
    5. Edit the /etc/hadoop/conf.rollback.namenode/topology.py file and change the value of MAP_FILE to point to the topology.map file in the rollback directory. For example:
      MAP_FILE = '/etc/hadoop/conf.rollback.namenode/topology.map'
    6. (Kerberos-enabled clusters only) Run the following command:
      sudo -u hdfs kinit hdfs/<NameNode Host name> -l 7d -kt /etc/hadoop/conf.rollback.namenode/hdfs.keytab
    7. Run the following command:
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.namenode namenode -rollback
    8. Restart the NameNodes and JournalNodes using Cloudera Manager:
      1. Go to the HDFS service.
      2. Select the Instances tab, and then select all Failover Controller, NameNode, and JournalNode roles from the list.
      3. Click Actions for Selected > Restart.
  4. Roll back the DataNodes.
    Use the DataNode rollback directory you created before upgrading CDH (/etc/hadoop/conf.rollback.datanode) to perform the following steps on all DataNode hosts:
    1. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file on all DataNode hosts (located in the temporary rollback directory) and update the keystore passwords (ssl.server.keystore.password and ssl.server.keystore.keypassword) with the actual cleartext passwords.
      The passwords will have values that look like this:
      <property>
          <name>ssl.server.keystore.password</name>
          <value>********</value>
        </property>
        <property>
          <name>ssl.server.keystore.keypassword</name>
          <value>********</value>
        </property>
      
    2. (TLS only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
    3. Edit the /etc/hadoop/conf.rollback.datanode/hdfs-site.xml file and remove the dfs.datanode.max.locked.memory property.
    4. Run the following command:
      cd /etc/hadoop/conf.rollback.datanode
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback

      After rolling back the DataNodes, terminate the console session by typing Control-C.

    5. If High Availability for HDFS is enabled, restart the HDFS service. In the Cloudera Manager Admin Console, go to the HDFS service and select Actions > Restart.
    6. If high availability is not enabled for HDFS, use the Cloudera Manager Admin Console to restart all NameNodes and DataNodes.
      1. Go to the HDFS service.
      2. Select the Instances tab
      3. Select all DataNode and NameNode roles from the list.
      4. Click Actions for Selected > Restart.
  5. If high availability is not enabled for HDFS, roll back the Secondary NameNode.
    1. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file on all Secondary NameNode hosts (located in the temporary rollback directory) and update the keystore passwords with the actual cleartext passwords.
      The passwords will have values that look like this:
      <property>
          <name>ssl.server.keystore.password</name>
          <value>********</value>
        </property>
        <property>
          <name>ssl.server.keystore.keypassword</name>
          <value>********</value>
        </property>
      
    2. (TLS only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
    3. Log in to the Secondary NameNode host and run the following commands:
      rm -rf /dfs/snn/*
      cd /etc/hadoop/conf.rollback.secondarynamenode/
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.secondarynamenode secondarynamenode -format
      
  6. Restart the HDFS service. Open the Cloudera Manager Admin Console, go to the HDFS service page, and select Actions > Restart.

    The Restart Command page displays the progress of the restart. Wait for the page to display the Successfully restarted service message before continuing.
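Several steps above delete a named <property> block from a Hadoop-style XML config (dfs.namenode.audit.loggers, hadoop.security.credential.provider.path, dfs.datanode.max.locked.memory). A sketch of doing this mechanically, assuming each tag sits on its own line as these generated files normally do; always review the output before replacing the original file:

```shell
# Filter stdin, dropping any <property>...</property> block whose <name>
# matches the first argument; everything else passes through unchanged.
remove_hadoop_property() {
  awk -v prop="$1" '
    /<property>/ { buf = $0; inblock = 1; keep = 1; next }
    inblock {
      buf = buf "\n" $0
      if (index($0, "<name>" prop "</name>")) keep = 0
      if ($0 ~ /<\/property>/) { if (keep) print buf; inblock = 0 }
      next
    }
    { print }
  '
}

# Example:
#   remove_hadoop_property dfs.namenode.audit.loggers \
#     < /etc/hadoop/conf.rollback.namenode/hdfs-site.xml > hdfs-site.xml.new
```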

Start the Key Management Server

Restart the Key Management Server. Open the Cloudera Manager Admin Console, go to the KMS service page, and select Actions > Start.

Start the HBase Service

Restart the HBase Service. Open the Cloudera Manager Admin Console, go to the HBase service page, and select Actions > Start.

If you encounter errors when starting HBase, delete the znode in ZooKeeper and then start HBase again:
  1. In Cloudera Manager, look up the value of the zookeeper.znode.parent property. The default value is /hbase.
  2. Connect to the ZooKeeper ensemble by running the following command from any HBase gateway host:
    zookeeper-client -server zookeeper_ensemble

    To find the value to use for zookeeper_ensemble, open the /etc/hbase/conf.cloudera.<HBase service name>/hbase-site.xml file on any HBase gateway host. Use the value of the hbase.zookeeper.quorum property.

    The ZooKeeper command-line interface opens.

  3. Enter the following command:
    rmr /hbase
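Looking up hbase.zookeeper.quorum in step 2 can be scripted. A sketch that assumes the value fits on a single line, as these generated configs normally write it:

```shell
# Print the <value> that follows a given <name> in a Hadoop-style XML config.
get_xml_property() {
  local file="$1" name="$2"
  grep -A1 "<name>${name}</name>" "$file" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Example:
#   zk=$(get_xml_property /etc/hbase/conf/hbase-site.xml hbase.zookeeper.quorum)
#   zookeeper-client -server "$zk"
```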

Restore CDH Databases

Restore the following databases from the CDH 5 backups:
  • Hive Metastore
  • Hue
  • Oozie
  • Sentry Server

The steps for backing up and restoring databases differ depending on the database vendor and version you select for your cluster and are beyond the scope of this document.

Start the Sentry Service

  1. Log in to the Cloudera Manager Admin Console.
  2. Go to the Sentry service.
  3. Click Actions > Start.

Roll Back Cloudera Search

  1. Start the HDFS, ZooKeeper, and Sentry services.
  2. Re-initialize the configuration metadata in ZooKeeper by running the following commands:
    1. export ZKCLI_JVM_FLAGS="-Djava.security.auth.login.config=~/solr-jaas.conf -DzkACLProvider=org.apache.solr.common.cloud.ConfigAwareSaslZkACLProvider"
    2. sudo -u solr mkdir /tmp/c5-config-backup
    3. sudo -u solr chmod 755 /tmp/c5-config-backup
    4. sudo -u solr hdfs dfs -copyToLocal /user/solr/upgrade_backup/zk_backup/* /tmp/c5-config-backup
    5. Parcel installations:
      export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH/lib/solr
      Package installations:
      export CDH_SOLR_HOME=/usr/lib/solr
    6. /opt/cloudera/cm/solr-upgrade/solr-rollback.sh zk-meta -c /tmp/c5-config-backup
  3. Re-initialize configuration metadata in the local file system:
    • On each host with a SOLR_SERVER role, run the following command, where <solr_data_directory> is the value of the Solr Data Directory configuration parameter in Cloudera Manager (the default is /var/lib/solr):
      rm -rf <solr_data_directory>/*
    • Inspect the sub-directories inside the <backup_location>/localfs_backup directory, where <backup_location> is the value of the Upgrade Backup Directory configuration parameter for Solr in Cloudera Manager. For each sub-directory:

      1. The sub-directory name is the internal role_id of the Solr server role on a particular host in Cloudera Manager. Identify the corresponding hostname by querying the Cloudera Manager database. To find the role_id:
        1. Log in to the Cloudera Manager Admin Console.
        2. Go to the HDFS File browser.
        3. Open the solr/upgrade_backup/localfs_backup file. The role_id is within this file.
      2. Copy the contents of the sub-directory to the identified host (for example, H1), into the location specified by the Solr Data Directory parameter in Cloudera Manager (the default is /var/lib/solr):
        • Log in to host H1.

        • Run the following command:
          sudo -u solr hdfs dfs -copyToLocal /user/solr/upgrade_backup/localfs_backup/<role_id> /var/lib/solr
  4. Start the Solr service.

Roll Back Hue

  1. Restore the file, app.reg, from your backup:
    • Parcel installations
      rm -rf /opt/cloudera/parcels/CDH/lib/hue/app.reg
      cp -rp app.reg_cm5_cdh5_backup /opt/cloudera/parcels/CDH/lib/hue/app.reg
    • Package Installations
      rm -rf /usr/lib/hue/app.reg
      cp -rp app.reg_cm5_cdh5_backup /usr/lib/hue/app.reg
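Whether the parcel or the package path applies can be detected rather than remembered. A sketch; the optional root argument exists only so the logic can be exercised against a scratch directory:

```shell
# Print the app.reg path for this host: the parcel layout if it exists,
# otherwise the package layout. $1 optionally prefixes a test root.
hue_app_reg_path() {
  local root="${1:-}"
  if [ -d "${root}/opt/cloudera/parcels/CDH/lib/hue" ]; then
    echo "${root}/opt/cloudera/parcels/CDH/lib/hue/app.reg"
  else
    echo "${root}/usr/lib/hue/app.reg"
  fi
}

# Example:
#   cp -rp app.reg_cm5_cdh5_backup "$(hue_app_reg_path)"
```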

Roll Back Kafka

A CDH 6 cluster running Kafka can be rolled back to the previous CDH 5/CDK version as long as the inter.broker.protocol.version and log.message.format.version properties have not been set to the new version or removed from the configuration.

To perform the rollback using Cloudera Manager:
  1. Activate the previous CDK parcel. Note that when rolling back Kafka from CDH 6 to CDH 5/CDK, the Kafka cluster restarts; a rolling restart is not supported for this scenario. See Activating a Parcel.
  2. Remove the following properties from the Kafka Broker Advanced Configuration Snippet (Safety Valve) configuration property.
    • inter.broker.protocol.version
    • log.message.format.version

Roll Back Sqoop 2

Upgrading to CDH 6.x required you to delete the Sqoop 2 service. To roll back your Sqoop 2 service:
  1. Add the Sqoop 2 service using Cloudera Manager.
  2. Restore the Sqoop 2 database from your backup. See the documentation for your database for details.

    If you are not using the default embedded Derby database for Sqoop 2, restore the database you have configured for Sqoop 2. Otherwise, restore the repository subdirectory of the Sqoop 2 metastore directory from your backup. This location is specified with the Sqoop 2 Server Metastore Directory property. The default location is /var/lib/sqoop2. For this default location, Derby database files are located in /var/lib/sqoop2/repository.

Deploy the Client Configuration

  1. On the Home > Status tab, click to the right of the cluster name and select Deploy Client Configuration.
  2. Click Deploy Client Configuration.

Restart the Cluster

  1. On the Home > Status tab, click to the right of the cluster name and select Restart.
  2. Click Restart in the confirmation screen. If you have enabled high availability for HDFS, you can choose Rolling Restart instead to minimize cluster downtime. The Command Details window shows the progress of restarting services.

    When All services successfully started appears, the task is complete and you can close the Command Details window.

Roll Back Cloudera Navigator Encryption Components

If you are rolling back any encryption components (Key Trustee Server, Key Trustee KMS, HSM KMS, Key HSM, or Navigator Encrypt), use the procedures in the following sections.

Roll Back Key Trustee Server

To roll back Key Trustee Server, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.

Roll Back Key HSM

To roll back Key HSM:
  1. Install the version of Navigator Key HSM to which you wish to roll back.
    Install the Navigator Key HSM package using yum:
    sudo yum downgrade keytrustee-keyhsm

    Cloudera Navigator Key HSM is installed to the /usr/share/keytrustee-server-keyhsm directory by default.

  2. Rename Previously-Created Configuration Files

    For Key HSM major version rollbacks, previously-created configuration files do not authenticate with the HSM and Key Trustee Server, so you must recreate these files by re-executing the setup and trust commands. First, navigate to the Key HSM installation directory and rename the application.properties, keystore, and truststore files:

    cd /usr/share/keytrustee-server-keyhsm/
    mv application.properties application.properties.bak
    mv keystore keystore.bak
    mv truststore truststore.bak
  3. Initialize Key HSM
    Run the service keyhsm setup command in conjunction with the name of the target HSM distribution:
    sudo service keyhsm setup [keysecure|thales|luna]

    For more details, see Initializing Navigator Key HSM.

  4. Establish Trust Between Key HSM and the Key Trustee Server
    The Key HSM service must explicitly trust the Key Trustee Server certificate (presented during TLS handshake). To establish this trust, run the following command:
    sudo keyhsm trust /path/to/key_trustee_server/cert

    For more details, see Establish Trust from Key HSM to Key Trustee Server.

  5. Start the Key HSM Service
    Start the Key HSM service:
    sudo service keyhsm start
  6. Establish Trust Between Key Trustee Server and Key HSM
    Establish trust between the Key Trustee Server and the Key HSM by specifying the path to the private key and certificate:
    sudo ktadmin keyhsm --server https://keyhsm01.example.com:9090 \
    --client-certfile /etc/pki/cloudera/certs/mycert.crt \
    --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust
    For a password-protected Key Trustee Server private key, add the --passphrase argument to the command (enter the password when prompted):
    sudo ktadmin keyhsm --passphrase \
    --server https://keyhsm01.example.com:9090 \
    --client-certfile /etc/pki/cloudera/certs/mycert.crt \
    --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust

    For additional details, see Integrate Key HSM and Key Trustee Server.

  7. Remove Configuration Files From Previous Installation
    After completing the rollback, remove the saved configuration files from the previous installation:
    cd /usr/share/keytrustee-server-keyhsm/
    rm application.properties.bak
    rm keystore.bak
    rm truststore.bak

Roll Back Key Trustee KMS Parcels

To roll back Key Trustee KMS parcels, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.

Roll Back Key Trustee KMS Packages

To roll back Key Trustee KMS packages:

  1. After Setting Up an Internal Repository, configure the Key Trustee KMS host to use the repository. See Configuring Hosts to Use the Internal Repository for more information.
  2. Downgrade the keytrustee-provider package using the appropriate command for your operating system:

    RHEL-compatible

    sudo yum downgrade keytrustee-keyprovider

Roll Back HSM KMS Parcels

To roll back the HSM KMS parcels, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.

See Upgrading HSM KMS Using Packages for detailed instructions on using packages.

Roll Back HSM KMS Packages

To roll back HSM KMS packages:
  1. After Setting Up an Internal Repository, configure the HSM KMS host to use the repository. See Configuring Hosts to Use the Internal Repository for more information.
  2. Downgrade the keytrustee-provider package using the appropriate command for your operating system:

    RHEL-compatible

    sudo yum downgrade keytrustee-keyprovider

Roll Back Navigator Encrypt

To roll back Cloudera Navigator Encrypt:

  1. If you have configured and are using an RSA master key file with OAEP padding, then you must revert this setting to its original value:
    # navencrypt key --change
  2. Stop the Navigator Encrypt mount service:
    $ sudo /etc/init.d/navencrypt-mount stop
  3. To fully downgrade Navigator Encrypt, manually downgrade all of the associated Navigator Encrypt packages (in the order listed):
    1. navencrypt
    2. navencrypt-kernel-module
    3. cloudera-navencryptfs-kmp-<kernel_flavor> for SLES
    4. libkeytrustee
  4. Restart the Navigator Encrypt mount service:
    $ sudo /etc/init.d/navencrypt-mount start
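The ordering requirement in step 3 can be captured as a list so that no package is skipped. This dry-run sketch only prints the yum commands; on SLES, insert the cloudera-navencryptfs-kmp-<kernel_flavor> package between the kernel module and libkeytrustee, and substitute zypper for yum:

```shell
# Print the downgrade commands in the required order. Replace `echo` with
# actual execution once you have reviewed the output.
NAVENCRYPT_PKGS="navencrypt navencrypt-kernel-module libkeytrustee"
for pkg in $NAVENCRYPT_PKGS; do
  echo "sudo yum downgrade -y $pkg"
done
```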

(Optional) Cloudera Manager Rollback Steps

After you complete the rollback steps, your cluster is using Cloudera Manager 6 to manage your CDH 5 cluster. You can continue to use Cloudera Manager 6 to manage your CDH 5 cluster, or you can downgrade to Cloudera Manager 5 by following these steps:

Stop Cloudera Manager

  1. Stop the Cloudera Management Service.
    1. Log in to the Cloudera Manager Admin Console.
    2. Select Clusters > Cloudera Management Service.
    3. Select Actions > Stop.
  2. Stop the Cloudera Manager Server.
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl stop cloudera-scm-server
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-server stop
  3. Hard stop the Cloudera Manager agents. Run the following command on all hosts:
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl stop supervisord
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-agent hard_stop
  4. Back up the repository directory. You can create a top-level backup directory and an environment variable to reference the directory using the following commands. You can also substitute another directory path in the backup commands below:
    export CM_BACKUP_DIR="`date +%F`-CM"
    mkdir -p $CM_BACKUP_DIR
  5. Back up the existing repository directory.
    RHEL / CentOS
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/yum.repos.d
    SLES
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/zypp/repos.d
    Debian / Ubuntu
    sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/apt/sources.list.d

Restore the Cloudera Manager 5 Repository Files

Copy the repository directory from the backup taken before upgrading to Cloudera Manager 6.x. The following example is for RHEL; use /etc/zypp/repos.d for SLES or /etc/apt/sources.list.d for Debian/Ubuntu:

rm -rf /etc/yum.repos.d/*
cp -rp /etc/yum.repos.d_cm5cdh5/* /etc/yum.repos.d/
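If your pre-upgrade backup is a repository.tar archive (as produced by the backup commands in this procedure) rather than a directory copy, the restore pattern looks like this. The archive path passed in is an assumption; tar strips the leading / when creating the archive, so extraction is anchored at /:

```shell
#!/bin/sh
# Sketch: restore a repository directory from a tar backup created with
# the backup commands in this procedure. The paths in the comments are
# examples only -- use your own backup location.
restore_repo() {
    backup_tar=$1     # e.g. CM5CDH5/repository.tar
    repo_dir=$2       # e.g. /etc/yum.repos.d
    rm -rf "${repo_dir:?}"/*      # :? aborts if repo_dir is empty/unset
    tar -xf "$backup_tar" -C /    # paths were stored relative to /
}
```

Run it as root, for example: restore_repo CM5CDH5/repository.tar /etc/yum.repos.d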

Restore Packages

  1. Run the following commands on all hosts:
    RHEL
    sudo yum remove cloudera-manager-daemons cloudera-manager-agent
    sudo yum clean all
    sudo yum install cloudera-manager-agent
    SLES
    sudo zypper remove cloudera-manager-daemons cloudera-manager-agent
    sudo zypper refresh -s
    sudo zypper install cloudera-manager-agent
    Ubuntu or Debian
    sudo apt-get purge cloudera-manager-daemons cloudera-manager-agent
    sudo apt-get update
    sudo apt-get install cloudera-manager-agent
  2. Run the following commands on the Cloudera Manager server host:
    RHEL
    sudo yum remove cloudera-manager-server
    sudo yum install cloudera-manager-server
    SLES
    sudo zypper remove cloudera-manager-server
    sudo zypper install cloudera-manager-server 
    Ubuntu or Debian
    sudo apt-get purge cloudera-manager-server
    sudo apt-get install cloudera-manager-server
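All three OS variants above follow one pattern: remove the packages, refresh the package metadata, then reinstall from the restored CM 5 repository. A dry-run dispatch sketch (the package-manager probe is an assumption):

```shell
#!/bin/sh
# Dry-run sketch: print the agent reinstall commands for whichever
# package manager is present (remove the echo to execute). Run the
# same pattern with cloudera-manager-server on the Server host.
agent_restore_plan() {
    if command -v yum >/dev/null 2>&1; then
        echo "sudo yum remove cloudera-manager-daemons cloudera-manager-agent"
        echo "sudo yum clean all"
        echo "sudo yum install cloudera-manager-agent"
    elif command -v zypper >/dev/null 2>&1; then
        echo "sudo zypper remove cloudera-manager-daemons cloudera-manager-agent"
        echo "sudo zypper refresh -s"
        echo "sudo zypper install cloudera-manager-agent"
    else
        echo "sudo apt-get purge cloudera-manager-daemons cloudera-manager-agent"
        echo "sudo apt-get update"
        echo "sudo apt-get install cloudera-manager-agent"
    fi
}

agent_restore_plan
```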

Restore Cloudera Manager Databases

Restore the Cloudera Manager databases from the backup of Cloudera Manager that was taken before upgrading to Cloudera Manager 6. See the procedures provided by your database vendor.

These databases include the following:
  • Cloudera Manager Server
  • Reports Manager
  • Navigator Audit Server
  • Navigator Metadata Server
  • Activity Monitor (only used for MapReduce 1 monitoring)
Here is a sample command to restore a MySQL database:
mysql -u username -ppassword --host=hostname cm < backup.sql
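If all of these databases live on one MySQL host, the restore can be looped. The database names used here (cm, rman, nav, navms, amon) and the dump-file layout are assumptions — use the names from your own db.properties files and the backups you actually took:

```shell
#!/bin/sh
# Dry-run sketch: print one mysql restore command per Cloudera Manager
# database. Database names, credentials, host, and dump paths are all
# assumptions -- substitute the values from your deployment.
BACKUP_DIR=cm5_db_backups    # hypothetical location of the .sql dumps

db_restore_plan() {
    for db in cm rman nav navms amon; do
        echo "mysql -u username -ppassword --host=hostname $db < $BACKUP_DIR/$db-backup.sql"
    done
}

db_restore_plan
```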

Restore Cloudera Manager Server

Use the backup of Cloudera Manager 5.x taken before upgrading to Cloudera Manager 6.x for the following steps:

  1. If you used the backup commands provided in Backing Up Cloudera Manager, extract the Cloudera Manager 5 backup archives you created:
    tar -xf CM5CDH5/cloudera-scm-agent.tar -C CM5CDH5/
    tar -xf CM5CDH5/cloudera-scm-server.tar -C CM5CDH5/
  2. On the host where the Event Server role is configured to run, restore the Event Server directory from the Cloudera Manager 5 backup.
    rm -rf /var/lib/cloudera-scm-eventserver/*
    cp -rp /var/lib/cloudera-scm-eventserver_cm5cdh5/* /var/lib/cloudera-scm-eventserver/
  3. Remove the Agent runtime state. Run the following command on all hosts:
    rm -rf /var/run/cloudera-scm-agent /var/lib/cloudera-scm-agent/response.avro
  4. On the host where the Service Monitor is running, restore the Service Monitor directory:
    rm -rf /var/lib/cloudera-service-monitor/*
    cp -rp /var/lib/cloudera-service-monitor_cm5cdh5/* /var/lib/cloudera-service-monitor/
  5. On the host where the Host Monitor is running, restore the Host Monitor directory:
    rm -rf /var/lib/cloudera-host-monitor/*
    cp -rp /var/lib/cloudera-host-monitor_cm5cdh5/* /var/lib/cloudera-host-monitor/
  6. Restore the Cloudera Navigator Solr storage directory from the CM5/CDH 5 backup.
    rm -rf /var/lib/cloudera-scm-navigator/*
    cp -rp /var/lib/cloudera-scm-navigator_cm5cdh5/* /var/lib/cloudera-scm-navigator/
  7. On the Cloudera Manager Server, restore the /etc/cloudera-scm-server/db.properties file.
    rm -rf /etc/cloudera-scm-server/db.properties
    cp -rp cm5cdh5/etc/cloudera-scm-server/db.properties /etc/cloudera-scm-server/db.properties
  8. On each host in the cluster, restore the /etc/cloudera-scm-agent/config.ini file from your backup.
    rm -rf /etc/cloudera-scm-agent/config.ini
    cp -rp cm5cdh5/etc/cloudera-scm-agent/config.ini /etc/cloudera-scm-agent/config.ini
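Steps 2 through 6 repeat one pattern: empty the live directory, then copy the _cm5cdh5 backup into it. A helper capturing that pattern is sketched below; run it as root on the host where each role lives, with that host's actual backup path:

```shell
#!/bin/sh
# Sketch: the clear-then-copy restore pattern used in the steps above.
# The example paths in the comments come from those steps.
restore_dir() {
    backup=$1    # e.g. /var/lib/cloudera-scm-eventserver_cm5cdh5
    target=$2    # e.g. /var/lib/cloudera-scm-eventserver
    rm -rf "${target:?}"/*        # :? aborts if target is empty/unset
    cp -rp "$backup"/. "$target"/ # /. copies contents, not the dir itself
}
```

For example, restore_dir /var/lib/cloudera-scm-navigator_cm5cdh5 /var/lib/cloudera-scm-navigator reproduces step 6.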

Start the Cloudera Manager Server and Agents

  • Start the Cloudera Manager Server.
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl start cloudera-scm-server
    If the Cloudera Manager Server starts without errors, the command displays no output.
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-server start
    You should see the following:
    Starting cloudera-scm-server: [ OK ]
  • Hard Restart the Cloudera Manager Agent.
    RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher
    sudo systemctl stop supervisord
    sudo systemctl start cloudera-scm-agent
    RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04
    sudo service cloudera-scm-agent hard_restart
  • Start the Cloudera Management Service.
    1. Log in to the Cloudera Manager Admin Console.
    2. Select Clusters > Cloudera Management Service.
    3. Select Actions > Start.
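Once the Cloudera Management Service is up, a quick status pass on each host confirms the rollback took. This sketch only prints the status commands for the host's init system; the systemd probe is an assumption:

```shell
#!/bin/sh
# Dry-run sketch: print the status commands to verify the rolled-back
# Cloudera Manager 5 server and agents are running.
status_plan() {
    if command -v systemctl >/dev/null 2>&1; then
        echo "sudo systemctl status cloudera-scm-server"
        echo "sudo systemctl status cloudera-scm-agent"
    else
        echo "sudo service cloudera-scm-server status"
        echo "sudo service cloudera-scm-agent status"
    fi
}

status_plan
```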