Rolling Back a CDH 5 to CDH 6 Upgrade
In a typical upgrade, you first upgrade Cloudera Manager from version 5.x to version 6.x, and then you use the upgraded version of Cloudera Manager 6 to upgrade CDH 5 to CDH 6. (See Upgrading the CDH Cluster.) If you want to roll back this upgrade, follow these steps to roll back your cluster to its state prior to the upgrade.
You can roll back to CDH 5 after upgrading to CDH 6 only if the HDFS upgrade has not been finalized. The rollback restores your CDH cluster to the state it was in before the upgrade, including Kerberos and TLS/SSL configurations.
Continue reading:
- Review Limitations
- Stop the Cluster
- (Parcels) Downgrade the Software
- Stop Cloudera Manager
- (Packages) Downgrade the Software
- Restore Cloudera Manager Databases
- Restore Cloudera Manager Server
- Start Cloudera Manager
- Roll Back ZooKeeper
- Roll Back HDFS
- Start the Key Management Server
- Start the HBase Service
- Restore CDH Databases
- Start the Sentry Service
- Roll Back Cloudera Search
- Roll Back Hue
- Roll Back Kafka
- Roll Back Sqoop 2
- Deploy the Client Configuration
- Restart the Cluster
- Roll Back Cloudera Navigator Encryption Components
- (Optional) Cloudera Manager Rollback Steps
Review Limitations
- HDFS – If you have finalized the HDFS upgrade, you cannot roll back your cluster.
- Configuration changes, including the addition of new services or roles after the upgrade, are not retained after rolling back Cloudera Manager.
Cloudera recommends that you not make configuration changes or add new services and roles until you have finalized the HDFS upgrade and no longer require the option to roll back your upgrade.
- HBase – If your cluster is configured to use HBase replication, data written to HBase after the upgrade might not be replicated to peers when you start your rollback. This topic does not describe how to determine which, if any, peers have the replicated data and how to roll back that data. For more information about HBase replication, see HBase Replication.
- Sqoop 1 – Because of the changes introduced in Sqoop metastore logic, the metastore database that is created by the CDH 6.x version of Sqoop cannot be used by earlier versions.
- Sqoop 2 – As described in the upgrade process, Sqoop 2 had to be stopped and deleted before the upgrade and is therefore not available after the rollback.
- Kafka – Once the Kafka log format and protocol version configurations (the inter.broker.protocol.version and log.message.format.version properties) are set to the new version (or left blank, which means to use the latest version), Kafka rollback is not possible.
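For reference, rollback remains possible only while both properties are pinned to the pre-upgrade version in the Kafka Broker Advanced Configuration Snippet (Safety Valve). The version numbers below are illustrative only, not values from this document; substitute your own pre-upgrade versions:
inter.broker.protocol.version=0.11.0
log.message.format.version=0.11.0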
Stop the Cluster
- On the Home > Status tab, click the Actions menu to the right of the cluster name and select Stop.
- Click Stop in the confirmation screen. The Command Details window shows the progress of stopping services.
When All services successfully stopped appears, the task is complete and you can close the Command Details window.
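If you script your rollbacks, the same stop can be issued through the Cloudera Manager REST API. In this sketch the host, port, credentials, cluster name, and API version are placeholders, not values from this document:
# Stop all services in the named cluster via the CM API
curl -u admin:admin -X POST \
  'http://cm-host.example.com:7180/api/v19/clusters/Cluster%201/commands/stop'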
(Parcels) Downgrade the Software
Follow these steps only if your cluster was upgraded using Cloudera parcels.
- Log in to the Cloudera Manager Admin Console.
- Select Hosts > Parcels.
A list of parcels displays.
- Locate the CDH 5 parcel and click Activate. (This automatically deactivates the CDH 6 parcel.) See Activating a Parcel for more information. If the parcel is not available, use the Download button to download the parcel.
- If you include any additional components in your cluster, such as Search or Impala, click Activate for those parcels.
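Parcel activation can likewise be scripted through the Cloudera Manager REST API. As above, the host, credentials, cluster name, API version, and the CDH 5 parcel version shown are placeholders for illustration:
# Activate a previously downloaded and distributed CDH 5 parcel
curl -u admin:admin -X POST \
  'http://cm-host.example.com:7180/api/v19/clusters/Cluster%201/parcels/products/CDH/versions/5.14.0-1.cdh5.14.0.p0.24/commands/activate'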
Stop Cloudera Manager
- Stop the Cloudera Management Service.
- Log in to the Cloudera Manager Admin Console.
- Select Clusters > Cloudera Management Service.
- Select Actions > Stop.
- Stop the Cloudera Manager Server.
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl stop cloudera-scm-server
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-server stop
- Hard stop the Cloudera Manager agents. Run the following command on all hosts:
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl stop supervisord
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-agent hard_stop
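On a large cluster it can help to script the hard stop across hosts. A minimal sketch, assuming a hosts.txt file with one hostname per line, passwordless sudo on each host, and no requiretty restriction:
# Try the systemd path first; fall back to the SysV hard_stop.
# ssh -n prevents ssh from consuming the rest of hosts.txt.
while read -r host; do
  ssh -n "$host" 'sudo systemctl stop supervisord 2>/dev/null || sudo service cloudera-scm-agent hard_stop'
done < hosts.txt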
(Packages) Downgrade the Software
Follow these steps only if your cluster was upgraded using packages.
Run Package Commands
- Log in as a privileged user to all hosts in your cluster.
- Back up the repository directory. You can create a top-level backup directory and an environment variable to reference the
directory using the following commands. You can also substitute another directory path in the backup commands below:
export CM_BACKUP_DIR="`date +%F`-CM"
mkdir -p $CM_BACKUP_DIR
- RHEL / CentOS:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/yum.repos.d
- SLES:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/zypp/repos.d
- Debian / Ubuntu:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/apt/sources.list.d
- Restore the CDH 5 repository directory from the backup taken before upgrading to CDH 6. For example:
tar -xf CM6CDH5/repository.tar -C CM6CDH5/
- RHEL / CentOS:
rm -rf /etc/yum.repos.d/*
cp -rp CM6CDH5/etc/yum.repos.d/* /etc/yum.repos.d/
- SLES:
rm -rf /etc/zypp/repos.d/*
cp -rp CM6CDH5/etc/zypp/repos.d/* /etc/zypp/repos.d/
- Debian / Ubuntu:
rm -rf /etc/apt/sources.list.d/*
cp -rp CM6CDH5/etc/apt/sources.list.d/* /etc/apt/sources.list.d/
- Run the following command to uninstall CDH 6 and reinstall the CDH 5 packages:
- RHEL / CentOS:
sudo yum clean all
sudo yum remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
sudo yum -y install --setopt=timeout=180 bigtop-utils solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
- SLES:
sudo zypper clean --all
sudo zypper remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
sudo zypper install solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
- Debian / Ubuntu:
sudo apt-get update
sudo apt-get remove avro-tools flume-ng avro-libs hadoop-hdfs-fuse hadoop-hdfs-nfs3 hadoop-httpfs hadoop-kms hbase-solr hive-hbase hive-webhcat hue impala impala-shell kafka kite kudu oozie pig search sentry sentry-hdfs-plugin solr-crunch solr-mapreduce spark-core spark-python sqoop zookeeper parquet hbase solr
sudo apt-get update
sudo apt-get install solr-doc oozie-client hue-spark kite crunch-doc sqoop hue-rdbms hbase-solr hue-plugins pig spark-python oozie hadoop-kms bigtop-tomcat hbase hue-sqoop sqoop2 spark-core hadoop-mapreduce avro-tools hadoop-hdfs avro-libs hadoop sqoop2-client mahout avro-doc hue-impala hbase-solr-doc hive-jdbc crunch zookeeper hadoop-hdfs-nfs3 bigtop-jsvc hue-common hue-hbase hadoop-client hive-webhcat parquet-format hue-beeswax keytrustee-keyprovider hue-pig llama hive-hcatalog kudu kafka solr hue-search hive-hbase search solr-crunch flume-ng hadoop-httpfs hue-security sentry hive sentry-hdfs-plugin hadoop-yarn hadoop-hdfs-fuse parquet hadoop-0.20-mapreduce impala-shell impala hue-zookeeper solr-mapreduce
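After the reinstall completes, it is worth spot-checking that the packages are CDH 5 builds again; CDH 5 package releases carry a cdh5 tag in their version strings. A quick check on RHEL-compatible systems:
rpm -q hadoop hbase zookeeper
yum list installed | grep -i cdh5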
Restore Cloudera Manager Databases
Restore the Cloudera Manager databases from the backup taken before the upgrade. The procedure depends on your database vendor and version; see the vendor documentation:
- MariaDB 5.5: http://mariadb.com/kb/en/mariadb/backup-and-restore-overview/
- MySQL 5.5: http://dev.mysql.com/doc/refman/5.5/en/backup-and-recovery.html
- MySQL 5.6: http://dev.mysql.com/doc/refman/5.6/en/backup-and-recovery.html
- PostgreSQL 8.4: https://www.postgresql.org/docs/8.4/static/backup.html
- PostgreSQL 9.2: https://www.postgresql.org/docs/9.2/static/backup.html
- PostgreSQL 9.3: https://www.postgresql.org/docs/9.3/static/backup.html
- Oracle 11gR2: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/toc.htm
- HyperSQL: http://hsqldb.org/doc/guide/management-chapt.html#mtc_backup
Restore Cloudera Manager Server
Use the backup of CDH that was taken before the upgrade to restore Cloudera Manager Server files and directories. Substitute the path to your backup directory for cm6_cdh5 in the following steps:
- On the host where the Event Server role is configured to run, restore the Event Server directory from the CM 6/CDH 5 backup:
rm -rf /var/lib/cloudera-scm-eventserver/*
cp -rp /var/lib/cloudera-scm-eventserver_cm6_cdh5/* /var/lib/cloudera-scm-eventserver/
- Remove the Agent runtime state. Run the following command on all hosts:
rm -rf /var/run/cloudera-scm-agent /var/lib/cloudera-scm-agent/response.avro
This command may return a message similar to: rm: cannot remove ‘/var/run/cloudera-scm-agent/process’: Device or resource busy. You can ignore this message.
- On the host where the Service Monitor is running, restore the Service Monitor directory:
rm -rf /var/lib/cloudera-service-monitor/*
cp -rp /var/lib/cloudera-service-monitor_cm6_cdh5/* /var/lib/cloudera-service-monitor/
- On the host where the Host Monitor is running, restore the Host Monitor directory:
rm -rf /var/lib/cloudera-host-monitor/*
cp -rp /var/lib/cloudera-host-monitor_cm6_cdh5/* /var/lib/cloudera-host-monitor/
- Restore the Cloudera Navigator storage directory from the CM 6/CDH 5 backup:
rm -rf /var/lib/cloudera-scm-navigator/*
cp -rp /var/lib/cloudera-scm-navigator_cm6_cdh5/* /var/lib/cloudera-scm-navigator/
Start Cloudera Manager
- Log in to the Cloudera Manager Server host.
ssh my_cloudera_manager_server_host
- Start the Cloudera Manager Server.
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl start cloudera-scm-server
If the Cloudera Manager Server starts without errors, no response displays.
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-server start
You should see the following:
Starting cloudera-scm-server: [ OK ]
- Start the Cloudera Manager Agent.
Run the following commands on all cluster hosts:
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl start cloudera-scm-agent
If the agent starts without errors, no response displays.
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-agent start
You should see the following:
Starting cloudera-scm-agent: [ OK ]
- Start the Cloudera Management Service.
- Log in to the Cloudera Manager Admin Console.
- Select Clusters > Cloudera Management Service.
- Select Actions > Start.
Roll Back ZooKeeper
- Using the ZooKeeper backup that you created when backing up your CDH 5.x cluster, restore the contents of the dataDir on each ZooKeeper server. These files are located in the directory specified by the dataDir property in the ZooKeeper configuration; the default location is /var/lib/zookeeper. For example:
rm -rf /var/lib/zookeeper/*
cp -rp /var/lib/zookeeper_cm6_cdh5/* /var/lib/zookeeper/
- Make sure that the permissions of all the directories and files are as they were before the upgrade.
- Start ZooKeeper using Cloudera Manager.
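Before moving on, you can confirm each server rejoined the ensemble with ZooKeeper's four-letter commands. The hostname below is a placeholder; 2181 is the default client port:
echo ruok | nc zk01.example.com 2181   # a healthy server replies: imok
echo stat | nc zk01.example.com 2181   # shows the server's mode (leader or follower)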
Roll Back HDFS
You cannot roll back HDFS while high availability is enabled. The rollback procedure in this topic creates a temporary configuration without high availability. Regardless of whether high availability is enabled, follow the steps in this section.
- Roll back all of the JournalNodes. (Required only for clusters where high availability is enabled for HDFS.) Use the JournalNode backup you created when you backed up HDFS before upgrading to CDH 6.
- Log in to each JournalNode host and run the following commands:
rm -rf /dfs/jn/ns1/current/*
cp -rp /dfs/jn/ns1/previous/* /dfs/jn/ns1/current/
- Start the JournalNodes using Cloudera Manager:
- Go to the HDFS service.
- Select the Instances tab.
- Select all JournalNode roles from the list.
- Click Actions for Selected > Start.
- Roll back all of the NameNodes. Use the NameNode backup directory you created before upgrading CDH (/etc/hadoop/conf.rollback.namenode) to perform the
following steps on all NameNode hosts:
- (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file (located in the temporary rollback directory) on all NameNode hosts and update the keystore passwords with the actual cleartext passwords.
The passwords will have values that look like this:
<property>
  <name>ssl.server.keystore.password</name>
  <value>********</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>********</value>
</property>
- (TLS only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
- Edit the /etc/hadoop/conf.rollback.namenode/hdfs-site.xml file on all NameNode hosts and make the following changes:
- Delete the cloudera.navigator.client.config property.
- Delete the dfs.namenode.audit.loggers property.
- Change the path in the dfs.hosts property to the value shown in the example below. The file name, dfs_all_hosts.txt, may
have been changed by a user. If so, substitute the correct file name.
# Original version of the dfs.hosts property:
<property>
  <name>dfs.hosts</name>
  <value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/dfs_all_hosts.txt</value>
</property>
# New version of the dfs.hosts property:
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf.rollback.namenode/dfs_all_hosts.txt</value>
</property>
- Edit the /etc/hadoop/conf.rollback.namenode/core-site.xml file and change the value of the net.topology.script.file.name property to point to the copy under /etc/hadoop/conf.rollback.namenode. For example:
# Original property
<property>
  <name>net.topology.script.file.name</name>
  <value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/topology.py</value>
</property>
# New property
<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf.rollback.namenode/topology.py</value>
</property>
- Edit the /etc/hadoop/conf.rollback.namenode/topology.py file and change the value of MAP_FILE to point to the copy under /etc/hadoop/conf.rollback.namenode. For example:
MAP_FILE = '/etc/hadoop/conf.rollback.namenode/topology.map'
- (TLS-enabled clusters only) Run the following command:
sudo -u hdfs kinit hdfs/<NameNode Host name> -l 7d -kt /etc/hadoop/conf.rollback.namenode/hdfs.keytab
- Run the following command:
sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.namenode namenode -rollback
- Restart the NameNodes and JournalNodes using Cloudera Manager:
- Go to the HDFS service.
- Select the Instances tab, and then select all Failover Controller, NameNode, and JournalNode roles from the list.
- Click Actions for Selected > Restart.
- Roll back the DataNodes.
Use the DataNode rollback directory you created before upgrading CDH (/etc/hadoop/conf.rollback.datanode) to perform the following steps on all DataNode hosts:
- (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file (located in the temporary rollback directory) on all DataNode hosts and update the keystore passwords (ssl.server.keystore.password and ssl.server.keystore.keypassword) with the actual cleartext passwords.
The passwords will have values that look like this:
<property>
  <name>ssl.server.keystore.password</name>
  <value>********</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>********</value>
</property>
- (TLS only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
- Edit the /etc/hadoop/conf.rollback.datanode/hdfs-site.xml file and remove the dfs.datanode.max.locked.memory property.
- Run the following command:
cd /etc/hadoop/conf.rollback.datanode
sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback
After rolling back the DataNodes, terminate the console session by typing Control-C.
- If high availability for HDFS is enabled, restart the HDFS service. In the Cloudera Manager Admin Console, go to the HDFS service and select Actions > Restart.
- If high availability is not enabled for HDFS, use the Cloudera Manager Admin Console to restart all NameNodes and DataNodes.
- Go to the HDFS service.
- Select the Instances tab.
- Select all DataNode and NameNode roles from the list.
- Click Actions for Selected > Restart.
- If high availability is not enabled for HDFS, roll back the Secondary NameNode.
- (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file (located in the temporary rollback directory) on all Secondary NameNode hosts and update the keystore passwords with the actual cleartext passwords.
The passwords will have values that look like this:
<property>
  <name>ssl.server.keystore.password</name>
  <value>********</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>********</value>
</property>
- (TLS only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
- Log in to the Secondary NameNode host and run the following commands:
rm -rf /dfs/snn/*
cd /etc/hadoop/conf.rollback.secondarynamenode/
sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.secondarynamenode secondarynamenode -format
- Restart the HDFS service. Open the Cloudera Manager Admin Console, go to the HDFS service page, and select Actions > Restart.
The Restart Command page displays the progress of the restart. Wait for the page to display the Successfully restarted service message before continuing.
Start the Key Management Server
Restart the Key Management Server. Open the Cloudera Manager Admin Console, go to the KMS service page, and select Actions > Start.
Start the HBase Service
Restart the HBase service. Open the Cloudera Manager Admin Console, go to the HBase service page, and select Actions > Start.
If HBase does not start cleanly, you may need to clear its state in ZooKeeper and then start the service again:
- In Cloudera Manager, look up the value of the zookeeper.znode.parent property. The default value is /hbase.
- Connect to the ZooKeeper ensemble by running the following command from any HBase gateway host:
zookeeper-client -server zookeeper_ensemble
To find the value to use for zookeeper_ensemble, open the /etc/hbase/conf.cloudera.<HBase service name>/hbase-site.xml file on any HBase gateway host. Use the value of the hbase.zookeeper.quorum property.
The ZooKeeper command-line interface opens.
- Enter the following command:
rmr /hbase
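A sample session might look like the following; the ensemble string is a placeholder, and if zookeeper.znode.parent is not /hbase, substitute its actual value in the rmr command:
zookeeper-client -server zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181
[zk: zk01.example.com:2181(CONNECTED) 0] rmr /hbase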
Restore CDH Databases
Restore the following databases from their CDH 5 backups:
- Hive Metastore
- Hue
- Oozie
- Sentry Server
The steps for backing up and restoring databases differ depending on the database vendor and version you select for your cluster and are beyond the scope of this document. See the vendor documentation for details:
- MariaDB 5.5: http://mariadb.com/kb/en/mariadb/backup-and-restore-overview/
- MySQL 5.5: http://dev.mysql.com/doc/refman/5.5/en/backup-and-recovery.html
- MySQL 5.6: http://dev.mysql.com/doc/refman/5.6/en/backup-and-recovery.html
- PostgreSQL 8.4: https://www.postgresql.org/docs/8.4/static/backup.html
- PostgreSQL 9.2: https://www.postgresql.org/docs/9.2/static/backup.html
- PostgreSQL 9.3: https://www.postgresql.org/docs/9.3/static/backup.html
- Oracle 11gR2: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/toc.htm
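For example, a MySQL-backed Hive Metastore could be restored as follows; the database name (metastore) and dump file name are assumptions for illustration:
# Restore the Hive Metastore database from the pre-upgrade dump
mysql -u root -p metastore < metastore_cm5cdh5_backup.sql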
Start the Sentry Service
- Log in to the Cloudera Manager Admin Console.
- Go to the Sentry service.
- Click Actions > Start.
Roll Back Cloudera Search
- Start the HDFS, ZooKeeper, and Sentry services.
- Re-initialize the configuration metadata in ZooKeeper by running the following commands:
export ZKCLI_JVM_FLAGS="-Djava.security.auth.login.config=~/solr-jaas.conf -DzkACLProvider=org.apache.solr.common.cloud.ConfigAwareSaslZkACLProvider"
sudo -u solr mkdir /tmp/c5-config-backup
sudo -u solr chmod 755 /tmp/c5-config-backup
sudo -u solr hdfs dfs -copyToLocal /user/solr/upgrade_backup/zk_backup/* /tmp/c5-config-backup
Parcel installations:
export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH/lib/solr
Package installations:
export CDH_SOLR_HOME=/usr/lib/solr
/opt/cloudera/cm/solr-upgrade/solr-rollback.sh zk-meta -c /tmp/c5-config-backup
- Re-initialize the configuration metadata in the local file system:
- On each host configured with the SOLR_SERVER role, run the following command:
rm -rf <solr_data_directory>/*
The value of <solr_data_directory> is configured with the Cloudera Manager parameter Solr Data Directory (the default is /var/lib/solr).
- Inspect the sub-directories inside the <backup_location>/localfs_backup directory, where <backup_location> is the value of the Upgrade Backup Directory configuration parameter for Solr in Cloudera Manager. For each sub-directory:
- The sub-directory name is the internal role_id of the Solr server on a particular host in Cloudera Manager. Identify the corresponding hostname by querying the Cloudera Manager database. To find the role_id:
- Log in to the Cloudera Manager Admin Console.
- Go to the HDFS File browser.
- Open the solr/upgrade_backup/localfs_backup file. The role_id is within this file.
- Copy the contents of the sub-directory to the identified host, into the location specified by the Solr Data Directory parameter (the default is /var/lib/solr). Log in to that host and run the following command:
sudo -u solr hdfs dfs -copyToLocal /user/solr/upgrade_backup/localfs_backup/<role_id> /var/lib/solr
- Start the Solr service.
Roll Back Hue
- Restore the app.reg file from your backup:
- Parcel installations:
rm -rf /opt/cloudera/parcels/CDH/lib/hue/app.reg
cp -rp app.reg_cm5_cdh5_backup /opt/cloudera/parcels/CDH/lib/hue/app.reg
- Package installations:
rm -rf /usr/lib/hue/app.reg
cp -rp app.reg_cm5_cdh5_backup /usr/lib/hue/app.reg
Roll Back Kafka
A CDH 6 cluster that is running Kafka can be rolled back to the previous CDH 5/CDK versions as long as the inter.broker.protocol.version and log.message.format.version properties have not been set to the new version or removed from the configuration.
- Activate the previous CDK parcel. Note that when rolling back Kafka from CDH 6 to CDH 5/CDK, the Kafka cluster restarts; rolling restart is not supported for this scenario. See Activating a Parcel.
- Remove the following properties from the Kafka Broker Advanced Configuration Snippet (Safety Valve) configuration property:
- inter.broker.protocol.version
- log.message.format.version
Roll Back Sqoop 2
- Add the Sqoop 2 service using Cloudera Manager.
- Restore the Sqoop 2 database from your backup. See the documentation for your database for details.
If you are not using the default embedded Derby database for Sqoop 2, restore the database you have configured for Sqoop 2. Otherwise, restore the repository subdirectory of the Sqoop 2 metastore directory from your backup. This location is specified with the Sqoop 2 Server Metastore Directory property. The default location is /var/lib/sqoop2. For this default location, Derby database files are located in /var/lib/sqoop2/repository.
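For the default Derby setup, the restore amounts to replacing the repository directory. A sketch, assuming the backup was saved as /var/lib/sqoop2_cm5cdh5_backup and that the service runs as the sqoop2 user:
# Replace the Derby repository with the pre-upgrade copy
sudo rm -rf /var/lib/sqoop2/repository
sudo cp -rp /var/lib/sqoop2_cm5cdh5_backup/repository /var/lib/sqoop2/
sudo chown -R sqoop2:sqoop2 /var/lib/sqoop2/repository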
Deploy the Client Configuration
- On the Home > Status tab, click the Actions menu to the right of the cluster name and select Deploy Client Configuration.
- Click Deploy Client Configuration to confirm.
Restart the Cluster
- On the Home > Status tab, click the Actions menu to the right of the cluster name and select Restart.
- Click Restart in the confirmation screen. If you have enabled high availability for HDFS, you can choose Rolling Restart instead to minimize cluster downtime. The Command Details window shows the progress of restarting services.
When All services successfully started appears, the task is complete and you can close the Command Details window.
Roll Back Cloudera Navigator Encryption Components
Roll Back Key Trustee Server
To roll back Key Trustee Server, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.
Roll Back Key HSM
- Install the version of Navigator Key HSM to which you wish to roll back.
Install the Navigator Key HSM package using yum:
sudo yum downgrade keytrustee-keyhsm
Cloudera Navigator Key HSM is installed to the /usr/share/keytrustee-server-keyhsm directory by default.
- Rename Previously-Created Configuration Files
For Key HSM major version rollbacks, previously created configuration files do not authenticate with the HSM and Key Trustee Server, so you must recreate these files by re-executing the setup and trust commands. First, navigate to the Key HSM installation directory and rename the application.properties, keystore, and truststore files:
cd /usr/share/keytrustee-server-keyhsm/
mv application.properties application.properties.bak
mv keystore keystore.bak
mv truststore truststore.bak
- Initialize Key HSM
Run the service keyhsm setup command in conjunction with the name of the target HSM distribution:
sudo service keyhsm setup [keysecure|thales|luna]
For more details, see Initializing Navigator Key HSM.
- Establish Trust Between Key HSM and the Key Trustee Server
The Key HSM service must explicitly trust the Key Trustee Server certificate (presented during TLS handshake). To establish this trust, run the following command:
sudo keyhsm trust /path/to/key_trustee_server/cert
For more details, see Establish Trust from Key HSM to Key Trustee Server.
- Start the Key HSM Service
Start the Key HSM service:
sudo service keyhsm start
- Establish Trust Between Key Trustee Server and Key HSM
Establish trust between the Key Trustee Server and the Key HSM by specifying the path to the private key and certificate:
sudo ktadmin keyhsm --server https://keyhsm01.example.com:9090 \
  --client-certfile /etc/pki/cloudera/certs/mycert.crt \
  --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust
For a password-protected Key Trustee Server private key, add the --passphrase argument to the command (enter the password when prompted):
sudo ktadmin keyhsm --passphrase \
  --server https://keyhsm01.example.com:9090 \
  --client-certfile /etc/pki/cloudera/certs/mycert.crt \
  --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust
For additional details, see Integrate Key HSM and Key Trustee Server.
- Remove Configuration Files From Previous Installation
After completing the rollback, remove the saved configuration files from the previous installation:
cd /usr/share/keytrustee-server-keyhsm/
rm application.properties.bak
rm keystore.bak
rm truststore.bak
Roll Back Key Trustee KMS Parcels
To roll back Key Trustee KMS parcels, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.
Roll Back Key Trustee KMS Packages
To roll back Key Trustee KMS packages:
- After setting up an internal repository (see Setting Up an Internal Repository), configure the Key Trustee KMS host to use the repository. See Configuring Hosts to Use the Internal Repository for more information.
- Downgrade the keytrustee-provider package using the appropriate command for your operating system:
RHEL-compatible
sudo yum downgrade keytrustee-keyprovider
Roll Back HSM KMS Parcels
To roll back the HSM KMS parcels, replace the currently used parcel (for example, the parcel for version 6.0.0) with the parcel for the version to which you wish to roll back (for example, version 5.14.0). See Parcels for detailed instructions on using parcels.
See Upgrading HSM KMS Using Packages for detailed instructions on using packages.
Roll Back HSM KMS Packages
- After setting up an internal repository (see Setting Up an Internal Repository), configure the HSM KMS host to use the repository. See Configuring Hosts to Use the Internal Repository for more information.
- Downgrade the keytrustee-provider package using the appropriate command for your operating system:
RHEL-compatible
sudo yum downgrade keytrustee-keyprovider
Roll Back Navigator Encrypt
To roll back Cloudera Navigator Encrypt:
- If you have configured and are using an RSA master key file with OAEP padding, you must revert this setting to its original value:
sudo navencrypt key --change
- Stop the Navigator Encrypt mount service:
sudo /etc/init.d/navencrypt-mount stop
- To fully downgrade Navigator Encrypt, manually downgrade all of the associated Navigator Encrypt packages, in the order listed (a RHEL example follows these steps):
- navencrypt
- navencrypt-kernel-module
- cloudera-navencryptfs-kmp-<kernel_flavor> for SLES
- libkeytrustee
- Restart the Navigator Encrypt mount service:
sudo /etc/init.d/navencrypt-mount start
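On a RHEL-compatible system, the package downgrade step above might look like the following; whether each downgrade succeeds independently depends on your repository contents and dependency state:
# Downgrade the Navigator Encrypt packages in the documented order
sudo yum downgrade navencrypt
sudo yum downgrade navencrypt-kernel-module
sudo yum downgrade libkeytrustee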
(Optional) Cloudera Manager Rollback Steps
After you complete the rollback steps, your cluster is using Cloudera Manager 6 to manage your CDH 5 cluster. You can continue to use Cloudera Manager 6 to manage your CDH 5 cluster, or you can downgrade to Cloudera Manager 5 by following these steps:
Stop Cloudera Manager
- Stop the Cloudera Management Service.
- Log in to the Cloudera Manager Admin Console.
- Select Clusters > Cloudera Management Service.
- Select Actions > Stop.
- Stop the Cloudera Manager Server.
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl stop cloudera-scm-server
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-server stop
- Hard stop the Cloudera Manager agents. Run the following command on all hosts:
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl stop supervisord
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-agent hard_stop
- Back up the repository directory. You can create a top-level backup directory and an environment variable to reference the directory using the following commands. You can also
substitute another directory path in the backup commands below:
export CM_BACKUP_DIR="`date +%F`-CM"
mkdir -p $CM_BACKUP_DIR
- Back up the existing repository directory.
- RHEL / CentOS:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/yum.repos.d
- SLES:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/zypp/repos.d
- Debian / Ubuntu:
sudo -E tar -cf $CM_BACKUP_DIR/repository.tar /etc/apt/sources.list.d
Restore the Cloudera Manager 5 Repository Files
Copy the repository directory from the backup taken before upgrading to Cloudera Manager 6.x.
rm -rf /etc/yum.repos.d/*
cp -rp /etc/yum.repos.d_cm5cdh5/* /etc/yum.repos.d/
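The command above covers RHEL-compatible systems; the equivalents for the other platforms would be as follows, assuming the backup directory names follow the same convention:
# SLES
rm -rf /etc/zypp/repos.d/*
cp -rp /etc/zypp/repos.d_cm5cdh5/* /etc/zypp/repos.d/
# Debian / Ubuntu
rm -rf /etc/apt/sources.list.d/*
cp -rp /etc/apt/sources.list.d_cm5cdh5/* /etc/apt/sources.list.d/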
Restore Packages
- Run the following commands on all hosts:
- RHEL:
sudo yum remove cloudera-manager-daemons cloudera-manager-agent
sudo yum clean all
sudo yum install cloudera-manager-agent
- SLES:
sudo zypper remove cloudera-manager-daemons cloudera-manager-agent
sudo zypper refresh -s
sudo zypper install cloudera-manager-agent
- Debian / Ubuntu:
sudo apt-get purge cloudera-manager-daemons cloudera-manager-agent
sudo apt-get update
sudo apt-get install cloudera-manager-agent
- Run the following commands on the Cloudera Manager server host:
- RHEL:
sudo yum remove cloudera-manager-server
sudo yum install cloudera-manager-server
- SLES:
sudo zypper remove cloudera-manager-server
sudo zypper install cloudera-manager-server
- Debian / Ubuntu:
sudo apt-get purge cloudera-manager-server
sudo apt-get install cloudera-manager-server
Restore Cloudera Manager Databases
Restore the Cloudera Manager databases from the backup of Cloudera Manager that was taken before upgrading to Cloudera Manager 6. See the procedures provided by your database vendor.
- Cloudera Manager Server
- Reports Manager
- Navigator Audit Server
- Navigator Metadata Server
- Activity Monitor (Only used for MapReduce 1 monitoring).
- MariaDB 5.5: http://mariadb.com/kb/en/mariadb/backup-and-restore-overview/
- MySQL 5.5: http://dev.mysql.com/doc/refman/5.5/en/backup-and-recovery.html
- MySQL 5.6: http://dev.mysql.com/doc/refman/5.6/en/backup-and-recovery.html
- PostgreSQL 8.4: https://www.postgresql.org/docs/8.4/static/backup.html
- PostgreSQL 9.2: https://www.postgresql.org/docs/9.2/static/backup.html
- PostgreSQL 9.3: https://www.postgresql.org/docs/9.3/static/backup.html
- Oracle 11gR2: http://docs.oracle.com/cd/E11882_01/backup.112/e10642/toc.htm
- HyperSQL: http://hsqldb.org/doc/guide/management-chapt.html#mtc_backup
For example, to restore the Cloudera Manager Server database (cm) on MySQL:
mysql -u username -ppassword --host=hostname cm < backup.sql
Restore Cloudera Manager Server
Use the backup of Cloudera Manager 5.x taken before upgrading to Cloudera Manager 6.x for the following steps:
- If you used the backup commands provided in Backing Up Cloudera Manager, extract the Cloudera Manager 5 backup archives you created:
tar -xf CM5CDH5/cloudera-scm-agent.tar -C CM5CDH5/
tar -xf CM5CDH5/cloudera-scm-server.tar -C CM5CDH5/
- On the host where the Event Server role is configured to run, restore the Event Server directory from the Cloudera Manager 5 backup:
rm -rf /var/lib/cloudera-scm-eventserver/*
cp -rp /var/lib/cloudera-scm-eventserver_cm5cdh5/* /var/lib/cloudera-scm-eventserver/
- Remove the Agent runtime state. Run the following command on all hosts:
rm -rf /var/run/cloudera-scm-agent /var/lib/cloudera-scm-agent/response.avro
- On the host where the Service Monitor is running, restore the Service Monitor directory:
rm -rf /var/lib/cloudera-service-monitor/*
cp -rp /var/lib/cloudera-service-monitor_cm5cdh5/* /var/lib/cloudera-service-monitor/
- On the host where the Host Monitor is running, restore the Host Monitor directory:
rm -rf /var/lib/cloudera-host-monitor/*
cp -rp /var/lib/cloudera-host-monitor_cm5cdh5/* /var/lib/cloudera-host-monitor/
- Restore the Cloudera Navigator Solr storage directory from the CM 5/CDH 5 backup:
rm -rf /var/lib/cloudera-scm-navigator/*
cp -rp /var/lib/cloudera-scm-navigator_cm5cdh5/* /var/lib/cloudera-scm-navigator/
- On the Cloudera Manager Server, restore the /etc/cloudera-scm-server/db.properties file.
rm -rf /etc/cloudera-scm-server/db.properties
cp -rp cm5cdh5/etc/cloudera-scm-server/db.properties /etc/cloudera-scm-server/db.properties
- On each host in the cluster, restore the /etc/cloudera-scm-agent/config.ini file from your backup.
rm -rf /etc/cloudera-scm-agent/config.ini
cp -rp cm5cdh5/etc/cloudera-scm-agent/config.ini /etc/cloudera-scm-agent/config.ini
Start the Cloudera Manager Server and Agents
- Start the Cloudera Manager Server.
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo systemctl start cloudera-scm-server
If the Cloudera Manager Server starts without errors, no response displays.
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-server start
You should see the following:
Starting cloudera-scm-server: [ OK ]
- Hard Restart the Cloudera Manager Agent.
- RHEL 7, SLES 12, Debian 8, Ubuntu 16.04 and higher:
sudo /etc/init.d/cloudera-scm-agent next_stop_hard
sudo systemctl stop cloudera-scm-agent
sudo systemctl start cloudera-scm-agent
- RHEL 5 or 6, SLES 11, Debian 6 or 7, Ubuntu 12.04 or 14.04:
sudo service cloudera-scm-agent hard_restart
- Start the Cloudera Management Service.
- Log in to the Cloudera Manager Admin Console.
- Select Clusters > Cloudera Management Service.
- Select Actions > Start.