Procedure to Rollback from CDP 7.1.9

Perform the following procedure to roll back your cluster from CDP 7.1.9 to 7.1.8, 7.1.7 SP2, or 7.1.7 SP1.

Rollback

Rollback restores the software to the prior release and restores the data and metadata to the pre-upgrade state. Service interruptions are expected because the cluster must be halted. After HDFS and/or Ozone is finalized, rollback is no longer possible.

The following components do not support Rollback. If these components are in use, manual restore or recovery may be necessary. Work with Cloudera Support to devise a plan based on the deployed components.
  • Ozone
  • Kafka - It may be possible to restore from a replicated cluster if you are using Streams Replication Manager.

Pre-rollback steps

Ozone

This procedure is applicable only if you are downgrading from CDP 7.1.9 to CDP 7.1.8.

  1. Stop the Ozone Recon Web UI. Within Cloudera Manager UI, navigate to the Ozone service > Ozone Recon > Actions > Stop this Ozone Recon.
  2. Navigate to Configuration within the Ozone service and collect the value of ozone.recon.db.dir (default value is /var/lib/hadoop-ozone/recon/data).
  3. SSH to the Ozone Recon Web UI host and move the ozone.recon.db.dir parent directory to a backup location:
    mv /var/lib/hadoop-ozone/recon /var/lib/hadoop-ozone/recon-backup-CDP
HBase

Stop the HBase Master(s). Execute kinit as the hbase user if Kerberos is enabled.

  1. Stop Omid within Cloudera Manager UI.
  2. Navigate to the HBase service > Instances within Cloudera Manager UI and note the hostname of the HBase Master instance(s). Log in to the host(s) and execute the following:
    hbase master stop --shutDownCluster
  3. Stop the remaining HBase components. Navigate to the HBase service within Cloudera Manager UI > Actions > Stop.
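For a Kerberized cluster, a minimal sketch of the kinit and Master shutdown steps follows; the keytab path and principal are placeholders that vary by deployment:

    # Authenticate as the hbase user (keytab path and principal are hypothetical)
    kinit -kt /path/to/hbase.keytab hbase/$(hostname -f)@EXAMPLE.COM
    # Shut down the entire HBase cluster through the active Master
    hbase master stop --shutDownCluster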

The following steps must be performed when downgrading from 7.1.9 to 7.1.7 SP2. You will need to kinit as the hbase user if Kerberos is enabled.

  1. Contact support for the appropriate hbck2 jar.
  2. Execute a dry run of the shortenTableinfo command and validate that the appropriate files have been identified:
    hbase --config /etc/hbase/conf hbck -j hbase-hbck2-X.Y.Z.jar shortenTableinfo
  3. Run the shortenTableinfo -fix command to fix the file format:
    hbase --config /etc/hbase/conf hbck -j hbase-hbck2-X.Y.Z.jar shortenTableinfo -fix
Cruise Control

The following steps are applicable only if you downgrade from 7.1.9 to 7.1.7 SP2, not if you downgrade from 7.1.9 to 7.1.8. You can skip this section if the Cruise Control Goal configurations were set to the default values before the upgrade. However, if the Cruise Control Goal configuration values were changed before the upgrade, you must proceed with this section.

During the downgrade process, you must rename com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareDistributionGoal to com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal in every occurrence in the Cloudera Manager > Clusters > Cruise Control > Configurations tab, as described below.

In Cruise Control, from 7.1.8 and higher, com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal is renamed to com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareDistributionGoal.

Perform the following steps:

  1. Check the following goal sets to see whether RackAwareDistributionGoal is present (Cloudera Manager > Clusters > Cruise Control > Configurations tab):
    1. default.goals
    2. goals
    3. self.healing.goals
    4. hard.goals
    5. anomaly.detection.goals
  2. Make a note of where RackAwareDistributionGoal was present.
  3. Remove RackAwareDistributionGoal from all of the goal lists (see the example after these steps).
  4. Perform the runtime downgrade process
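As an illustration, a default.goals value before and after removing the goal might look like the following (the surrounding goals are examples only; your list will differ):

    # Before (RackAwareDistributionGoal present):
    com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal
    # After (RackAwareDistributionGoal removed):
    com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal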

Stop the Cluster

  1. On the Home > Status tab, click the Actions menu and select Stop.
  2. Click Stop in the confirmation screen. The Command Details window shows the progress of the stopping process.

    When the All services successfully stopped message appears, the task is complete and you can close the Command Details window.

Rolling back the Runtime parcel

  1. Navigate to Parcels within Cloudera Manager.
  2. Locate the CDP Private Cloud Base 7.1.7 SP2/7.1.8 parcel and click Upgrade.
  3. Follow the wizard and address any issues from the various inspectors.

The upgrade activates the parcel and restarts services.

Restore CDH Databases

Restore the following databases from the CDH backups. Follow the order below and restore only a single service at a time. Stop the service before restoring its database, and start the service after the restore completes.
  • Ranger
  • Ranger KMS
  • Stream Messaging Manager
  • Schema Registry
  • Hue (only if you are rolling back to 7.1.7 SP2)

The steps for backing up and restoring databases differ depending on the database vendor and version you select for your cluster and are beyond the scope of this document.
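As an illustration only, a minimal sketch for a MySQL-backed Ranger database follows; the database name, credentials, and dump file are placeholders, and your database vendor's documented procedure takes precedence:

    # Stop the Ranger service in Cloudera Manager before running this (MySQL/MariaDB assumed)
    mysql -u root -p ranger < ranger-backup-pre-upgrade.sql
    # Start the Ranger service in Cloudera Manager after the restore completes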

Roll Back Cloudera Navigator Encryption Components

If you are rolling back any encryption components (Key Trustee Server, Key Trustee KMS, HSM KMS, Key HSM, or Navigator Encrypt), first review the sections below for the components deployed in your cluster.

Roll Back Key Trustee Server

To roll back Key Trustee Server, replace the currently used parcel (for example, the parcel for version 7.1.9) with the parcel for the version to which you wish to roll back (for example, version 7.1.8). See Parcels for detailed instructions on using parcels.

Key Trustee Server 7.1.9 upgrades the bundled Postgres engine from version 12.1 to 14.2. The upgrade happens automatically; however, downgrading to CDP 7.1.8 requires manual steps to roll back the database engine to version 12.1. Because the previously upgraded database is left unchanged, the database server will fail to start. Follow these steps to recreate a Postgres 12.1 compatible database:
  1. Open the Cloudera Manager Admin Console and go to the Key Trustee Server service. If you see that Key Trustee Server has stale configurations, click the yellow or blue button and follow the prompts.
  2. Make sure that the Key Trustee Server database roles are stopped. Then rename the folder containing the Key Trustee Postgres database data (on both master and slave hosts):
    mv /var/lib/keytrustee/db /var/lib/keytrustee/db-14_2
  3. Open the Cloudera Manager Admin Console and go to the Key Trustee Server service.
  4. Select the Instances tab.
  5. Select the Active Database role type.
  6. Click Actions for Selected > Set Up the Key Trustee Server Database.
  7. Click Set Up the Key Trustee Server Database to confirm.

    Cloudera Manager sets up the Key Trustee Server database.

  8. Start the PostgreSQL server:
    sudo ktadmin db --start --pg-rootdir /var/lib/keytrustee/db --background
  9. On the master KTS node, running as the keytrustee user, restore the keytrustee database on the active host by running the following commands:
    sudo -su keytrustee
    export PATH=$PATH:/opt/cloudera/parcels/KEYTRUSTEE_SERVER/PG_DB/opt/postgres/12.1/bin/
    export LD_LIBRARY_PATH=/opt/cloudera/parcels/KEYTRUSTEE_SERVER/PG_DB/opt/postgres/12.1/lib/
    unzip -p keytrustee-db.zip | psql -p 11381 -d keytrustee
    If the zip file is encrypted, you are prompted for the password to decrypt the file.
  10. Restore the Key Trustee Server configuration directory on the active host:
    su - keytrustee
    cd /var/lib/keytrustee
    unzip keytrustee-conf.zip
    If the zip file is encrypted, you are prompted for the password to decrypt the file.
  11. Stop the PostgreSQL server: change the login user to root and run the command:
    sudo ktadmin db --stop --pg-rootdir /var/lib/keytrustee/db
  12. Remove the backup files (keytrustee-db.zip and keytrustee-conf.zip) from the Key Trustee Server host:
    su - keytrustee
    cd /var/lib/keytrustee
    rm keytrustee-conf.zip
    rm keytrustee-db.zip
  13. Start the Active Database role in Cloudera Manager by clicking Actions for Selected > Start.
  14. Click Start to confirm.
  15. Select the Active Database.
  16. Click Actions for Selected > Setup Enable Synchronous Replication in HA mode.
  17. Start the Passive Database instance: select the Passive Database, click Actions for Selected > Start.
  18. In the Cloudera Manager Admin Console, start the active KTS instance.
  19. In the Cloudera Manager Admin Console, start the passive KTS instance.

Start the Key Management Server

Restart the Key Management Server. Open the Cloudera Manager Admin Console, go to the KMS service page, and select Actions > Start.

Roll Back Key HSM

To roll back Key HSM:
  1. Install the version of Navigator Key HSM to which you wish to roll back
    Install the Navigator Key HSM package using yum:
    sudo yum downgrade keytrustee-keyhsm

    Cloudera Navigator Key HSM is installed to the /usr/share/keytrustee-server-keyhsm directory by default.

  2. Rename Previously-Created Configuration Files

    For Key HSM major version rollbacks, previously-created configuration files do not authenticate with the HSM and Key Trustee Server, so you must recreate these files by re-executing the setup and trust commands. First, navigate to the Key HSM installation directory and rename the application.properties, keystore, and truststore files:

    cd /usr/share/keytrustee-server-keyhsm/
    mv application.properties application.properties.bak
    mv keystore keystore.bak
    mv truststore truststore.bak
  3. Initialize Key HSM
    Run the service keyhsm setup command in conjunction with the name of the target HSM distribution:
    sudo service keyhsm setup [keysecure|thales|luna]

    For more details, see Initializing Navigator Key HSM.

  4. Establish Trust Between Key HSM and the Key Trustee Server
    The Key HSM service must explicitly trust the Key Trustee Server certificate (presented during TLS handshake). To establish this trust, run the following command:
    sudo keyhsm trust /path/to/key_trustee_server/cert

    For more details, see Establish Trust from Key HSM to Key Trustee Server.

  5. Start the Key HSM Service
    Start the Key HSM service:
    sudo service keyhsm start
  6. Establish Trust Between Key Trustee Server and Key HSM
    Establish trust between the Key Trustee Server and the Key HSM by specifying the path to the private key and certificate:
    sudo ktadmin keyhsm --server https://keyhsm01.example.com:9090 \
           --client-certfile /etc/pki/cloudera/certs/mycert.crt \
           --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust
    For a password-protected Key Trustee Server private key, add the --passphrase argument to the command (enter the password when prompted):
    sudo ktadmin keyhsm --passphrase \
             --server https://keyhsm01.example.com:9090 \
             --client-certfile /etc/pki/cloudera/certs/mycert.crt \
             --client-keyfile /etc/pki/cloudera/certs/mykey.key --trust

    For additional details, see Integrate Key HSM and Key Trustee Server.

  7. Remove Configuration Files From Previous Installation
    After completing the rollback, remove the saved configuration files from the previous installation:
    cd /usr/share/keytrustee-server-keyhsm/
    rm application.properties.bak
    rm keystore.bak
    rm truststore.bak

Roll Back Navigator Encrypt

To roll back Cloudera Navigator Encrypt:
  1. If you have configured and are using an RSA master key file with OAEP padding, then you must revert this setting to its original value:
    navencrypt key --change
  2. Stop the Navigator Encrypt mount service:
    sudo /etc/init.d/navencrypt-mount stop
  3. Confirm that the mount-stop command completed:
    sudo /etc/init.d/navencrypt-mount status
  4. If rolling back to a release lower than NavEncrypt 6.2:
    1. Print the existing ACL rules and save the output to a file (a sketch for the delete step appears after this procedure):
      sudo navencrypt acl --print
      vim acls.txt
    2. Delete all existing ACLs; for example, if there are a total of 7 ACL rules, run:
      sudo navencrypt acl --del --line=1,2,3,4,5,6,7
  5. To fully downgrade Navigator Encrypt, manually downgrade all of the associated Navigator Encrypt packages (in the order listed):
    1. navencrypt
    2. (Only required for operating systems other than SLES) navencrypt-kernel-module
    3. (Only required for the SLES operating system) cloudera-navencryptfs-kmp-<kernel_flavor>
      Note: Replace kernel_flavor with the kernel flavor for your system. Navigator Encrypt supports the default, xen, and ec2 kernel flavors.
    4. libkeytrustee
  6. If rolling back to a release lower than NavEncrypt 6.2:
    1. Reapply the ACL rules:
      sudo navencrypt acl --add --file=acls.txt
  7. Recompute process signatures:
    sudo navencrypt acl --update
  8. Restart the Navigator Encrypt mount service:
    sudo /etc/init.d/navencrypt-mount start
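For the ACL deletion in step 4, a minimal sketch of building the --line list programmatically (assumes GNU coreutils seq; the rule count of 7 matches the example above):

    # Expands to --line=1,2,3,4,5,6,7; adjust the upper bound to your rule count
    sudo navencrypt acl --del --line=$(seq -s, 1 7)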

Rollback ZooKeeper

Rollback of the data and service is not expected to be necessary unless dependent services are being rolled back. If it is determined that a ZooKeeper rollback is necessary, the steps are:
  1. Stop ZooKeeper
  2. Restore the data backup. For example: cp -rp /var/lib/zookeeper-backup-pre-upgrade-CM-CDH /var/lib/zookeeper/
  3. Start ZooKeeper
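A minimal sketch of step 2, copying the backup contents into the ZooKeeper data directory; the backup path follows the example above, and the zookeeper:zookeeper ownership is an assumption to verify in your environment:

    # Restore the pre-upgrade ZooKeeper data
    cp -rp /var/lib/zookeeper-backup-pre-upgrade-CM-CDH/* /var/lib/zookeeper/
    # Ensure ZooKeeper can read the restored files (ownership is an assumption)
    chown -R zookeeper:zookeeper /var/lib/zookeeper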

Rollback HDFS

You cannot roll back HDFS while high availability is enabled. The rollback procedure in this topic creates a temporary configuration without high availability. Regardless of whether high availability is enabled, follow the steps in this section.

  1. Roll back all of the NameNodes. Use the NameNode backup directory you created before upgrading to CDP Private Cloud Base (/etc/hadoop/conf.rollback.namenode) to perform the following steps on all NameNode hosts:
    1. Start the JournalNodes using Cloudera Manager:
      1. Go to the HDFS service.
      2. Select the Instances tab.
      3. Select all JournalNode roles from the list.
      4. Click Actions for Selected > Start.
    2. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file on all NameNode hosts (located in the temporary rollback directory) and update the keystore passwords with the actual cleartext passwords. The passwords will have values that look like this:
      <property>
        <name>ssl.server.keystore.password</name>
        <value>********</value>
      </property>
      <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>********</value>
      </property>
    3. (TLS only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file and remove the hadoop.security.credential.provider.path property.
    4. (TLS only) Edit the /etc/hadoop/conf.rollback.namenode/ssl-server.xml file and change the value of ssl.server.keystore.location to /etc/hadoop/conf.rollback.namenode/cm-auto-host_keystore.jks
    5. Edit the /etc/hadoop/conf.rollback.namenode/core-site.xml file and change the value of the net.topology.script.file.name property to the topology.py file in /etc/hadoop/conf.rollback.namenode. For example:
      # Original property
      <property>
      <name>net.topology.script.file.name</name>
      <value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/topology.py</value>
      </property>
      # New property
      <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/conf.rollback.namenode/topology.py</value>
      </property>
    6. Edit the /etc/hadoop/conf.rollback.namenode/topology.py file and change the value of DATA_FILE_NAME to the topology.map file in /etc/hadoop/conf.rollback.namenode. For example:
      DATA_FILE_NAME = '/etc/hadoop/conf.rollback.namenode/topology.map'
    7. (TLS-enabled clusters only) Run the following command:
      sudo -u hdfs kinit hdfs/<NameNode Host name> -l 7d -kt /etc/hadoop/conf.rollback.namenode/hdfs.keytab
    8. Start the active NameNode with the following command:
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.namenode namenode -rollingUpgrade rollback
    9. Log in to the other NameNode and start the standby NameNode (SBNN) with the following command:
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.namenode namenode -bootstrapStandby
    10. Answer Yes when prompted. The process exits after it completes.
    11. Press Ctrl+C on the active NameNode to exit the process.
  2. Roll back the DataNodes.
    Use the DataNode rollback directory you created before upgrading to CDP Private Cloud Base (/etc/hadoop/conf.rollback.datanode) to perform the following steps on all DataNode hosts:
    1. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file on all DataNode hosts (located in the temporary rollback directory) and update the keystore passwords (ssl.server.keystore.password and ssl.server.keystore.keypassword) with the actual passwords.
      The passwords will have values that look like this:
      <property>
        <name>ssl.server.keystore.password</name>
        <value>********</value>
      </property>
      <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>********</value>
      </property>
    2. (TLS only) Edit the /etc/hadoop/conf.rollback.datanode/ssl-server.xml file, remove the hadoop.security.credential.provider.path property, and change the value of the ssl.server.keystore.location property to /etc/hadoop/conf.rollback.datanode/cm-auto-host_keystore.jks.
    3. Edit the /etc/hadoop/conf.rollback.datanode/hdfs-site.xml file and remove the dfs.datanode.max.locked.memory property.
    4. If Kerberos is enabled on the cluster, change the value of hdfs.keytab in core-site.xml and hdfs-site.xml to the absolute path of the conf.rollback.datanode folder.

    5. Run one of the following commands:
      • If the DataNode is running with privileged ports (usually 1004 and 1006):
        cd /etc/hadoop/conf.rollback.datanode
        export HADOOP_SECURE_DN_USER=hdfs
        export JSVC_HOME=/opt/cloudera/parcels/<parcel_filename>/lib/bigtop-utils
        hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback
      • If the DataNode is not running on privileged ports:
        sudo hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback

      When the rollback of the DataNodes is complete, terminate the console session by typing Ctrl+C. Look for output from the command similar to the following, which indicates that the DataNode rollback is complete:

      Rollback of /dataroot/ycloud/dfs/dn/current/BP-<Block Group number> is complete
      You may see the following error after issuing these commands:
      ERROR datanode.DataNode: Exception in secureMain java.io.IOException: 
               The path component: '/var/run/hdfs-sockets' in '/var/run/hdfs-sockets/dn' has permissions 0755 uid 39998 and gid 1006. 
               It is not protected because it is owned by a user who is not root and not the effective user: '0'.
      The error message will also include the following command to run:
      chown root /var/run/hdfs-sockets 
      After running this command, the DataNode will restart successfully. Rerun the DataNode rollback command:
      sudo hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback
    6. If High Availability for HDFS is enabled, restart the HDFS service. In the Cloudera Manager Admin Console, go to the HDFS service and select Actions > Restart.
    7. If high availability is not enabled for HDFS, use the Cloudera Manager Admin Console to restart all NameNodes and DataNodes.
      1. Go to the HDFS service.
      2. Select the Instances tab
      3. Select all DataNode and NameNode roles from the list.
      4. Click Actions for Selected > Restart.
  3. If high availability is not enabled for HDFS, roll back the Secondary NameNode.
    1. (Clusters with TLS enabled only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file on all Secondary NameNode hosts (located in the temporary rollback directory) and update the keystore passwords with the actual cleartext passwords. The passwords will have values that look like this:
      <property>
        <name>ssl.server.keystore.password</name>
        <value>********</value>
      </property>
      <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>********</value>
      </property>
    2. (TLS only) Edit the /etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml file, remove the hadoop.security.credential.provider.path property, and change the value of the ssl.server.keystore.location property to /etc/hadoop/conf.rollback.secondarynamenode/cm-auto-host_keystore.jks.
    3. Log in to the Secondary NameNode host and run the following commands:
      rm -rf /dfs/snn/*
      cd /etc/hadoop/conf.rollback.secondarynamenode/
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.secondarynamenode secondarynamenode -format

      When the rollback of the Secondary NameNode is complete, terminate the console session by typing Ctrl+C. Look for output from the command similar to the following, which indicates that the Secondary NameNode rollback is complete:

      2020-12-21 17:09:36,239 INFO namenode.SecondaryNameNode: Web server init done
              
  4. Restart the HDFS service. Open the Cloudera Manager Admin Console, go to the HDFS service page, and select Actions > Restart.

    The Restart Command page displays the progress of the restart. Wait for the page to display the Successfully restarted service message before continuing.

For more information on HDFS, see HDFS troubleshooting documentation.

Start HBase

You might encounter errors when starting HBase (for example, replication-related problems, region assignment issues, and meta region assignment problems). In this case, you must delete the znode in ZooKeeper and then start HBase again. (This deletes the replication peer information, so you will need to re-configure your replication schedules.)

  1. In Cloudera Manager, look up the value of the zookeeper.znode.parent property. The default value is /hbase.
  2. Connect to the ZooKeeper ensemble by running the following command from any HBase gateway host:
    zookeeper-client -server zookeeper_ensemble

    To find the value to use for zookeeper_ensemble, open the /etc/hbase/conf.cloudera.<HBase service name>/hbase-site.xml file on any HBase gateway host. Use the value of the hbase.zookeeper.quorum property.

  3. Specify the jaas.conf using the JVM flags by running the following commands in the ZooKeeper client:
    CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/var/run/cloudera-scm-agent/process/<HBase_process_directory>/jaas.conf"
    zookeeper-client -server <zookeeper_ensemble>

    The ZooKeeper command-line interface opens.

  4. Enter the following command.
    rmr /hbase

    If you have deployed a secure cluster, enter the following command:

    deleteall /hbase

    If you see the message Node not empty: /hbase/tokenauth, you must re-run the same command and restart the HBase service.

  5. Restart the HBase service.

After HBase is healthy, ensure that you restore the states of the Balancer and Normalizer (enable them if they were enabled before the rollback). Also re-enable the Merge and Split operations you disabled before the rollback to avoid the Master Procedure incompatibility problem.

Run the following commands in HBase Shell:

balance_switch true
normalizer_switch true
splitormerge_switch 'SPLIT', true
splitormerge_switch 'MERGE', true
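
A minimal sketch for running these commands non-interactively (assumes the hbase shell -n option available in recent HBase releases):

    printf "balance_switch true\nnormalizer_switch true\nsplitormerge_switch 'SPLIT', true\nsplitormerge_switch 'MERGE', true\n" | hbase shell -n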

Rollback Solr

  1. Start the HDFS, ZooKeeper, and Ranger services.
  2. Start the Solr service.
  3. Restart Lily HBase Indexer (ks_indexer).

Rollback Atlas

Rollback Atlas Solr Collections
Atlas has several collections in Solr that must be restored from the pre-upgrade backup: vertex_index, edge_index, and fulltext_index. These collections may already have been restored using the Rollback Solr documentation. If not, restore the collections now using that documentation.
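If the collections still need to be restored, a minimal sketch using solrctl follows; the backup name, location, and request ID are placeholders, and the Rollback Solr documentation remains the authoritative procedure:

    # Restore one Atlas collection from a pre-upgrade Solr backup (repeat for edge_index and fulltext_index)
    solrctl collection --restore vertex_index -l /solr-backups -b vertex_index_backup -i restore_vertex_1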
Rollback Atlas HBase Tables
  1. From a client host, start the HBase shell:
    hbase shell
  2. Within the HBase shell, list the snapshots and verify that the pre-upgrade snapshots are present:
    list_snapshots
  3. Within the HBase shell, disable the atlas_janus table, restore the snapshot, and enable the table:

    disable 'atlas_janus'

    restore_snapshot '<name of atlas_janus snapshot from list_snapshots>'

    enable 'atlas_janus'

  4. Within the HBase shell, disable the ATLAS_ENTITY_AUDIT_EVENTS table, restore the snapshot, and enable the table:

    disable 'ATLAS_ENTITY_AUDIT_EVENTS'

    restore_snapshot '<name of ATLAS_ENTITY_AUDIT_EVENTS snapshot from list_snapshots>'

    enable 'ATLAS_ENTITY_AUDIT_EVENTS'

  5. Restart Atlas.

Rollback Kudu

Rollback depends on which backup method was used. There are two forms of backup and restore in Kudu: a Spark job that creates or restores a full or incremental backup, or a backup of the entire Kudu node that is restored later. Restoring the Kudu data differs between the Spark job and full-node backup approaches.

See the Kudu backup and restore documentation for data recovery steps. An OS-level restore is not recommended if a backup exists.
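For the Spark-based approach, a minimal sketch of a restore job follows; the jar path, master address, root path, and table name are placeholders, and the Kudu backup and restore documentation is authoritative:

    # Restore a table from a Spark-based Kudu backup (all values are placeholders)
    spark-submit --class org.apache.kudu.backup.KuduRestore \
      /opt/cloudera/parcels/CDH/lib/kudu/kudu-backup2_2.11.jar \
      --kuduMasterAddresses master1.example.com:7051 \
      --rootPath hdfs:///kudu-backups \
      my_table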

Rollback YARN Queue Manager

You can rollback YARN Queue Manager using the pre-upgrade backup of config-service.mv.db and config-service.trace.db.

  1. Navigate to the YARN Queue Manager service in Cloudera Manager and record the configuration value for config_service_db_loc (or queuemanager_user_home_dir if blank) and the host where the YARN Queue Manager Store is running.
  2. Stop the YARN Queue Manager service.
  3. SSH to the YARN Queue Manager Store host and copy the pre-upgrade config-service.mv.db and config-service.trace.db to the config_service_db_loc obtained in the previous step.
  4. Start the YARN Queue Manager service.
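A minimal sketch of step 3; the backup location is a placeholder, and the destination assumes the default queuemanager_user_home_dir of /var/lib/hadoop-yarn (use the config_service_db_loc recorded in step 1 if it differs):

    # Copy the pre-upgrade Queue Manager databases back into place (paths are placeholders)
    cp -p /backup/queuemanager/config-service.mv.db /var/lib/hadoop-yarn/config-service.mv.db
    cp -p /backup/queuemanager/config-service.trace.db /var/lib/hadoop-yarn/config-service.trace.db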

Deploy the Client Configuration

  1. On the Cloudera Manager Home page, click the Actions menu and select Deploy Client Configuration.
  2. Click Deploy Client Configuration.

Restart the Cluster

You must restart the cluster using the following steps.

  1. On the Cloudera Manager Home page, click the Actions menu and select Restart.
  2. Click Restart that appears in the next screen to confirm. If you have enabled high availability for HDFS, you can choose Rolling Restart instead to minimize cluster downtime. The Command Details window shows the progress of stopping services.

    When All services successfully started appears, the task is complete and you can close the Command Details window.

Post-rollback steps

Streams Replication Manager (SRM)
Reset the state of the internal Kafka Streams application. Run the following command on the hosts of the SRM Service role:
kafka-streams-application-reset \
        --bootstrap-servers [***SRM SERVICE HOST***] \
        --config-file [***PROPERTIES FILE***] \
        --application-id srm-service_v2

Replace [***PROPERTIES FILE***] with the location of a configuration file that contains all necessary security properties required to establish a connection with the Kafka service. This option is required only if your Kafka service is secured.
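As an illustration, a [***PROPERTIES FILE***] for a Kerberos- and TLS-secured Kafka might contain entries like the following (all values are placeholders for your security setup):

    security.protocol=SASL_SSL
    sasl.kerberos.service.name=kafka
    ssl.truststore.location=/etc/security/truststore.jks
    ssl.truststore.password=changeit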

Cruise Control
  1. Add the removed goal back to the relevant goal sets, using the renamed goal RackAwareGoal (not RackAwareDistributionGoal).
  2. Restart Cruise Control.
Oozie
Execute the Install Oozie ShareLib action through Cloudera Manager:
  1. Go to the Oozie service.
  2. Select Actions > Install Oozie ShareLib.
Finalize the HDFS Upgrade
This step should be performed only after all validation is completed. For more information, see Finalize the HDFS upgrade documentation.