Rolling Back a Cloudera Private Cloud Base Upgrade from version 7.1.8 to
CDH 6
You can roll back an upgrade from Cloudera Private Cloud Base 7 to CDH
6. The rollback restores your CDH cluster to the state it was in before the upgrade, including
Kerberos and TLS/SSL configurations.
In a typical upgrade, you first upgrade Cloudera Manager from version 6.x to version 7.x, and
then you use the upgraded version of Cloudera Manager 7 to upgrade CDH 6 to Cloudera Private Cloud Base 7. (See Upgrading a CDH 6 Cluster.)
If you want to roll back this upgrade, follow these steps to roll back your cluster to its
state prior to the upgrade.
You can roll back to CDH 6 after upgrading to Cloudera Private Cloud Base 7
only if the HDFS upgrade has not been finalized.
Review Limitations
The rollback procedure has the following limitations:
HDFS – If you have
finalized the HDFS upgrade, you cannot roll back your
cluster.
Compute clusters – Rollback for Compute clusters is not
supported. You must remove any compute clusters before rolling back.
Configuration changes, including the addition of new
services or roles after the upgrade, are not retained after rolling
back Cloudera Manager.
Cloudera recommends that you not make
configuration changes or add new services and roles until you have
finalized the HDFS upgrade and no longer require the option to
roll back your upgrade.
HBase – If your cluster is
configured to use HBase replication, data written to HBase after the
upgrade might not be replicated to peers when you start your
rollback. This topic does not describe how to determine which, if
any, peers have the replicated data and how to roll back that data.
For more information about HBase replication, see HBase Replication.
Sqoop 2 – As described in the upgrade process, Sqoop 2 had to be
stopped and deleted before the upgrade and therefore is not
available after the rollback.
Kafka – Once the Kafka log
format and protocol version configurations (the
inter.broker.protocol.version and
log.message.format.version properties) are set to
the new version (or left blank, which means to use the latest
version), Kafka rollback is not possible.
Stop the Cluster
If HBase is deployed in the cluster
do the following before stopping the cluster:
The HBase Master procedures changed between the two versions: if a
procedure was started by HBase 2.2 (CDP 7.x), the older HBase 2.1
cannot continue that procedure after the rollback. For this reason,
the Procedure Store of the HBase Master must be clean before the
rollback. If the CDP 7.x HBase Master was never started, the rollback
should be fine. But if the HBase Master was running on the new version
and any ongoing (or stuck) HBase Master procedure is present in the
CDP 7 HBase Master, the older CDH 6 HBase Master will fail to start
after the rollback. If this happens, HBase requires a manual fix after
the rollback (for example, sidelining the HBase Master Procedure WAL
files and fixing any resulting inconsistencies in HBase).
To avoid this problem, verify that no unfinished procedure is present
before stopping the HBase Master on the CDP 7.x cluster. Follow these
steps:
Make sure no traffic has run against the HBase cluster recently
(within the last 10 minutes) that could trigger, for example, table
creation or deletion, region assignment, splits, or merges.
Disable the automatic Balancer and Normalizer in HBase, and also
disable Split and Merge procedures, before stopping the CDP 7
cluster. All of these tools can start new HBase Master procedures,
which you want to avoid now. Issue the following commands in HBase
Shell:
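For example, a minimal sketch using the standard HBase shell switches
for these operations:
balance_switch false
normalizer_switch false
splitormerge_switch 'SPLIT', false
splitormerge_switch 'MERGE', false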
Check the list of procedures on the HBase Master Web
UI (In Cloudera Manager, go to the HBase service and open the
HBase Web
UI > Procedures &
Locks tab). Wait until you see
procedures only with final states like 'SUCCESS', 'FAILED' or
'ROLLEDBACK'.
Get the list of procedures from HBase shell using the
'list_procedures' command. Wait until you see procedures only
with final states like 'SUCCESS', 'FAILED' or 'ROLLEDBACK'. The
State appears in the third column of the table returned by the
'list_procedures' command.
If the HBase Master doesn't start after the rollback and
procedure-related exceptions appear in the role logs (such as
"BadProcedureException", decode errors in the
"ProcedureWALFormatReader" class, or "ClassNotFoundException" for
procedure classes), this is most likely caused by CDP 7 procedures
that remain in the procedure WAL files. In this case, open a ticket
with Cloudera customer support, who will help you sideline the
procedure WAL files and fix any potential inconsistencies in HBase.
On the
Home > Status
tab, click the Actions menu and select
Stop.
Click Stop in the confirmation screen. The
Command Details window shows the progress of stopping
services.
When All services successfully stopped appears,
the task is complete and you can close the Command Details
window.
Go to the YARN service and click
Actions > Clean
NodeManager Recovery Directory. The CDH
6 NodeManager will not start up after the downgrade if it finds CDP
7.x data in the recovery directory. The format and content of the
NodeManager's recovery state store was changed between CDH 6.x and CDP
7.x. The recovery directory used by CDP 7.x must be cleaned up as part
of the downgrade to CDH 6.
(Parcels) Downgrade the Software
Follow these steps only if your cluster was upgraded using Cloudera
parcels.
Log in to the Cloudera Manager Admin Console.
Select
Hosts > Parcels.
A
list of parcels displays.
Locate the CDH 6 parcel and click Activate. (This automatically
deactivates the Cloudera Private Cloud Base 7 parcel.) See Activating a Parcel for more information. If the
parcel is not available, use the Download button to download the
parcel.
If you include any additional components in your cluster, such as
Search or Impala, click Activate for those
parcels.
If the Ranger service is deployed in the cluster, disable
the Ranger plugin in each of the following services that is also
deployed:
HDFS: Go to the HDFS service > Configurations and
disable the Enable Ranger Authorization
configuration property.
Hive: Go to the Hive service > Configurations and
delete the Ranger Service configuration
property.
Kafka: Go to the Kafka service > Configurations and
delete the Ranger Service configuration
property.
Impala: Go to the Impala service > Configurations and
delete the Ranger Service configuration
property.
After performing the above steps to disable the plugin, stop the
Ranger service and delete it.
The Sentry service will be added back when you perform the
Restore Cloudera Manager Databases steps later in this rollback
procedure, and it will continue to use the database configuration
saved in Cloudera Manager.
Restore the Cloudera Manager databases from the backup of Cloudera Manager that was taken
before upgrading the cluster to Cloudera Private Cloud Base 7. See the
procedures provided by your database vendor.
Use the backup of CDH that was taken before the upgrade to restore
Cloudera Manager Server files and directories. Substitute the path to
your backup directory for cm7_cdh6
in the following steps:
On the host where the Event Server role is configured to run,
restore the Event Server directory from the CM 7/CDH 6 backup.
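A minimal sketch, assuming the Event Server data directory is in its
default location (/var/lib/cloudera-scm-eventserver) and the backup
directory follows the cm7_cdh6 naming used in these steps:
# Remove the current Event Server data and restore it from the backup.
sudo rm -rf /var/lib/cloudera-scm-eventserver/*
sudo cp -rp /var/lib/cloudera-scm-eventserver-cm7_cdh6/* /var/lib/cloudera-scm-eventserver/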
This
command may return a message similar to: rm: cannot remove
‘/var/run/cloudera-scm-agent/process’: Device or resource
busy. You can ignore this message.
On the host where the Service Monitor is
running, restore the Service Monitor
directory:
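A minimal sketch, assuming the default Service Monitor storage
directory (/var/lib/cloudera-service-monitor) and the cm7_cdh6 backup
naming:
sudo rm -rf /var/lib/cloudera-service-monitor/*
sudo cp -rp /var/lib/cloudera-service-monitor-cm7_cdh6/* /var/lib/cloudera-service-monitor/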
Start the Cloudera Management Service.
Log in to the Cloudera Manager Admin Console.
Select
Clusters > Cloudera
Management Service.
Select
Actions > Start.
The cluster page may indicate that services are in bad health. This
is normal.
Stop the cluster. In the Cloudera Manager Admin Console, click the
Actions menu for the cluster and select
Stop.
Roll Back ZooKeeper
Using the backup of ZooKeeper that you created when backing up your
CDH 6.x cluster, restore the contents of the
dataDir on each ZooKeeper
server. These files are located in a directory specified with the
dataDir property in the ZooKeeper configuration.
The default location is /var/lib/zookeeper. For
example:
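A minimal sketch; the backup location is a placeholder for the backup
you created before the upgrade:
rm -rf /var/lib/zookeeper/*
cp -rp /var/lib/zookeeper-backup/* /var/lib/zookeeper/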
Using the backup of ZooKeeper that you created when backing up your
CDH 6.x cluster, restore the contents of the
Transaction Log Directory on
each ZooKeeper server. These files are located in a directory
specified with the Transaction Log Directory
property in the ZooKeeper configuration. For
example:
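A minimal sketch; both paths are placeholders for the configured
Transaction Log Directory and its backup:
rm -rf /var/lib/zookeeper-txlog/*
cp -rp /var/lib/zookeeper-txlog-backup/* /var/lib/zookeeper-txlog/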
Make sure that the permissions of all the directories and files are
as they were before the upgrade.
Start ZooKeeper using Cloudera Manager.
Roll Back HDFS
You cannot roll back HDFS while high availability is enabled. The
rollback procedure in this topic creates a temporary configuration
without high availability. Regardless of whether high availability is
enabled, follow the steps in this section.
Roll back all of the JournalNodes (required only for clusters where
high availability is enabled for HDFS). Use the JournalNode backup you
created when you backed up HDFS before upgrading to Cloudera Private Cloud Base.
Log in to each JournalNode host and run the following
commands:
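A minimal sketch; the paths are placeholders for your JournalNode
edits directory (dfs.journalnode.edits.dir) and the backup archive you
created:
rm -rf /dfs/jn/ns1/current
cd /dfs/jn/ns1
tar -xf /path/to/journalnode-backup.tar.gz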
Roll back all of the NameNodes. Use the NameNode backup directory you
created before upgrading to Cloudera Private Cloud Base
(/etc/hadoop/conf.rollback.namenode) to perform the following steps on
all NameNode hosts:
(Clusters with TLS enabled only) Edit the
/etc/hadoop/conf.rollback.namenode/ssl-server.xml file on all
NameNode hosts (located in the temporary rollback directory) and update the keystore
passwords with the actual cleartext passwords.
The passwords will have values that look like
this:
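For illustration only (the property names come from ssl-server.xml;
the masked values are placeholders for what the generated file
contains):
<property>
<name>ssl.server.keystore.password</name>
<value>****</value>
</property>
<property>
<name>ssl.server.keystore.keypassword</name>
<value>****</value>
</property>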
(TLS only) Edit the
/etc/hadoop/conf.rollback.namenode/ssl-server.xml file and remove
the hadoop.security.credential.provider.path property.
(TLS only) Edit the
/etc/hadoop/conf.rollback.namenode/ssl-server.xml file and update
the ssl.server.keystore.location
property:
# Original version of the keystore.location property:
<property>
<name>ssl.server.keystore.location</name>
<value>/var/run/cloudera-scm-agent/process/879-hdfs-NAMENODE/cm-auto-host_keystore.jks</value>
</property>
# New version of the keystore.location property:
<property>
<name>ssl.server.keystore.location</name>
<value>/etc/hadoop/conf.rollback.namenode/cm-auto-host_keystore.jks</value>
</property>
Edit the
/etc/hadoop/conf.rollback.namenode/hdfs-site.xml
file on all NameNode hosts and make the following changes:
Update the
dfs.namenode.inode.attributes.provider.class
property. If Sentry was installed prior to the upgrade, change the
value of the property from
org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer
to
org.apache.sentry.hdfs.SentryINodeAttributesProvider.
If Sentry was not installed, remove this property.
Change the path in the dfs.hosts property to
the value shown in the example below. The file name,
dfs_all_hosts.txt, may have been changed by a
user. If so, substitute the correct file
name.
# Original version of the dfs.hosts property:
<property>
<name>dfs.hosts</name>
<value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/dfs_all_hosts.txt</value>
</property>
# New version of the dfs.hosts property:
<property>
<name>dfs.hosts</name>
<value>/etc/hadoop/conf.rollback.namenode/dfs_all_hosts.txt</value>
</property>
Edit the
/etc/hadoop/conf.rollback.namenode/core-site.xml
and change the value of the
net.topology.script.file.name property to
/etc/hadoop/conf.rollback.namenode. For example:
# Original property
<property>
<name>net.topology.script.file.name</name>
<value>/var/run/cloudera-scm-agent/process/63-hdfs-NAMENODE/topology.py</value>
</property>
# New property
<property>
<name>net.topology.script.file.name</name>
<value>/etc/hadoop/conf.rollback.namenode/topology.py</value>
</property>
Edit the
/etc/hadoop/conf.rollback.namenode/topology.py file
and change the value of MAP_FILE to
/etc/hadoop/conf.rollback.namenode. For example:
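A minimal sketch of the edited line; the map file name topology.map is
an assumption, so keep whatever file name your topology.py references:
# In /etc/hadoop/conf.rollback.namenode/topology.py
MAP_FILE = '/etc/hadoop/conf.rollback.namenode/topology.map'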
Restart the NameNodes and JournalNodes using Cloudera Manager:
Go to the HDFS service.
Select the Instances tab, and then select
all Failover Controller, NameNode, and JournalNode roles from the
list.
Click Actions for
Selected > Restart.
Roll back the DataNodes.
Use the DataNode rollback directory
you created before upgrading to Cloudera Private Cloud Base
(/etc/hadoop/conf.rollback.datanode) to perform the following steps
on all DataNode hosts:
(Clusters with TLS enabled only) Edit the
/etc/hadoop/conf.rollback.datanode/ssl-server.xml file on all
DataNode hosts (located in the temporary rollback directory) and update the
keystore passwords (ssl.server.keystore.password and
ssl.server.keystore.keypassword) with the actual passwords.
The passwords have masked values like the NameNode example shown
earlier.
(TLS only) Edit the
/etc/hadoop/conf.rollback.datanode/ssl-server.xml file and update
the ssl.server.keystore.location
property:
# Original version of the keystore.location property:
<property>
<name>ssl.server.keystore.location</name>
<value>/var/run/cloudera-scm-agent/process/879-hdfs-DATANODE/cm-auto-host_keystore.jks</value>
</property>
# New version of the keystore.location property:
<property>
<name>ssl.server.keystore.location</name>
<value>/etc/hadoop/conf.rollback.datanode/cm-auto-host_keystore.jks</value>
</property>
(TLS only) Edit the
/etc/hadoop/conf.rollback.datanode/ssl-server.xml file and remove
the hadoop.security.credential.provider.path property.
Edit the /etc/hadoop/conf.rollback.datanode/hdfs-site.xml file
and remove the dfs.datanode.max.locked.memory property.
Run one of the following commands. Run as root if the DataNodes use
reserved ports. Search the DataNode logs for a "completed rollback"
line; it is not shown in the command-line output of the rollback.
If the DataNode is running with privileged ports (usually 1004 and 1006):
cd /etc/hadoop/conf.rollback.datanode
export HADOOP_SECURE_DN_USER=hdfs
export JSVC_HOME=/opt/cloudera/parcels/<parcel_filename>/lib/bigtop-utils
hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback
If the DataNode is not running on privileged
ports:
cd /etc/hadoop/conf.rollback.datanode
sudo hdfs --config /etc/hadoop/conf.rollback.datanode datanode -rollback
You may see the following error after issuing these
commands:
ERROR datanode.DataNode: Exception in secureMain
java.io.IOException: The path component: '/var/run/hdfs-sockets' in '/var/run/hdfs-sockets/dn' has permissions 0755 uid 39998 and gid 1006.
It is not protected because it is owned by a user who is not root and not the effective user: '0'.
The
error message will also include the following command to
run:
chown root /var/run/hdfs-sockets
After running this command, rerun the DataNode rollback command.
When the rolling back of the DataNodes is complete, terminate the console
session by typing Control-C. Look for output from the
command similar to the following that indicates when the DataNode rollback is
complete:
Rollback of /dataroot/ycloud/dfs/dn/current/BP-<Block Group number> is complete
If High Availability for HDFS is enabled, restart the HDFS service. In the
Cloudera Manager Admin Console, go to the HDFS service and select Actions > Restart.
If high availability is not enabled for HDFS, use the Cloudera Manager Admin
Console to restart all NameNodes and DataNodes.
Go to the HDFS service.
Select the Instances tab.
Select all DataNode and NameNode roles from the list.
Click Actions for Selected > Restart.
If high availability is not enabled for HDFS, roll back the
Secondary NameNode.
(Clusters with TLS enabled only) Edit the
/etc/hadoop/conf.rollback.secondarynamenode/ssl-server.xml
file on all Secondary NameNode hosts (located in the temporary
rollback directory) and update the keystore passwords with the
actual cleartext passwords.
The passwords have masked values like the NameNode example shown
earlier.
When the rolling back of the Secondary
NameNode is complete, terminate the console
session by typing Control-C. Look for
output from the command similar to the following that indicates
when the Secondary NameNode rollback is
complete:
2020-12-21 17:09:36,239 INFO namenode.SecondaryNameNode: Web server init done
Restart the HDFS service. Open the Cloudera Manager Admin Console,
go to the HDFS service page, and select
Actions > Restart.
The Restart Command page displays the
progress of the restart. Wait for the page to display the
Successfully restarted service message
before continuing.
Start the HBase Service
Restart the HBase Service. Open the Cloudera Manager Admin Console, go
to the HBase service page, and select
Actions > Start.
If you have configured any HBase coprocessors, you must revert them to
the versions used before the upgrade.
If the CDP 7.x HBase Master was started after the upgrade and any
ongoing (or stuck) HBase Master procedure was present in the HBase
Master before stopping the CDP 7 cluster, the CDH 6 HBase Master is
expected to fail, with warnings and errors in the role log from
classes such as 'ProcedureWALFormatReader', 'WALProcedureStore', or
'TransitRegionStateProcedure'. These errors mean that the HBase Master
write-ahead log files are incompatible with the CDH 6 HBase version.
The only way to fix this problem is to sideline the log files (by
default, all files under /hbase/MasterProcWALs), then restart the
HBase Master. After the HBase Master has started, use the HBCK command
to find any inconsistencies that need to be fixed manually.
You may encounter other errors when starting HBase (for example,
replication-related problems, region assignment issues, and meta
region assignment problems). In this case, delete the znode in
ZooKeeper and then start HBase again. (This deletes replication peer
information, so you will need to re-configure your replication
schedules.):
In Cloudera Manager, look up the value of the
zookeeper.znode.parent property. The default
value is /hbase.
Connect to the ZooKeeper ensemble by running the following command
from any HBase gateway host:
zookeeper-client -server zookeeper_ensemble
To
find the value to use for
zookeeper_ensemble, open
the /etc/hbase/conf.cloudera.<HBase service
name>/hbase-site.xml file on any HBase
gateway host. Use the value of the
hbase.zookeeper.quorum property.
The ZooKeeper command-line interface opens.
Enter the following
command:
rmr /hbase
After HBase is healthy, restore the states of the Balancer and
Normalizer (enable them if they were enabled before the rollback), and
re-enable the Split and Merge operations you disabled before the
rollback to avoid the Master procedure incompatibility problem. Run
the following commands in HBase Shell:
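A minimal sketch, mirroring the disable commands from earlier; adjust
the values to match the states your cluster used before the rollback:
balance_switch true
normalizer_switch true
splitormerge_switch 'SPLIT', true
splitormerge_switch 'MERGE', true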
When you are rolling back from CDP Private Cloud Base 7.1.8 to CDH 6,
a change in the tableinfo file name format introduced during the 7.1.8
upgrade can prevent HBase from functioning normally.
After the rollback, if the HDFS rollback was not successful and HBase
is unable to read the tableinfo files, use the HBCK2 tool to verify
the list of tableinfo files that need to be fixed.
Follow these steps to run the HBCK2 tool to fix the tableinfo file
format:
Contact Cloudera support to request the latest version of the HBCK2
tool.
Run the HBCK2 tool without the --fix
option:
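A representative invocation, shown as a sketch: the jar path is a
placeholder, and the shortenTableinfo command assumes an HBCK2 build
that includes it (use the command Cloudera support provides):
hbase hbck -j /path/to/hbase-hbck2.jar shortenTableinfo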
Check the output and verify whether all the tableinfo files are fixed.
Restore CDH Databases
Restore the following databases from the CDH 6 backups:
Hive Metastore
Hue
Oozie
Sentry Server
The steps for backing up and restoring databases differ depending on
the database vendor and version you select for your cluster and are
beyond the scope of this document. See your database vendor's
documentation for more information.
Roll Back Kafka
A Cloudera Private Cloud Base 7 cluster that is running Kafka
can be rolled back to the previous CDH 6/CDK versions as long as
the inter.broker.protocol.version and
log.message.format.version properties have not been set to the new
version or removed from the configuration.
To perform the rollback using Cloudera Manager:
Activate the previous CDK parcel. Note that when rolling back Kafka
from CDP Private Cloud Base 7 to CDH 6/CDK, the Kafka cluster will
restart; rolling restart is not supported for this scenario. See
Activating a Parcel.
Remove the following properties from the
Kafka Broker Advanced Configuration Snippet (Safety
Valve) configuration property:
inter.broker.protocol.version
log.message.format.version
Deploy the Client Configuration
On the Cloudera Manager
Home page, click
the Actions menu and select Deploy Client
Configuration.
Click Deploy Client Configuration.
Restart the Cluster
On the Cloudera Manager
Home page, click
the Actions menu and select
Restart.
Click Restart that appears in the next screen to confirm. If
you have enabled high availability for HDFS,
you can choose Rolling
Restart instead to minimize cluster downtime.
The Command Details window shows the progress of stopping
services.
When All services successfully started appears,
the task is complete and you can close the Command Details
window.
Roll Back Cloudera Navigator Encryption Components
If you are rolling back any encryption components (Key Trustee Server,
Key Trustee KMS, HSM KMS, Key HSM, or Navigator Encrypt), first refer to:
To roll back Key Trustee Server, replace the currently used parcel
(for example, the parcel for version 7.1.4) with the parcel for the
version to which you wish to roll back (for example, version 5.14.0).
See Parcels for detailed
instructions on using parcels.
The Key Trustee Server 7.x parcel upgrades the bundled Postgres engine
from version 9.3 to 12.1. The upgrade happens automatically; however,
downgrading to CDH 6 requires manual steps to roll back the database
engine to version 9.3. Because the previously upgraded database is
left unchanged, the database server will fail to start. Follow these
steps to recreate the Postgres 9.3 compatible database:
Open the Cloudera Manager Admin Console and go to the Key
Trustee Server service. If you see that Key Trustee Server has
stale configurations, click the yellow or blue button and follow
the prompts.
Make sure that the Key Trustee Server database roles are stopped.
Then rename the folder containing the Key Trustee Postgres database
data (on both master and slave hosts):
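A minimal sketch; the data directory path is an assumption based on
the default Key Trustee Server layout, so adjust it to your
installation:
mv /var/lib/keytrustee/db /var/lib/keytrustee/db-12_1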
Open the Cloudera Manager Admin Console and go to the Key
Trustee Server service.
Select the Instances tab.
Select the Active Database role type.
Click Actions for
Selected > Set Up the Key Trustee Server
Database.
Click Set Up the Key Trustee Server
Database to confirm.
Cloudera Manager sets up the
Key Trustee Server database.
Start the Active Database role in Cloudera Manager by clicking Actions for Selected > Start.
Click Start to confirm.
On the master KTS node, running as user keytrustee, restore the
keytrustee database from the dump created during the upgrade by
running the following commands:
sudo -su keytrustee
cd
export HOME=/opt/cloudera/parcels/KEYTRUSTEE_SERVER
export JAVA_HOME=... # Set this to your Java Home folder
export PATH="/opt/cloudera/parcels/KEYTRUSTEE_SERVER/bin:/opt/cloudera/parcels/KEYTRUSTEE_SERVER/PG_DB/opt/postgres/9.3/bin:$PATH"
source /opt/cloudera/parcels/KEYTRUSTEE_SERVER/meta/keytrustee_env.sh
dropdb -p 11381 keytrustee
If you see the message could not change directory to "/root":
Permission denied on the console, run the following command to check the
exit code of the last command:
echo $?
You can use the exit code to
debug any issues.
Run the following command to import the database dump that was created
during the upgrade:
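A minimal sketch using the standard Postgres client tools and the
port 11381 shown above; the dump file path is a placeholder for the
dump created during the upgrade:
createdb -p 11381 keytrustee
psql -p 11381 -d keytrustee -f /path/to/keytrustee-db.dump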
Enable synchronous replication by navigating to KTS > Actions > Enable Synchronous Replication in HA Mode.
Start the Passive Database instance: select the Passive
Database, click Actions for
Selected > Start.
In the Cloudera Manager Admin Console, start the active KTS
instance.
In the Cloudera Manager Admin Console, start the passive KTS
instance.
Start the Key Management Server
Restart the Key Management Server. Open the Cloudera Manager Admin
Console, go to the KMS service page, and select
Actions > Start.
Roll Back Key HSM
To roll back Key HSM:
Install the version of Navigator Key HSM to which you wish to
roll back.
Install the Navigator Key HSM package using
yum:
sudo yum downgrade keytrustee-keyhsm
Cloudera
Navigator Key HSM is installed to the
/usr/share/keytrustee-server-keyhsm directory
by default.
Rename Previously-Created Configuration Files
For Key
HSM major version rollbacks, previously-created configuration
files do not authenticate with the HSM and Key Trustee Server,
so you must recreate these files by re-executing the
setup and trust commands.
First, navigate to the Key HSM installation directory and rename
the application.properties,
keystore, and truststore
files:
cd /usr/share/keytrustee-server-keyhsm/
mv application.properties application.properties.bak
mv keystore keystore.bak
mv truststore truststore.bak
Initialize Key HSM
Run the service keyhsm
setup command in conjunction with the name of the
target HSM distribution:
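For example, a sketch in which 'luna' stands in for your HSM
distribution name:
sudo service keyhsm setup luna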
Establish Trust Between Key HSM and the Key Trustee
Server
The Key HSM service must explicitly trust the Key
Trustee Server certificate (presented during TLS handshake). To
establish this trust, run the following
command:
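A minimal sketch; the certificate path is a placeholder for your Key
Trustee Server certificate:
sudo keyhsm trust /path/to/key_trustee_server/cert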
Remove Configuration Files From Previous
Installation
After completing the rollback, remove the
saved configuration files from the previous
installation:
cd /usr/share/keytrustee-server-keyhsm/
rm application.properties.bak
rm keystore.bak
rm truststore.bak
Roll Back Key Trustee KMS Parcels
Enable the desired parcel that you wish to roll back to (for example,
version 6.3.4 of Key Trustee KMS). See Parcels for detailed
instructions on using parcels.
Roll Back HSM KMS Parcels
To roll back the HSM KMS parcels, replace the currently used parcel
(for example, the parcel for version 6.0.0) with the parcel for the
version to which you wish to roll back (for example, version 5.14.0).
See Parcels for detailed
instructions on using parcels.
Roll Back Navigator Encrypt
If rolling back to a release lower than NavEncrypt 6.2:
Print the existing ACL rules and save that output
to a
file:
sudo navencrypt acl --print
vim acls.txt
Delete all existing ACLs. For example, if there are
a total of 7 ACL rules,
run:
sudo navencrypt acl --del --line=1,2,3,4,5,6,7
To fully downgrade Navigator Encrypt, manually downgrade
all of the associated Navigator Encrypt packages (in the order listed):
navencrypt
navencrypt-kernel-module (Only required for
operating systems other than SLES)
cloudera-navencryptfs-kmp (Only required for the
SLES operating system)
libkeytrustee
If rolling back to a release lower than NavEncrypt
6.2:
Reapply the ACL
rules:
sudo navencrypt acl --add --file=acls.txt
Recompute process
signatures:
sudo navencrypt acl --update
Restart the Navigator Encrypt mount
service:
sudo /etc/init.d/navencrypt-mount start
(Optional) Cloudera Manager Rollback Steps
After you complete the rollback steps, your cluster is using Cloudera
Manager 7 to manage your CDH 6 cluster. You can continue to use
Cloudera Manager 7 to manage your CDH 6 cluster, or you can downgrade to
Cloudera Manager 6 by following these steps:
Back up the repository directory. You can create a top-level
backup directory and an environment variable to reference the
directory using the following commands. You can also substitute
another directory path in the backup commands
below:
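A minimal sketch of creating the backup directory and the environment
variable; the date-based directory name is an example:
export CM_BACKUP_DIR="`date +%F`-CM"
echo $CM_BACKUP_DIR
mkdir -p $CM_BACKUP_DIR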
Run the following commands on the Cloudera Manager server
host:
RHEL:
sudo yum remove cloudera-manager-server
sudo yum install cloudera-manager-server
SLES:
sudo zypper remove cloudera-manager-server
sudo zypper install cloudera-manager-server
Ubuntu or Debian:
sudo apt-get purge cloudera-manager-server
sudo apt-get install cloudera-manager-server
Restore Cloudera Manager Databases
Restore the Cloudera Manager databases from the backup of Cloudera
Manager that was taken before upgrading to Cloudera Manager 7. See the
procedures provided by your database vendor.
These databases include the following:
Cloudera Manager Server
Reports Manager
Navigator Audit Server
Navigator Metadata Server
Activity Monitor (Only used for MapReduce 1 monitoring).