Cloudera Manager 7.11.3 Cumulative hotfix 1

Learn more about Cloudera Manager 7.11.3 cumulative hotfix 1.

This cumulative hotfix was released on November 2, 2023.

New features and changed behavior for Cloudera Manager 7.11.3 CHF1:
Cloudera Navigator role instances under the Cloudera Management Service are no longer available while using Cloudera Runtime 7.1.9 CHF1 version.
You must first migrate Cloudera Navigator to Atlas before you upgrade from CDH 6.x + Cloudera Manager 6.x / 7.x to CDP 7.1.9 CHF1 + Cloudera Manager 7.11.3 Latest cumulative hotfix. For more information, see Migrating from Cloudera Navigator to Atlas using Cloudera Manager 6 and Migrating from Cloudera Navigator to Atlas using Cloudera Manager 7.
OpenJDK 17 (TCK certified) support for the Cloudera Manager 7.11.3 CHF1 and operating systems

Cloudera Manager 7.11.3 CHF1 now supports OpenJDK 17 (TCK certified) on RHEL 7, RHEL 8, RHEL 9, Ubuntu 20, and SLES 15.

You must upgrade to Cloudera Manager 7.11.3 CHF1 or higher before upgrading to OpenJDK 17 (TCK certified).

Replicate Hive external tables in Dell EMC Isilon storage clusters using Hive external table replication policies
You can use Hive external table replication policies in CDP Private Cloud Base Replication Manager to replicate Hive external tables between Dell EMC Isilon storage clusters where the 7.1.9 clusters use Cloudera Manager 7.11.3 CHF1 or higher versions.
The following is the list of known issues and their corresponding workarounds shipped for Cloudera Manager 7.11.3 CHF1:
OPSAPS-69406: Cannot edit existing HDFS and HBase snapshot policy configuration
The Edit Configuration modal window does not appear when you click Actions > Edit Configuration on the Cloudera Manager > Replication > Snapshot Policies page for existing HDFS or HBase snapshot policies.
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services later to the existing cluster, you must manually add the respective service user to the livy_admin_users configuration in the Livy configuration page.
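As a sketch, after adding both services the Livy admin-users setting would end up containing the respective service users. The value format and user names below are assumptions for illustration; use the actual service user names configured on your cluster:

```
# Hypothetical livy_admin_users value after manually adding the Zeppelin
# and Knox service users; actual names depend on your cluster configuration.
livy_admin_users = livy,zeppelin,knox
```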

OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations

If the CDP cluster has LDAP group to OS group mapping enabled, then applications running in Spark or Yarn would fail to authenticate to the LDAP server when trying to use the LDAP bind account during the LDAP group search.

This is because the LDAP bind password was not passed to the /etc/hadoop/conf/core-site.xml file. This was intended behavior to prevent leaking the LDAP bind password in a clear text field.

Set the LDAP Bind password through the HDFS client configuration safety valve.
  1. In the Cloudera Manager UI, navigate to the HDFS service by clicking the HDFS service under the cluster.
  2. Click the Configuration tab. Search for the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml configuration parameter.

  3. Add an entry with the following values:
    • Name =
    • Value = (Enter the LDAP bind password here)
    • Description = Password for LDAP bind account
  4. Click Save Changes to save the safety valve entry.

  5. Follow the instructions in Manually Redeploying Client Configuration Files to manually deploy the client configuration files to the cluster.
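Once deployed, the safety-valve entry is emitted into the client configuration as a standard Hadoop property block. A minimal sketch of the resulting XML follows; the property name shown is a placeholder only, since the exact Name value is the one you entered in step 3:

```xml
<!-- Illustrative sketch of the deployed safety-valve entry.
     "ldap.bind.password.property" is a placeholder; use the exact
     Name value from step 3 of the workaround above. -->
<property>
  <name>ldap.bind.password.property</name>
  <value>your-ldap-bind-password</value>
  <description>Password for LDAP bind account</description>
</property>
```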

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, you can either comment out or disable the line require_secure_transport in the configuration file located at /etc/my.cnf.
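The temporary workaround amounts to commenting out the offending line in /etc/my.cnf and restarting MariaDB. A sketch of the relevant fragment (section placement may differ on your installation):

```ini
# /etc/my.cnf (MariaDB 10.6) - illustrative fragment only
[mysqld]
# Commented out as a temporary workaround to allow non-TLS connections:
# require_secure_transport=ON
```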

OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager

Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 for Cloudera Manager RPM/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.

After installing Azul Open JDK 8 or 11, you must run the following commands on all the hosts in the cluster:
Azul Open JDK 8
For RPMs:
# sudo ln -s /usr/lib/jvm/java-8-zulu-openjdk-jdk /usr/lib/jvm/java-8-openjdk
For DEBs (Ubuntu or Debian):
# sudo ln -s /usr/lib/jvm/zulu-8-amd64 /usr/lib/jvm/java-8-openjdk
Azul Open JDK 11
For DEBs only:
# sudo ln -s /usr/lib/jvm/zulu-11-amd64 /usr/lib/jvm/java-11
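The symlink mechanics can be sanity-checked in a scratch directory before touching /usr/lib/jvm. This sketch uses throwaway paths under /tmp; the real paths are the ones in the commands above:

```shell
# Create a stand-in for the Zulu installation directory and link it the
# same way the commands above link the real JDK paths, then resolve it.
mkdir -p /tmp/jvm-demo/zulu-8-amd64
ln -sfn /tmp/jvm-demo/zulu-8-amd64 /tmp/jvm-demo/java-8-openjdk
readlink -f /tmp/jvm-demo/java-8-openjdk
```

On a real host you would run the matching `ln -s` command as root, then confirm with `readlink -f /usr/lib/jvm/java-8-openjdk` that the default JDK path resolves to the Zulu installation.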
OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in Cloudera Manager chart builder or queried using the Cloudera Manager API.
Contact Cloudera support for a workaround.
OPSAPS-68559: On-premises to on-premises Hive external replication won't work with a cloud target
You cannot replicate Hive external tables using Hive external table replication policies from an on-premises cluster to another on-premises cluster with an external account to replicate the Hive data only to the cloud.
OPSAPS-68658: Source ozone service id used as target
Ozone replication policies fail when the Ozone service ID is different on the source and destination clusters because Ozone replication uses the destination Ozone service ID during the path normalization process.
OPSAPS-68698: Replication command type is incorrectly reported for Ozone incremental replications
When you create an Ozone replication policy using the “Incremental only” or “Incremental with fallback to full file listing” listing types, the Ozone replication command type is sometimes incorrectly reported for different types of runs.
OPSAPS-42908: "User:hdfs not allowed to do DECRYPT_EEK" error appears for Hive external table replication policies
When you run Hive external table replication policies on clusters using Ranger KMS, the “User:hdfs not allowed to do 'DECRYPT_EEK'” error appears when you do not use the hdfs username.
Edit the Hive external table replication policy, and configure the Advanced > Directory for metadata file field to a new directory that is not encrypted. The replication policy uses this directory to store the transient data during Hive replication.
OPSAPS-69897: NPE in Ozone replication from CM 7.7.1 to CM 7.11.3
When you use source Cloudera Manager 7.7.1 and target Cloudera Manager 7.11.3 for Ozone replication policies, the policies fail with Failure during PreOzoneCopyListingCheck execution: null error. This is because the target Cloudera Manager 7.11.3 does not retrieve the required source bucket information for validation from the source Cloudera Manager 7.7.1 during the PreCopyListingCheck command phase. You come across this error when you use source Cloudera Manager versions lower than 7.10.1 and target Cloudera Manager versions higher than or equal to 7.10.1 in an Ozone replication policy.
Upgrade the source Cloudera Manager to 7.11.3 or higher version.
CDPD-62464: Java process called by tool fails on JDK-8 version
While running the script on Oracle JDK 8, an error message is thrown and the script returns code 0 on an unsuccessful run.
You must install a JDK 11 version on the host. Make sure not to put it into the default path or JAVA_HOME. In a shell, set JAVA_HOME to this location and run the script.
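The workaround boils down to overriding JAVA_HOME for a single invocation so the system-wide default JDK is untouched. A sketch, with placeholder paths (the JDK 11 install directory and the script name are not specified in this document):

```shell
# Placeholder location: substitute where you installed JDK 11
# (deliberately kept out of the default PATH and system JAVA_HOME).
JDK11_HOME=/opt/jdk-11

# Override JAVA_HOME only for this one invocation; sh -c stands in
# here for the actual tool script, whose name this document elides.
JAVA_HOME="$JDK11_HOME" sh -c 'echo "JAVA_HOME for this run: $JAVA_HOME"'
```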
CDPD-62834: Status of a deleted table is seen as ACTIVE in Atlas after the completion of the navigator2atlas migration process
The status of the deleted table is displayed as ACTIVE.
CDPD-62837: During the navigator2atlas process, the hive_storagedesc is incomplete in Atlas
For the hive_storagedesc entity, some of the attributes are not populated.
The following is the list of fixed issues shipped for Cloudera Manager 7.11.3 CHF1:
OPSAPS-68664: Added support for JDK 17 in HDFS.
This issue is resolved.
OPSAPS-68550: Ozone Canary failing with unknown option --skipTrash.
This issue is resolved.
OPSAPS-66023: Error message about an unsupported ciphersuite while upgrading or installing cluster with the latest FIPS compliance

When upgrading or installing a FIPS enabled cluster, Cloudera Manager is unable to download the new CDP parcel from the Cloudera parcel archive.

Cloudera Manager displays the following error message:

HTTP ERROR 400 Unsupported ciphersuite TLS_EDH_RSA_WITH_3DES_EDE_CBC_SHA

This issue is now fixed by correcting the ciphersuite selection.
OPSAPS-65504: Upgraded Apache Ivy version

The Apache Ivy version is upgraded from 2.x.x to 2.5.1 to fix CVE issues.

OPSAPS-68500: The cloudera-manager-installer.bin fails to reach Ubuntu 20 repository on the Archive URL due to redirections

Agent installation with Cloudera Manager on the Ubuntu 20 platform does not function when the self-installer method (using the installer.bin file) is employed to install Cloudera Manager. The Cloudera Manager Agent installation step fails with an error message saying "The repository ' focal-cm7 InRelease' is not signed."

This issue is fixed now.

OPSAPS-68422: Incorrect HBase shutdown command can lead to inconsistencies

Cloudera Manager uses an incomplete stop command when you stop the HBase service or the corresponding roles on a 7.1.8 or higher private cloud cluster. Because of this, Cloudera Manager cannot gracefully stop the processes and kills them after a set timeout, which could lead to metadata corruption.

This issue is fixed now.

OPSAPS-68506: Knox CSD changes for readiness check

A readiness endpoint was added to determine whether Knox is ready to receive traffic. Cloudera Manager checks the state of Knox after startup to reduce downtime during rolling restarts.

OPSAPS-68424 Impala: CM Agent unable to extract logs to TP export directory

Impala queries were not displaying in the Cloudera Observability and Workload XM web User Interfaces. This was due to an internal error that was stopping Cloudera Manager from pushing the Impala profile to the Telemetry Publisher logs directory.

This issue is now fixed and the Telemetry Publisher’s log extraction has been re-enabled.
OPSAPS-68697 Error while generating email template resulting in an inability to trigger mail notification

Cloudera Observability and Workload XM were unable to trigger an email notification when an Impala query matched the Auto Action’s alert threshold value.

This problem occurred when both the following conditions were met:
  • The Auto Action is triggered for an Impala Scope.
  • The length of the Impala query on which the action event is triggered is less than 36 characters.

This issue is now fixed.

OPSAPS-68798 Auto Actions not using proxy while connecting to DBUS

The Cloudera Observability and Workload XM Auto Actions feature was not recognizing the proxy server credentials, even when they were correct and the proxy server was enabled in Telemetry Publisher.

This issue is now fixed.
Known issue:
OPSAPS-68629: HDFS HttpFS Gateway is not able to start with a custom krb5.conf location set in Cloudera Manager.
On a cluster with a custom krb5.conf file location configured in Cloudera Manager, the HDFS HttpFS role is not able to start because the custom Kerberos configuration file setting is not properly propagated to the service, and it therefore fails with a Kerberos-related exception (stack trace trimmed):

Exception in thread "main": Unable to initialize WebAppContext
        at org.apache.hadoop.http.HttpServer2.start
        at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start
        at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm
        at org.apache.hadoop.lib.service.hadoop.FileSystemAccessService.init
        at org.apache.hadoop.lib.server.BaseService.init
        at org.apache.hadoop.lib.server.Server.initServices
        at org.apache.hadoop.lib.server.Server.init
        at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init
        at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized
        ... (Jetty context and lifecycle frames omitted)
Caused by: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm
        ...
  1. Log in to Cloudera Manager.
  2. Select the HDFS service.
  3. Select the Configuration tab.
  4. Search for the HttpFS Environment Advanced Configuration Snippet (Safety Valve) parameter.
  5. Add to or extend the HADOOP_OPTS environment variable with the following value: -Djava.security.krb5.conf=<the custom krb5.conf location>
  6. Click Save Changes.
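The effect of the safety-valve step above is to pass the custom Kerberos configuration location to the HttpFS JVM as a system property via its environment. A minimal shell sketch; the krb5.conf path below is a placeholder for your actual custom location:

```shell
# Illustrative only: /etc/krb5-custom/krb5.conf is a placeholder for the
# custom krb5.conf location configured in Cloudera Manager.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/etc/krb5-custom/krb5.conf"
echo "$HADOOP_OPTS"
```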

The repositories for Cloudera Manager 7.11.3-CHF1 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF1
Repository Type Repository Location
RHEL 9 Compatible Repository:
Repository File:
RHEL 8 Compatible Repository:
Repository File:
RHEL 7 Compatible Repository:
Repository File:
SLES 15 Repository:
Repository File:
SLES 12 Repository:
Repository File:
Ubuntu 20 Repository:
Repository File: