Cloudera Manager 7.11.3 Cumulative hotfix 15

Know more about Cloudera Manager 7.11.3 cumulative hotfix 15.

This cumulative hotfix was released on May 8, 2025.

New features and changed behavior for Cloudera Manager 7.11.3 CHF 15 (version: 7.11.3.34-66004006):
OPSAPS-71124: Swapping the full Perl stack for the Perl interpreter package

The Cloudera Manager Agent now depends on the Perl interpreter package rather than the full Perl stack. As a result, the GCC toolchain is not pulled in, so no compiler components are installed with the Cloudera Manager Agent. There is no functional impact to Cloudera Manager Agents.

OPSAPS-70909: Use specified users instead of "hive" for Ozone replication-related commands

Starting from Cloudera Manager 7.11.3 CHF15, Ozone commands executed by Ozone replication policies are run by impersonating the users that you specify in the Run as Username and Run on Peer as Username fields in the Create Ozone replication policy wizard. The bucket access for OBS-to-OBS replication depends on the user with the access key specified in the fs.s3a.access.key property.

When the source and target clusters are secure, and Ranger is enabled for Ozone, specific permissions are required for Ozone replication to replicate Ozone data using Ozone replication policies. For information about the permissions, see Preparing clusters to replicate Ozone data.
OPSAPS-73164: Ozone's upgrade handlers are not properly added to the UpgradeHandlerRegistry
Certain upgrade handlers are no longer added during an upgrade. This behavior change corrects potential problems by correctly skipping the UpgradeHandlers that are not designated to run for a given upgrade path.
OPSAPS-73075: Add Safety Valve for hadoop-metrics2.properties for Ozone roles
Safety Valve for hadoop-metrics2.properties is now available for Ozone roles to enable tuning metrics collection.
The following is the list of known issues and their corresponding workarounds for Cloudera Manager 7.11.3 CHF 15 (version: 7.11.3.34-66004006):
ENGESC-30503, OPSAPS-74868: Cloudera Manager limited support for custom external repository requiring basic authentication
Cloudera Manager currently does not support custom external repositories that require basic authentication (the Cloudera Manager Wizard supports either non-secured HTTP repositories or the Cloudera repository at https://archive.cloudera.com only). If you use a custom external repository with basic authentication, you might encounter errors.

The assumption is that you can access the custom external repository (such as Nexus, JFrog, or others) using LDAP credentials. If an applicative user is used to fetch the external content (as is done in Data Services with the Docker image repository), ensure that this applicative user is located under the user base search path from which real users are retrieved during the LDAP authentication check, so that the external repository can find it and allow it to fetch the files.

Once done, you can use the existing custom URL fields in the Cloudera Manager Wizard and enter the URL for the RPMs, parcels, or other files in the format "https://USERNAME:PASSWORD@server.example.com/XX".

For the password, use only characters from the printable ASCII range (excluding the space character). Replace any special character (that is, not a letter or a number) with its HEX value; for example, replace Aa1234$ with Aa1234%24, because '%24' translates to the $ sign.
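As a sketch, Python's standard urllib.parse.quote can produce the percent-encoded form of a password; the password and URL below are the placeholder values from this note:

```python
from urllib.parse import quote

# Percent-encode the example password from this note.
# safe="" makes quote() encode every non-alphanumeric character.
password = "Aa1234$"
encoded = quote(password, safe="")
print(encoded)  # Aa1234%24

# Embed the encoded password in the repository URL format from this note:
url = f"https://USERNAME:{encoded}@server.example.com/XX"
```

The same encoding can be applied to the username if it also contains special characters.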

OPSAPS-60726: Newly saved parcel URLs are not showing up in the parcels page in the Cloudera Manager HA cluster.
To safely manage parcels in a Cloudera Manager HA environment, follow these steps:
  1. Shut down the Passive Cloudera Manager Server.
  2. Add and manage the parcel as usual, as described in Install Parcels.
  3. Restart the Passive Cloudera Manager server after parcel operations are complete.
OPSAPS-73211: Cloudera Manager 7.11.3 does not clean up the Python path, preventing Hue from starting

When you upgrade from Cloudera Manager 7.7.1 or lower versions to Cloudera Manager 7.11.3 or higher versions with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue requires Python 2.7.

This issue occurs because Cloudera Manager does not clean up the Python path at any point, so when Hue tries to start, the Python path points to Python 3.8, which Hue does not support on CDP Private Cloud Base 7.1.7.x.

To resolve this issue temporarily, you must perform the following steps:

  1. Locate the hue.sh file in /opt/cloudera/cm-agent/service/hue/.
  2. Add the following line after export HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf:
    export PYTHONPATH=/opt/cloudera/parcels/CDH/lib/hue/build/env/lib64/python2.7/site-packages
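The two steps above can be sketched as a small Python helper; the paths are the ones from this workaround, but the helper itself is illustrative and not Cloudera tooling:

```python
# A minimal sketch of step 2 above: insert the PYTHONPATH export into
# hue.sh right after the HADOOP_CONF_DIR export line.
ANCHOR = "export HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf"
EXPORT = ("export PYTHONPATH=/opt/cloudera/parcels/CDH/lib/hue"
          "/build/env/lib64/python2.7/site-packages")

def patch_hue_sh(text: str) -> str:
    """Return the script text with the PYTHONPATH export added once."""
    if EXPORT in text:
        return text  # already patched; keep the edit idempotent
    lines = text.splitlines()
    idx = lines.index(ANCHOR)  # raises ValueError if the anchor is absent
    lines.insert(idx + 1, EXPORT)
    return "\n".join(lines) + "\n"

# Usage on the affected host (run with sufficient privileges):
# path = "/opt/cloudera/cm-agent/service/hue/hue.sh"
# with open(path) as f:
#     patched = patch_hue_sh(f.read())
# with open(path, "w") as f:
#     f.write(patched)
```

Making the edit idempotent keeps the script safe to re-run, for example after an agent redeployment rewrites hue.sh.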
OPSAPS-73655: Cloud replication fails after the delegation token is issued
HDFS and Hive external table replication policies from an on-premises cluster to cloud fail when the following conditions are true:
  1. You choose the Advanced Options > Delete Policy > Delete Permanently option during the replication policy creation process.
  2. Incremental replication is in progress, that is, the source paths of the replication are snapshottable directories and the bootstrap replication run is complete.
None
OPSAPS-72984: Alerts due to a change in the hostname fetching functionality between JDK 8 and JDK 11

Upgrading from JDK 8 to JDK 11 creates the following alert in CMS:

Bad : CMSERVER:pit666.slayer.mayank: Reaching Cloudera Manager Server failed

This happens due to a functionality change in JDK 11 on hostname fetching.
[root@pit666.slayer ~]# /usr/lib/jvm/java-1.8.0/bin/java GetHostName
Hostname: pit666.slayer.mayank

[root@pit666.slayer ~]# /usr/lib/jvm/java-11/bin/java GetHostName
Hostname: pit666.slayer

Notice that the hostname is set to the short name instead of the FQDN.

The current workaround is to set the hostname to the FQDN.
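As an illustration only (this helper is not part of Cloudera Manager), a quick check of whether a reported hostname matches the expected FQDN, using the example names from the alert above:

```python
EXPECTED_FQDN = "pit666.slayer.mayank"  # example FQDN from the alert above

def reports_fqdn(reported: str, expected: str = EXPECTED_FQDN) -> bool:
    """True when a hostname lookup returned the full name, not a prefix."""
    return reported == expected

print(reports_fqdn("pit666.slayer.mayank"))  # True: JDK 8 behaviour
print(reports_fqdn("pit666.slayer"))         # False: JDK 11 short name

# On a live host, socket.gethostname() should match socket.getfqdn() once
# the workaround (setting the hostname to the FQDN) has been applied.
```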

OPSAPS-72784: Upgrades from CDH6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions fail with a health check timeout exception
If you are using Cloudera Manager 7.11.3 cumulative hotfix 14 or higher versions and upgrading from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions, the upgrade fails with a CMUpgradeHealthException timeout exception. This is because upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or to any of its cumulative hotfix versions are not supported.
None.
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services later to the existing cluster, you must manually add the respective service user to the livy_admin_users configuration in the Livy configuration page.

OPSAPS-69847: Replication policies might fail if source and target use different Kerberos encryption types

Replication policies might fail if the source and target Cloudera Manager instances use different encryption types in Kerberos because of different Java versions. For example, the Java 11 and higher versions might use the aes256-cts encryption type, and the versions lower than Java 11 might use the rc4-hmac encryption type.

Ensure that both instances use the same Java version. If that is not possible, ensure that they use the same encryption type for Kerberos. To check the encryption type in Cloudera Manager, search for krb_enc_types on the Cloudera Manager > Administration > Settings page.
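As a sketch, you can compare the two krb_enc_types settings for a common type; the helper and sample values below are assumptions for illustration, assuming the setting is a space-separated list of encryption type names:

```python
def common_enc_types(source: str, target: str) -> set:
    """Return the Kerberos encryption types two krb_enc_types values share."""
    return set(source.split()) & set(target.split())

# Hypothetical values for the scenario described above:
src = "aes256-cts aes128-cts"  # e.g. an instance on Java 11 or higher
dst = "rc4-hmac"               # e.g. an instance on an older Java version
shared = common_enc_types(src, dst)
print(shared or "no common encryption type: replication might fail")
```

An empty intersection indicates the mismatch described above; adding a shared type to both settings resolves it.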

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, comment out or disable the require_secure_transport line in the configuration file located at /etc/my.cnf.

OPSAPS-70771: In-progress replication policy runs must not allow you to download performance reports
During a replication policy run, the A server error has occurred. See Cloudera Manager server log for details error message appears on the UI and the Cloudera Manager log shows "java.lang.IllegalStateException: Command has no result data." when you click:
  • Performance Reports > Performance Summary or Performance Reports > Performance Full on the Replication Policies page.
  • Download CSV on the Replication History page to download any report.
This occurs because the Replication Manager UI incorrectly shows the performance report links as enabled and clickable. You can download the reports only after the replication job run is complete.
None
OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
You cannot create an Atlas replication policy between clusters if one or both the clusters use Dell EMC Isilon storage.
None
DMX-3973: Ozone replication policy with linked bucket as destination fails intermittently
When you create an Ozone replication policy using a linked/non-linked source cluster bucket and a linked target bucket, the replication policy fails during the "Trigger a OZONE replication job on one of the available OZONE roles" step.
None
OPSAPS-68143: Ozone replication policy fails for empty source OBS bucket
An Ozone incremental replication policy for an OBS bucket fails during the “Run File Listing on Peer cluster” step when the source bucket is empty.
None
OPSAPS-74398: Ozone and HDFS replication policies might fail when you use different destination proxy user and source proxy user
HDFS on-premises to on-premises replication fails when the following conditions are true:
  • You configure different Run As Username and Run on Peer as Username during the replication policy creation process.
  • The user configured in Run As Username does not have the permission to access the source path on the source HDFS.
Ozone replication fails when the following conditions are true:
  • FSO-to-FSO replication or an OBS-to-OBS replication with Incremental with fallback to full file listing or Incremental only replication type.
  • You configured different Run As Username and Run on Peer as Username during the replication policy creation process.
  • The user configured in Run As Username does not have the permission to access the source bucket on the source Ozone.
Grant the user configured in Run As Username the same permissions on the source cluster as the user configured in Run on Peer as Username.
The following is the list of fixed issues shipped for Cloudera Manager 7.11.3 CHF 15 (version: 7.11.3.34-66004006):
OPSAPS-73164: Ozone's upgrade handlers are not properly added to the UpgradeHandlerRegistry
Ozone upgrade handlers were not properly applied in certain CDP upgrade scenarios. This fix corrects potential problems by correctly skipping the UpgradeHandlers that are not designated to run for a given upgrade path.
OPSAPS-73011: Wrong parameter in the /etc/default/cloudera-scm-server file

When Cloudera Manager is installed in High Availability mode (two or more nodes), the CMF_SERVER_ARGS parameter in the /etc/default/cloudera-scm-server file was missing the word "export" before it (the file contained only CMF_SERVER_ARGS= instead of export CMF_SERVER_ARGS=), so the parameter could not be utilized correctly.

This issue is fixed now.

OPSAPS-65377: Cloudera Manager - Host Inspector not finding Psycopg2 on Ubuntu 20 or Redhat 8.x when Psycopg2 version 2.9.3 is installed.

Host Inspector failed with a Psycopg2 version error while upgrading to Cloudera Manager 7.13.1.x versions. When you ran the Host Inspector, you got a Not finding Psycopg2 error even though Psycopg2 was installed on all hosts. This issue is fixed now.

OPSAPS-69383: HTTP header used the wrong Strict_Transport_Security header
Previously, the HADOOP_HTTP_HEADER_STRICT_TRANSPORT_SECURITY parameter used the wrong header syntax: hadoop.http.header.Strict_Transport_Security. This issue is now resolved, and the HTTP header name is corrected to Strict-Transport-Security.
OPSAPS-70983: Hive replication command fails for Sentry to Ranger replication
Hive replication command for Sentry to Ranger replication works as expected now. The Sentry to Ranger migration during the Hive replication policy run from CDH 6.3.x or higher to Cloudera on cloud 7.3.0.1 or higher is successful.
OPSAPS-71046: The jstack logs collected on Cloudera Manager 7.11.3 are not in the right format
On viewing the jstack logs in the user cluster, the jstack logs for ozone and other services on Cloudera Manager 7.11.3 and CDP Private Cloud Base 7.1.9 are not in the right format. This issue is fixed now.
OPSAPS-72447, CDPD-76705: Ozone incremental replication fails to copy renamed directory

Ozone incremental replication using Ozone replication policies succeeded but could fail to sync nested renames for FSO buckets.

When a directory and its contents were renamed between replication runs, the outer-level rename was synced, but the contents under the previous name were not.

This issue is fixed now.

OPSAPS-72710: Marking the snapshots created by incremental replication policies differently
In the Ozone bucket browser, the snapshots created by an Ozone replication are marked. When the snapshots are deleted, a confirmation modal window appears before the deletion. The restore bucket modal window now displays information about how the restore operation is implemented in Ozone and how this operation affects Ozone replications.
OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command. In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
  • The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
  • The target Cloudera Manager is version 7.11.3 through 7.11.3 CHF10 or 7.13.0.0 or later where the feature flag API_OZONE_REPLICATION_USING_PROXY_USER is disabled.

This issue is fixed now.

OPSAPS-72978: The getUsersFromRanger API parameter truncates the user list after 200 items
The v58/clusters/[***CLUSTER***]/services/[***SERVICE***]/commands/getUsersFromRanger Cloudera Manager API endpoint no longer truncates the list of returned users at 200 items.
OPSAPS-73481: Knox readiness check gateway-status endpoint should return the list of topologies it is waiting for
The Knox readiness check gateway-status endpoint now returns the list of topologies it is waiting for. Previously, you had to check gateway.log to find out which topologies Knox was waiting for to be deployed.
OPSAPS-73038: False-positive port conflict error message displayed in Cloudera Manager
Cloudera Manager might display a false-positive error message Port conflict detected: 8443 (Gateway Health HTTP Port) is also used by: Knox Gateway during cluster installations. The warning does not cause actual installation failures.
None.
Fixed Common Vulnerabilities and Exposures
For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.11.3 cumulative hotfix 15, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.11.3 cumulative hotfixes.

The repositories for Cloudera Manager 7.11.3-CHF 15 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF 15
RHEL 9 Compatible
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat9/yum
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat9/yum/cloudera-manager.repo
RHEL 8 Compatible
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat8/yum
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat8/yum/cloudera-manager.repo
RHEL 7 Compatible
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat7/yum
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/redhat7/yum/cloudera-manager.repo
SLES 15
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/sles15/yum
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/sles15/yum/cloudera-manager.repo
SLES 12
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/sles12/yum
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/sles12/yum/cloudera-manager.repo
Ubuntu 22
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/ubuntu2204/apt
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/ubuntu2204/apt/cloudera-manager.list
Ubuntu 20
  Repository:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/ubuntu2004/apt
  Repository File:
  https://username:password@archive.cloudera.com/p/cm7/7.11.3.34/ubuntu2004/apt/cloudera-manager.list