Cloudera Manager 7.11.3 Cumulative hotfix 17
Learn more about the Cloudera Manager 7.11.3 cumulative hotfix 17.
This cumulative hotfix was released on August 13, 2025.
- Rocky Linux 9.4 support for Cloudera Manager
-
Starting with the Cloudera Manager 7.11.3 CHF17 release, Cloudera Manager supports Rocky Linux. This update ensures compatibility with Rocky Linux version 9.4, offering greater flexibility and platform options.
In the Cloudera Manager 7.11.3 CHF17 release, Rocky Linux 9.4 supports only Python 3.9.
- Upgraded embedded PostgreSQL to 14.16
- The embedded PostgreSQL version within Key Trustee Server is upgraded from 14.2 to 14.16.
- OPSAPS-74756 and OPSAPS-74460: Previously, Spark extractions did not fetch YARN application metadata. As a result, Spark jobs could not fetch accurate queue information and did not produce an auxiliary-files/YARN/appInfo.json file in the extraction output.
- With the new functionality, Spark extractions now include YARN application metadata. This provides an accurate queue mapping for Spark jobs and creates an auxiliary-files/YARN/appInfo.json file in the extraction output.
- OPSAPS-74300: Allow override of the Cloudera Manager supplied PYTHONPATH in Livy CSDs
- Livy uses the Python executable and PYTHONPATH set by the Cloudera Manager Agent for PySpark sessions. If required, you can now override these default settings through environment variables.
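For example, such an override could be supplied through the Livy service environment safety valve. The variable names below (PYSPARK_PYTHON, PYSPARK_DRIVER_PYTHON, PYTHONPATH) are standard Spark and Python environment variables; the paths are placeholders, and the exact set of variables your Livy CSD honors should be confirmed for your release.
```
# Hypothetical override in the Livy service environment safety valve:
PYSPARK_PYTHON=/opt/python3.9/bin/python3
PYSPARK_DRIVER_PYTHON=/opt/python3.9/bin/python3
PYTHONPATH=/opt/python3.9/lib/python3.9/site-packages
```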
- ENGESC-30503, OPSAPS-74868: Cloudera Manager has limited support for custom external repositories that require basic authentication
- Cloudera Manager currently does not support custom external repositories that require basic authentication (the Cloudera Manager wizard supports either HTTP (non-secured) repositories or the Cloudera repository at https://archive.cloudera.com only). Customers who want to use a custom external repository with basic authentication might encounter errors.
- OPSAPS-60726: Newly saved parcel URLs do not show up on the Parcels page in a Cloudera Manager HA cluster.
- To safely manage parcels in a Cloudera Manager HA environment, follow these steps (a command sketch follows the list):
- Shut down the Passive Cloudera Manager Server.
- Add and manage the parcel as usual, as described in Install Parcels.
- Restart the Passive Cloudera Manager Server after parcel operations are complete.
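A minimal command sketch for the steps above, assuming systemd-managed Cloudera Manager Servers (the standard service name is cloudera-scm-server):
```
# On the Passive Cloudera Manager Server host:
sudo systemctl stop cloudera-scm-server

# Add and distribute the parcel from the Active server as usual.

# After parcel operations are complete, on the Passive host:
sudo systemctl start cloudera-scm-server
```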
- OPSAPS-73211: Cloudera Manager 7.11.3 does not clean up the Python path, which prevents Hue from starting
-
When you upgrade from Cloudera Manager 7.7.1 or lower versions to Cloudera Manager 7.11.3 or higher versions with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue needs Python 2.7.
This happens because Cloudera Manager never cleans up the Python path, so when Hue tries to start, the Python path points to 3.8, which Hue does not support on CDP Private Cloud Base 7.1.7.x.
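A quick, hypothetical way to check what the inherited environment resolves to on the Hue host before startup (paths and output vary by installation):
```
# Inspect the Python environment Hue would inherit:
echo "$PYTHONPATH"
/usr/bin/env python --version   # Hue on CDP Private Cloud Base 7.1.7.x requires Python 2.7
```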
- OPSAPS-72984: Alerts due to a change in hostname fetching functionality between JDK 8 and JDK 11
-
Upgrading Java from JDK 8 to JDK 11 creates the following alert in CMS:
Bad : CMSERVER:pit666.slayer.mayank: Reaching Cloudera Manager Server failed
This happens due to a functionality change in hostname fetching in JDK 11:
[root@pit666.slayer ~]# /usr/lib/jvm/java-1.8.0/bin/java GetHostName
Hostname: pit666.slayer.mayank
[root@pit666.slayer ~]# /usr/lib/jvm/java-11/bin/java GetHostName
Hostname: pit666.slayer
Notice that the hostname is set to the short name instead of the FQDN.
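The GetHostName probe shown above is not shipped with Cloudera Manager; a minimal sketch of such a class, assuming it only prints the JDK-resolved hostname, could look like this:
```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal probe: prints the hostname as resolved by the running JDK.
// On JDK 11 this may be the short name where JDK 8 returned the FQDN.
public class GetHostName {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("Hostname: " + local.getHostName());
        // getCanonicalHostName() performs a reverse DNS lookup and
        // typically returns the FQDN regardless of the JDK version.
        System.out.println("Canonical: " + local.getCanonicalHostName());
    }
}
```
Compile it with javac GetHostName.java and run it under each JDK to compare the output.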
- OPSAPS-72784: Upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions fail with a health check timeout exception
- If you are using Cloudera Manager 7.11.3 cumulative hotfix 14 or higher versions and upgrading from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions, the upgrade fails with a CMUpgradeHealthException timeout exception. This is because upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or to any of its cumulative hotfix versions are not supported.
- OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.
-
Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add the Zeppelin or Knox service to the existing cluster later and do not manually update the service user, the User not allowed to impersonate error is displayed.
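A hedged illustration of the manual update: in the Livy service configuration in Cloudera Manager, extend livy_admin_users to include the service users that were added later. The user list below is illustrative only:
```
livy_admin_users=livy,zeppelin,knox
```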
- OPSAPS-69847: Replication policies might fail if source and target use different Kerberos encryption types
-
Replication policies might fail if the source and target Cloudera Manager instances use different Kerberos encryption types because they run different Java versions. For example, Java 11 and higher versions might use the aes256-cts encryption type, while versions lower than Java 11 might use the rc4-hmac encryption type.
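One hedged way to avoid the mismatch is to align the encryption types in /etc/krb5.conf on both clusters. The option names below are standard krb5.conf settings; the value list is illustrative and must match your KDC policy:
```
[libdefaults]
  default_tkt_enctypes = aes256-cts rc4-hmac
  default_tgs_enctypes = aes256-cts rc4-hmac
  permitted_enctypes = aes256-cts rc4-hmac
```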
- OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode
-
MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf); this property is absent in MariaDB 10.4. The setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not use the same connection.
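For reference, the MariaDB 10.6 default that triggers the issue looks like this in /etc/my.cnf; whether to keep TLS-only transport enabled or relax it depends on your security requirements:
```
[mysqld]
require_secure_transport=ON
```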
- OPSAPS-70771: Running replication policy runs must not allow you to download the performance reports
- During a replication policy run, the A server error has occurred. See Cloudera Manager server log for details error message appears on the UI and the Cloudera Manager log shows "java.lang.IllegalStateException: Command has no result data." when you click:
- the performance report download option on the Replication Policies page, or
- Download CSV on the Replication History page to download any report.
- OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
- You cannot create an Atlas replication policy between clusters if one or both of the clusters use Dell EMC Isilon storage.
- DMX-3973: Ozone replication policy with linked bucket as destination fails intermittently
- When you create an Ozone replication policy using a linked/non-linked source cluster bucket and a linked target bucket, the replication policy fails during the "Trigger a OZONE replication job on one of the available OZONE roles" step.
- OPSAPS-68143: Ozone replication policy fails for empty source OBS bucket
- An Ozone incremental replication policy for an OBS bucket fails during the “Run File Listing on Peer cluster” step when the source bucket is empty.
- OPSAPS-74398: Ozone and HDFS replication policies might fail when the destination proxy user and the source proxy user differ
- HDFS on-premises to on-premises replication fails when the following conditions are true:
- You configure different Run As Username and Run on Peer as Username during the replication policy creation process.
- The user configured in Run As Username does not have the permission to access the source path on the source HDFS.
- OPSAPS-73038: False-positive port conflict error message displayed in Cloudera Manager
- This issue is now fixed. A health port has been added as a configuration to the Knox service. The health topology port can be set through the topology port mapping, and once the new configuration is set, the checkDeployment script uses the new health port.
- OPSAPS-73711 and OPSAPS-73165: When Ranger is enabled, Telemetry Publisher fails to export Hive payloads from Data Hub due to missing Ranger client dependencies in the Telemetry Publisher classpath.
- This issue has been resolved by adding the necessary dependencies to the classpath.
- OPSAPS-74379 and OPSAPS-74375: When creating a compressed archive of an input directory, an open input stream was not closed before a file was deleted. This could lead to filesystem errors, such as the creation of .nfs files.
- The issue is now resolved by ensuring the input stream for each file is closed when adding it to the archive.
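The fix described above amounts to closing each file's stream before the source file can be deleted. A hedged illustration of the pattern in Java (not the actual Cloudera Manager code):
```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ArchiveSketch {
    // Compress every regular file under inputDir into zipFile, closing each
    // input stream before moving on, so a later deletion of the source file
    // cannot leave stale handles (such as .nfs files on NFS mounts).
    public static void archive(Path inputDir, Path zipFile) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile));
             Stream<Path> files = Files.walk(inputDir)) {
            for (Path p : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                zos.putNextEntry(new ZipEntry(inputDir.relativize(p).toString()));
                // try-with-resources closes the stream even if a copy fails.
                try (InputStream in = Files.newInputStream(p)) {
                    in.transferTo(zos);
                }
                zos.closeEntry();
            }
        }
    }
}
```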
- OPSAPS-72439, OPSAPS-74265: HDFS and Hive external tables replication policies failed when using custom krb5.conf files
- The issue appeared because the custom krb5.conf file was not propagated to the required files. To mitigate this issue, complete the instructions in Step 13 in Using a custom Kerberos configuration path before you run the replication policies.
- OPSAPS-73602, OPSAPS-74360: HDFS replication policies to cloud failed with HTTP 400 error
- The HDFS replication policies to cloud were failing after you edited the replication policies. This issue is fixed.
- OPSAPS-74040, OPSAPS-74057: Ozone OBS replication fails due to pre-filelisting check failure
- During OBS-to-OBS Ozone replication, if the source bucket was a linked bucket, the replication failed during the Run Pre-Filelisting Check step with the error message Source bucket is a linked bucket, however the bucket it points to is also a link, even when the source bucket directly linked to a regular (non-linked) bucket.
Ozone OBS-to-OBS replication no longer fails when the source or the target bucket is a link bucket. (A link bucket resides in the s3v volume and refers to another bucket in s3v or any other volume.)
- OPSAPS-73655, OPSAPS-74060: Cloud replication failed after the delegation token was issued
- When you chose the option during the replication policy creation process, HDFS and Hive external table replication policies from an on-premises cluster to the cloud failed while incremental replication was in progress (that is, the source paths of the replication were snapshottable directories and the bootstrap replication run was complete). This issue is fixed.
- OPSAPS-74276: RocksDB JNI library was loaded from the same place by multiple Ozone components
- By default, Ozone roles define a separate directory from which to load the RocksDB shared library, and clean it up separately from each other on the same host, unless the environment already defines the ROCKSDB_SHAREDLIB_DIR variable through a safety valve as suggested in the workaround for OPSAPS-67650. After this change, that workaround becomes obsolete. The new directory resides within the directories used by the Cloudera Manager Agent to manage the Ozone-related processes.
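For reference, the now-obsolete OPSAPS-67650 workaround set the variable through an Ozone environment safety valve; the directory shown below is illustrative:
```
ROCKSDB_SHAREDLIB_DIR=/var/lib/rocksdb-native
```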
- Fixed Common Vulnerabilities and Exposures
- For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.11.3 cumulative hotfix 17, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.11.3 cumulative hotfixes.
The repositories for Cloudera Manager 7.11.3 CHF17 are listed in the following table:
| Repository Type | Repository Location |
| --- | --- |
| RHEL 9 Compatible | Repository: Repository File: |
| RHEL 8 Compatible | Repository: Repository File: |
| RHEL 7 Compatible | Repository: Repository File: |
| SLES 15 | Repository: Repository File: |
| SLES 12 | Repository: Repository File: |
| Ubuntu 22 | Repository: Repository File: |
| Ubuntu 20 | Repository: Repository File: |