Cloudera Manager 7.11.3 Cumulative hotfix 13
Learn more about Cloudera Manager 7.11.3 cumulative hotfix 13.
This cumulative hotfix was released on March 7, 2025.
- OPSAPS-69339: Deleting VERSION file, bootstrap file, certificates and keys after OM decommissioning
- After you run the Ozone Manager decommissioning command, the VERSION file, bootstrap file, certificates, and keys are deleted.
- ENGESC-30503, OPSAPS-74868: Cloudera Manager has limited support for custom external repositories requiring basic authentication
- Cloudera Manager currently does not support custom external repositories with basic authentication (the Cloudera Manager wizard supports either HTTP (non-secured) repositories or the Cloudera repository at https://archive.cloudera.com only). Customers who use a custom external repository with basic authentication might get errors, as the reachability check sketched below illustrates.
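A quick way to confirm that a repository demands credentials that Cloudera Manager cannot supply (a diagnostic sketch, not a supported workaround; the repository URL and credentials are placeholders):

```bash
# Without credentials, a repository behind basic authentication returns HTTP 401:
curl -I https://repo.example.com/cm7/7.11.3/redhat8/yum/repodata/repomd.xml

# With credentials, the same request returns HTTP 200; the Cloudera Manager
# wizard, however, has no field for supplying these credentials:
curl -I -u REPO_USER:REPO_PASSWORD \
  https://repo.example.com/cm7/7.11.3/redhat8/yum/repodata/repomd.xml
```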
- OPSAPS-60726: Newly saved parcel URLs are not showing up on the Parcels page in a Cloudera Manager HA cluster
- To safely manage parcels in a Cloudera Manager HA environment, follow these steps:
- Shut down the passive Cloudera Manager server.
- Add and manage the parcel as usual, as described in Install Parcels.
- Restart the passive Cloudera Manager server after the parcel operations are complete.
- OPSAPS-73211: Cloudera Manager 7.11.3 does not clean up the Python path, which prevents Hue from starting
- When you upgrade from Cloudera Manager 7.7.1 or lower to Cloudera Manager 7.11.3 or higher with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue needs Python 2.7.
This happens because Cloudera Manager never cleans up the Python path, so when Hue tries to start, the Python path points to 3.8, which Hue does not support on CDP Private Cloud Base 7.1.7.x. A quick diagnostic is sketched below.
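A minimal diagnostic sketch (not an official workaround) to see which interpreters the environment resolves before Hue starts; the interpreter paths are typical defaults and may differ on your hosts:

```bash
# Show any stale interpreter entries left on the Python path:
echo "$PYTHONPATH"

# Hue on CDP Private Cloud Base 7.1.7.x requires Python 2.7:
/usr/bin/python2.7 --version   # interpreter Hue needs
/usr/bin/python3.8 --version   # interpreter Cloudera Manager incorrectly forces
```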
- OPSAPS-72984: Alerts due to a change in hostname-fetching behavior between JDK 8 and JDK 11
- Upgrading from JDK 8 to JDK 11 creates the following alert in CMS:
Bad : CMSERVER:pit666.slayer.mayank: Reaching Cloudera Manager Server failed
This happens due to a change in how JDK 11 fetches the hostname:
```
[root@pit666.slayer ~]# /usr/lib/jvm/java-1.8.0/bin/java GetHostName
Hostname: pit666.slayer.mayank
[root@pit666.slayer ~]# /usr/lib/jvm/java-11/bin/java GetHostName
Hostname: pit666.slayer
```
Notice that under JDK 11 the hostname is the short name instead of the FQDN. The check sketched below shows how to compare the two forms on a host.
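An illustrative host-level check (hostnames follow the example above) comparing the short name with the fully qualified name that name resolution returns:

```bash
# Short name vs. fully qualified name as the OS reports them:
hostname        # e.g. pit666.slayer
hostname -f     # e.g. pit666.slayer.mayank

# What name resolution returns for this host:
getent hosts "$(hostname)"
```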
- OPSAPS-73011: Wrong parameter in the /etc/default/cloudera-scm-server file
- If Cloudera Manager needs to be installed in High Availability mode (two or more nodes), the CMF_SERVER_ARGS parameter in the /etc/default/cloudera-scm-server file is missing the word "export" before it (the file contains only CMF_SERVER_ARGS= and not export CMF_SERVER_ARGS=), so the parameter cannot be used correctly. The fix is sketched below.
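A minimal sketch of the corrected line in /etc/default/cloudera-scm-server; the empty value is a placeholder, so keep whatever arguments your installation requires:

```bash
# /etc/default/cloudera-scm-server
# The variable must be exported for the Cloudera Manager Server process to see it:
export CMF_SERVER_ARGS=""
```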
- OPSAPS-65377: Cloudera Manager - Host Inspector not finding Psycopg2 on Ubuntu 20 or RHEL 8.x when Psycopg2 version 2.9.3 is installed
- Host Inspector fails with a Psycopg2 version error while upgrading to Cloudera Manager 7.11.3 or a Cloudera Manager 7.11.3 CHF-x version. When you run the Host Inspector, you get a "Not finding Psycopg2" error even though Psycopg2 is installed on all hosts. A quick version check is sketched below.
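An illustrative check, run on each affected host, of which Psycopg2 version the default Python 3 interpreter actually sees (the interpreter name is an assumption and may differ on your hosts):

```bash
# Print the Psycopg2 version visible to the default Python 3 interpreter:
python3 -c "import psycopg2; print(psycopg2.__version__)"
```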
- OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
- The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command (an illustrative call follows the list below). In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
- The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
- The target Cloudera Manager version is 7.11.3 through 7.11.3 CHF10, or 7.13.0.0 or later with the API_OZONE_REPLICATION_USING_PROXY_USER feature flag disabled.
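A hypothetical invocation of the affected endpoint; the host, port, API version, cluster name, credentials, and parameter encoding (query string versus request body) are all assumptions and may differ in your deployment:

```bash
# Calling runOzoneCommand with the getOzoneBucketInfo command fails on the
# affected version combinations listed above:
curl -u ADMIN_USER:ADMIN_PASSWORD -X POST \
  "https://cm-host.example.com:7183/api/v54/clusters/Cluster1/runOzoneCommand?command=getOzoneBucketInfo"
```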
- OPSAPS-72784: Upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher versions fail with a health check timeout exception
- If you are using Cloudera Manager 7.11.3 cumulative hotfix 13 and upgrading from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or higher, the upgrade fails with a CMUpgradeHealthException timeout exception. This is because upgrades from CDH 6 to CDP Private Cloud Base 7.1.9 SP1 or to any of its cumulative hotfix versions are not supported.
- OPSAPS-68340: Zeppelin paragraph execution fails with the "User not allowed to impersonate" error
- Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you later add the Zeppelin or Knox service to the existing cluster and do not manually update the service user, the "User not allowed to impersonate" error is displayed. A sketch of updating the configuration follows.
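A minimal sketch of updating livy_admin_users through the Cloudera Manager configuration API; the host, port, API version, cluster name, credentials, and user list are placeholders, and whether the property is exposed at the service level is also an assumption (it can equally be set on the Livy configuration page in the Cloudera Manager UI):

```bash
# Add the Zeppelin service user to Livy's admin users (illustrative values):
curl -u ADMIN_USER:ADMIN_PASSWORD -X PUT \
  -H "Content-Type: application/json" \
  -d '{"items":[{"name":"livy_admin_users","value":"livy,zeppelin"}]}' \
  "https://cm-host.example.com:7183/api/v54/clusters/Cluster1/services/livy/config"
```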
- OPSAPS-69847: Replication policies might fail if source and target use different Kerberos encryption types
- Replication policies might fail if the source and target Cloudera Manager instances use different encryption types in Kerberos because of different Java versions. For example, Java 11 and higher versions might use the aes256-cts encryption type, while versions lower than Java 11 might use the rc4-hmac encryption type. One way to align the encryption types is sketched below.
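An illustrative /etc/krb5.conf fragment that permits the same encryption types on both clusters; the exact type list is an assumption and must match your KDC policy:

```ini
# /etc/krb5.conf (fragment): allow both encryption types on source and target
[libdefaults]
  default_tkt_enctypes = aes256-cts rc4-hmac
  default_tgs_enctypes = aes256-cts rc4-hmac
  permitted_enctypes   = aes256-cts rc4-hmac
```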
- OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode
- MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. The problem is observed in High Availability (HA) mode, where certain operations may not use the same connection. A configuration sketch follows.
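A minimal /etc/my.cnf sketch that restores the MariaDB 10.4 behavior by allowing non-TLS connections; apply it only if your security policy permits plain connections (the alternative is to configure TLS for every client connection):

```ini
# /etc/my.cnf (fragment): MariaDB 10.6 enables this by default; turning it
# off allows the non-TLS connections that HA mode may open
[mysqld]
require_secure_transport = OFF
```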
- OPSAPS-70771: Running replication policy runs must not allow you to download the performance reports
- During a replication policy run, the "A server error has occurred. See Cloudera Manager server log for details" error message appears on the UI, and the Cloudera Manager log shows "java.lang.IllegalStateException: Command has no result data." when you click either of the following to download a report:
- the performance report option on the Replication Policies page
- Download CSV on the Replication History page
- OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
- You cannot create an Atlas replication policy between clusters if one or both clusters use Dell EMC Isilon storage.
- CDPD-53185: Clear REPL_TXN_MAP table on target cluster when deleting a Hive ACID replication policy
- The entry in the REPL_TXN_MAP table on the target cluster is retained when the following conditions are true:
- A Hive ACID replication policy is replicating a transaction that requires multiple replication cycles to complete.
- The replication policy and databases used in it get deleted on the source and target cluster even before the transaction is completely replicated.
In this scenario, if you create a database using the same name as the deleted database on the source cluster, and then use the same name for the new Hive ACID replication policy to replicate the database, the replicated database on the target cluster is tagged as ‘database incompatible’. This happens after the housekeeper thread process (that runs every 11 days for an entry) deletes the retained entry.
- OPSAPS-73655: Cloud replication fails after the delegation token is issued
- HDFS and Hive external table replication policies from an
on-premises cluster to cloud fail when the following conditions are true:
- You choose the option during the replication policy creation process.
- Incremental replication is in progress, that is, the source paths of the replication are snapshottable directories and the bootstrap replication run is complete.
- DMX-3973: Ozone replication policy with linked bucket as destination fails intermittently
- When you create an Ozone replication policy using a linked/non-linked source cluster bucket and a linked target bucket, the replication policy fails during the "Trigger a OZONE replication job on one of the available OZONE roles" step.
- OPSAPS-68143: Ozone replication policy fails for empty source OBS bucket
- An Ozone incremental replication policy for an OBS bucket fails during the “Run File Listing on Peer cluster” step when the source bucket is empty.
- OPSAPS-72447, CDPD-76705: Ozone incremental replication fails to copy renamed directory
- Ozone incremental replication using Ozone replication policies succeeds but might fail to sync nested renames for FSO buckets.
- OPSAPS-74398: Ozone and HDFS replication policies might fail when you use different destination proxy user and source proxy user
- HDFS on-premises to on-premises replication fails when the following conditions are true:
- You configure different Run As Username and Run on Peer as Username values during the replication policy creation process.
- The user configured in Run As Username does not have permission to access the source path on the source HDFS (see the permission check sketched below).
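An illustrative permission check, run on the source cluster as the user configured in Run As Username; the user name and path are placeholders, and a Kerberized cluster additionally needs a valid ticket for that user:

```bash
# Verify that the Run As Username user can actually read the replication source path:
sudo -u RUN_AS_USER hdfs dfs -ls /SOURCE/PATH
```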
- OPSAPS-72804: For recurring policies, the interval is overwritten to 1 after the replication policy is edited
- When you edit an Atlas, Iceberg, Ozone, or Ranger replication policy that has a recurring schedule on the Replication Manager UI, the Edit Replication Policy modal window appears as expected. However, the frequency of the policy is reset to "1" of whatever unit you set in the replication policy. For example, if you set the replication policy to run every four hours, it is reset to every one hour after you edit the replication policy.
- OPSAPS-67197: Ranger RMS server shows as healthy without service being accessible
- Because Ranger RMS is a web service, it might fail to initialize due to other issues and become inaccessible, yet the Ranger RMS service was still shown as healthy because Cloudera Manager only monitored the process identification number (PID).
This issue is fixed now. Health-status canary support was added for the Ranger RMS service: the canary connects to RMS at specific intervals and shows an alert on the Cloudera Manager UI if RMS is not reachable.
- OPSAPS-72632: Cloudera Manager - Stale service restart API call is failing
- When there is a configuration change for the Cloudera Management Service (CMS), process staleness detection for the CMS does not work. This issue is fixed now.
- OPSAPS-71933: Telemetry Publisher is unable to publish Spark event logs to Observability when multiple History Servers are set up in the Spark service.
- This issue is now resolved by adding the support for multiple Spark History Server deployments in Telemetry Publisher.
- OPSAPS-69622: Cannot view the correct number of files copied for Ozone replication policies
- The last run of an Ozone replication policy does not show the correct number of files copied during the policy run when you load the page after the Ozone replication policy run completes successfully. This issue is fixed now.
- OPSAPS-72795: Do not allow multiple Ozone services in a cluster
- It was possible to configure multiple Ozone services in a single cluster, which can cause irreversible damage to a running cluster. With this fix, you can install only one Ozone service per cluster.
- OPSAPS-72767: Install Oozie ShareLib Cloudera Manager command fails on FIPS and FedRAMP clusters
- The Install Oozie ShareLib command using Cloudera Manager fails to execute on FIPS and FedRAMP clusters. This issue is fixed now.
- CDPD-53160: Incorrect job run status appears for subsequent Hive ACID replication policy runs after the replication policy fails
- When a Hive ACID replication policy run fails with the FAILED_ADMIN status, the subsequent Hive ACID replication policy runs incorrectly show the SKIPPED status instead of FAILED_ADMIN. This issue is fixed now.
- OPSAPS-71566: The polling logic of RemoteCmdWork goes down if the remote Cloudera Manager goes down
- When the remote Cloudera Manager goes down or when there are network failures, RemoteCmdWork stops polling. To ensure that the daemon continues to poll even when there are network failures or the remote Cloudera Manager goes down, you can set the remote_cmd_network_failure_max_poll_count=[*** ENTER REMOTE EXECUTOR MAX POLL COUNT***] parameter on the corresponding configuration page. Note that the actual timeout is given by a piecewise constant (step) function of the poll count: polls 1 through 11 use a 5-second interval, 12 through 17 use 1 minute, 18 through 35 use 2 minutes, 36 through 53 use 5 minutes, 54 through 74 use 8 minutes, 75 through 104 use 15 minutes, and so on. Therefore, when you enter 1, polling continues for 5 seconds after the Cloudera Manager goes down or after a network failure; when you set it to 75, polling continues for 15 minutes. The mapping is sketched below.
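The step function described above, written out as a small illustrative script (the breakpoints are taken directly from the description; the function itself is not part of Cloudera Manager):

```bash
# Map a poll count to the polling interval used at that point:
poll_interval() {
  c=$1
  if   [ "$c" -le 11 ];  then echo "5 seconds"
  elif [ "$c" -le 17 ];  then echo "1 minute"
  elif [ "$c" -le 35 ];  then echo "2 minutes"
  elif [ "$c" -le 53 ];  then echo "5 minutes"
  elif [ "$c" -le 74 ];  then echo "8 minutes"
  elif [ "$c" -le 104 ]; then echo "15 minutes"
  else echo "15 minutes or longer (the pattern continues)"
  fi
}

poll_interval 1    # -> 5 seconds
poll_interval 75   # -> 15 minutes
```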
- Fixed Common Vulnerabilities and Exposures
- For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.11.3 cumulative hotfix 13, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.11.3 cumulative hotfixes.
The repositories for Cloudera Manager 7.11.3-CHF 13 are listed in the following table:

| Repository Type | Repository Location |
|---|---|
| RHEL 9 Compatible | Repository: Repository File: |
| RHEL 8 Compatible | Repository: Repository File: |
| RHEL 7 Compatible | Repository: Repository File: |
| SLES 15 | Repository: Repository File: |
| SLES 12 | Repository: Repository File: |
| Ubuntu 22 | Repository: Repository File: |
| Ubuntu 20 | Repository: Repository File: |