Cloudera Manager 7.13.1 Cumulative hotfix 1
Learn more about Cloudera Manager 7.13.1 cumulative hotfix 1.
This cumulative hotfix was released on March 18, 2025.
- OPSAPS-68890: Secure Approach for Passing a Token in Cloudera Manager
- You can now securely manage the secret token for the LLM hosting service through Cloudera Manager. Previously, the secret token had to be stored as plain text in Hue’s safety valve configuration. This enhancement improves security and compliance.
- OPSAPS-72663: Replace the Rolling Restart with Restart during ECS upgrade
- The Restart operation is enabled again in ECS, so you can perform a Restart on the ECS cluster, its services, and its roles. Restart is a combination of the Stop and Start operations. In addition, the Rolling Restart that followed an ECS upgrade is now a simple Restart.
- OPSAPS-72584: Add Services Health Check to the ECS Pre-Upgrade UI
- A list of pre-upgrade checks is added that runs after you choose the upgrade version. This checklist verifies whether your cluster is ready for the upgrade.
- CDPD-79725: Hive fails to start after Data Hub restart due to high memory usage
-
After you restart the Cloudera Data Hub, the services appear to be down in the Cloudera Manager UI. The Cloudera Management Console reports a node failure error for the master node.
The issue is caused by the high memory usage of the G1 garbage collector on Java 17, which leads to insufficient memory and moves the Cloudera clusters to an error state.
Starting with Cloudera 7.3.1.0, Java 17 is the default runtime instead of Java 8, and its memory management increases memory usage, potentially affecting system performance. Clusters might report error states, and logs might show insufficient memory exceptions.
- OPSAPS-72706: Hive queries fail after upgrading Cloudera Manager from 7.11.2 to 7.11.3 or later
- Upgrading Cloudera Manager from version 7.11.2 or earlier to 7.11.3 or later causes Hive queries to fail because of JDK 17 restrictions. Some JDK 8 options are deprecated, which leads to inaccessible classes and exceptions such as:
java.lang.reflect.InaccessibleObjectException: Unable to make field private volatile java.lang.String java.net.URI.string accessible
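For background, JDK 17 raises this exception when code reflectively accesses a JDK-internal field without the owning package being opened to the caller's module; such access is normally only possible when the JVM is started with an option of the following form (shown here only as an illustration of the mechanism, not as a documented workaround for this issue):
--add-opens java.base/java.net=ALL-UNNAMED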
- OPSAPS-72998: Missing charts for HMS event APIs
- Charts for HMS event APIs (get_next_notification, get_current_notificationEventId, and fire_listener_event) are missing in Cloudera Manager.
- OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.
-
Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add the Zeppelin or Knox services to the existing cluster later and do not manually update the service user, the User not allowed to impersonate error is displayed.
- OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode
-
MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf); this property is absent in MariaDB 10.4. The setting prohibits non-TLS connections, which leads to access issues. This problem is observed in High Availability (HA) mode, where certain operations might not use the same connection.
- OPSAPS-70771: Running Ozone replication policy does not show performance reports
- During an Ozone replication policy run, the A server error has occurred. See Cloudera Manager server log for details error message appears when you click:
- the option to view performance reports on the Replication Policies page, or
- Download CSV on the Replication History page to download any report.
- OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
- You cannot create an Atlas replication policy between clusters if one or both clusters use Dell EMC Isilon storage.
- CDPD-53185: Clear REPL_TXN_MAP table on target cluster when deleting a Hive ACID replication policy
- The entry in the REPL_TXN_MAP table on the target cluster is retained when the following conditions are true:
- A Hive ACID replication policy is replicating a transaction that requires multiple replication cycles to complete.
- The replication policy and the databases used in it are deleted on the source and target clusters before the transaction is completely replicated.
In this scenario, if you create a database using the same name as the deleted database on the source cluster, and then use the same name for the new Hive ACID replication policy to replicate the database, the replicated database on the target cluster is tagged as ‘database incompatible’. This happens after the housekeeper thread process (that runs every 11 days for an entry) deletes the retained entry.
- DMX-3973: Ozone replication policy with linked bucket as destination fails intermittently
- When you create an Ozone replication policy using a linked/non-linked source cluster bucket and a linked target bucket, the replication policy fails during the "Trigger a OZONE replication job on one of the available OZONE roles" step.
- OPSAPS-68143: Ozone replication policy fails for empty source OBS bucket
- An Ozone incremental replication policy for an OBS bucket fails during the “Run File Listing on Peer cluster” step when the source bucket is empty.
- CDPD-76705: Ozone incremental replication fails to copy renamed directory
- Ozone incremental replication using Ozone replication policies succeeds but might fail to sync nested renames for FSO buckets.
- OPSAPS-72756: The runOzoneCommand API endpoint fails during the Ozone replication policy run
- The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the getOzoneBucketInfo command. In this scenario, the Ozone replication policy runs also fail if the following conditions are true:
- The source Cloudera Manager version is 7.11.3 CHF11 or 7.11.3 CHF12.
- The target Cloudera Manager version is 7.11.3 through 7.11.3 CHF10, or 7.13.0.0 or later with the API_OZONE_REPLICATION_USING_PROXY_USER feature flag disabled.
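A minimal sketch of calling this endpoint to check whether a deployment is affected is shown below; the API version, port, credentials, and passing the command as a query parameter are assumptions, not details taken from this note.
# Hypothetical check against the runOzoneCommand endpoint named above.
# The API version (v54), port, credentials, and the "command" query
# parameter are assumptions; consult the Cloudera Manager API documentation
# for the exact request shape in your release.
import requests

cm_host = "cm.example.com"      # hypothetical Cloudera Manager host
cluster = "Cluster1"            # hypothetical cluster name

response = requests.post(
    f"https://{cm_host}:7183/api/v54/clusters/{cluster}/runOzoneCommand",
    params={"command": "getOzoneBucketInfo"},   # command named in this note
    auth=("admin", "changeme"),                  # placeholder credentials
    verify=False,                                # test environments only
)
print(response.status_code, response.text)       # a server error here may indicate this issue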
- CDPD-53160: Incorrect job run status appears for subsequent Hive ACID replication policy runs after the replication policy fails
- When a Hive ACID replication policy run fails with the FAILED_ADMIN status, the subsequent Hive ACID replication policy runs incorrectly show the SKIPPED status instead of FAILED_ADMIN. It is recommended that you check the Hive ACID replication policy runs if multiple subsequent policy runs show the SKIPPED status.
- OPSAPS-72804: For recurring replication policies, the interval is overwritten to 1 after the replication policy is edited
- When you edit an Atlas, Iceberg, Ozone, or Ranger replication policy that has a recurring schedule on the Replication Manager UI, the Edit Replication Policy modal window appears as expected. However, the frequency of the policy is reset to run at “1” unit, where the unit depends on what you have set in the replication policy. For example, if you have set the replication policy to run every four hours, it is reset to one hour when you edit the replication policy.
- CDPQE-36126: Iceberg replication fails when source and target clusters use different nameservice names
- When you run an Iceberg replication policy between clusters where the source and target clusters use different nameservice names, the replication policy fails.
- OPSAPS-72369: Update snapshot default configuration for enabling ordered snapshot deletion
- This issue is now resolved by changing the default configuration value on Cloudera Manager.
- OPSAPS-72215: The ECS Cloudera Manager UI configuration for the Docker certificate cannot accept new lines, preventing a new registry certificate from being updated in the correct format
- There is no direct way to update the external Docker certificate in the Cloudera Manager UI for ECS because newlines are removed when the field is saved. Certificates can now be uploaded by adding the '\n' character for each newline: when you update the Docker certificate through the Cloudera Manager UI configuration, add '\n' to specify a newline character in the certificate. Example:
-----BEGIN CERTIFICATE-----\nMIIERTCCAy2gAwIBAgIUIL8o1MjD5he7nZKKa/C8rx9uPjcwDQYJKoZIhvcNAQEL\nBQAwXTELMAkGA1UEBhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExEzARBgNVBAcM\nClNhbnRhQ2xhcmExETAPBgNVBAoMCENsb3VkZXJhMREwDwYDVQQLDAhDbG91ZGVy\nYTAeFw0yNDAzMTExMjU5NDVaFw0zNDAzMDkxMjU5NDVaMF0xCzAJBgNVBAYTAlVT\nMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApTYW50YUNsYXJhMREwDwYD\nVQQKDAhDbG91ZGVyYTERMA8GA1UECwwIQ2xvdWRlcmEwggEiMA0GCSqGSIb3DQEB\nAQUAA4IBDwAwggEKAoIBAQDcuxGszWmzVnWCwDICnlxUBtO+Ps2jxQ7C7kIj\nTHTaQ2kGl/ZzQOJBpYT/jFmiQGPSKb4iLSxed+Xk5xAOkNWDIL+Hlf5txjkw/FTf\nHiyWep9DaQDF07M/Cl3nb8JmpRyA5fKYpVbJAFIEXOhTxrcnH/4o5ubLM7mHVXwY\nafoPD5AuiOD/I+xxmqb/x+fKtHzY1eEzDb2vjjDJBRqxpHvg/S4hHsgZJ7wU7wg+\nPk4uPV3O83h9NI+b4SOwXunuKRCCh4dRKm8/Qw4f7tDFdCAIubvO1AGtfyJJp9xR\npMIjhIuna1K2TnPQomdoIy/KqrFFzVaHevyinEnRLG2NAgMBAAGjgfwwgfkwHQYD\nVR0OBBYEFHWX21/BhL5J5kNpxmb8FmDchlmBMIGaBgNVHSMEgZIwgY+AFHWX21/B\nhL5J5kNpxmb8FmDchlmBoWGkXzBdMQswCQYDVQQGEwJVUzETMBEGA1UECAwKQ2Fs\naWZvcm5pYTETMBEGA1UEBwwKU2FudGFDbGFyYTERMA8GA1UECgwIQ2xvdWRlcmEx\nETAPBgNVBAsMCENsb3VkZXJhghQgvyjUyMPmF7udkopr8LyvH24+NzAMBgNVHRME\nBTADAQH/MAsGA1UdDwQEAwIC/DAPBgNVHREECDAGhwQKgW26MA8GA1UdEgQIMAaH\nBAqBbbowDQYJKoZIhvcNAQELBQADggEBAMks+sY+ETaPzFLg2PolUT4GeXEqnGl\nSmZzIkiA6l2DCYQD/7mTLd5Ea63oI78fxatRnG5MLf5aHVLs4W+WYhoP6B7HLPUo\nNGPJviRBHtUDRYVpD5Q0hhQtHB4Q1H+sgrE53VmbIQqLPOAxvpM//oJCFDT8NbOI\n+bTJ48N34ujosjNaiP6x09xbzRzOnYd6VyhZ/pgsiRZ4qlZsVyv1TImP9VpHcC7P\nukxNuBdXBS3jEXcvEV1Eq4Di+z6PIWoPIHUunQ9P0akYEvbXuL88knM5FNhS6YBP\nGd91KkGdz6srRIVRiF+XP0e6IwZC70kkWiwf8vX/CuR64ZQxc30ot70=\n-----END CERTIFICATE-----\n
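For example, a small script along the following lines produces that single-line form from a PEM file; the file name is a placeholder, not from this note.
# Read a PEM certificate and replace real line breaks with literal '\n'
# sequences so the value can be pasted into the ECS Docker certificate
# field in the Cloudera Manager UI. "registry-ca.pem" is a placeholder.
with open("registry-ca.pem") as f:
    pem = f.read()

single_line = pem.strip().replace("\n", "\\n") + "\\n"
print(single_line)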
- OPSAPS-72662: UID (User ID) conflicts for Kubernetes containers, because the Kubernetes containers use user ID 1001, which is a common UID in a Unix environment
-
This issue is now fixed by using a large UID, such as 1000001, to reduce UID conflicts.
Using large UIDs (User IDs) for Kubernetes containers is a recommended security practice because it helps minimize the risk of a container compromising the host system. Assigning a high UID reduces the chance of conflicts with existing user accounts on the host, particularly if the container is compromised and attempts to access host files or escalate privileges. In essence, a large UID ensures the container operates with restricted permissions on the host system. Therefore, when creating the CLI pod in Cloudera Manager, the runAsUser value should be set to an integer greater than 1,000,000. To avoid UID conflicts, it is advisable to use a UID such as 1000001.
- OPSAPS-72559: Incorrect error messages appear for Hive ACID replication policies
- Replication Manager now shows the correct error messages for every Hive ACID replication policy run as expected. This issue is fixed now.
- OPSAPS-72509: Hive metadata transfer to GCS fails with ClassNotFoundException
- Hive external table replication policies from an on-premises cluster to cloud failed during the Transfer Metadata Files step when the target is on Google Cloud and the source Cloudera Manager version is 7.11.3 CHF7, 7.11.3 CHF8, 7.11.3 CHF9, 7.11.3 CHF9.1, 7.11.3 CHF10, or 7.11.3 CHF11. This issue is fixed.
- OPSAPS-72558, OPSAPS-72505: Replication Manager chooses incorrect target cluster for Iceberg, Atlas, and Hive ACID replication policies
- When a Cloudera Manager instance managed multiple clusters, Replication Manager picked the first cluster in the list as the Destination during the Iceberg, Atlas, and Hive ACID replication policy creation process, and the Destination field was non-editable. You can now edit the replication policy to change the target cluster in these scenarios.
- OPSAPS-72468: Subsequent Ozone OBS-to-OBS replication policy runs do not skip replicated files during replication
- Replication Manager now skips the already replicated files during subsequent Ozone replication policy runs after you add the following key-value pairs in the advanced configuration snippet:
- com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check = [***ENTER COMMA-SEPARATED LIST OF OZONE REPLICATION POLICIES’ ID or ENTER all TO APPLY TO ALL OZONE REPLICATION POLICIES***]
The advanced snippet skips the already replicated files when the relative file path, file name, and file size are equal, and ignores the modification times.
- com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check = [***ENTER true OR false***]
When you add both key-value pairs, the subsequent Ozone replication policy runs skip replicating files when the matching file on the target has the same relative file path, file name, and file size, and the source file’s modification time is less than or equal to the target file’s modification time.
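For illustration, a hypothetical configuration that applies the check to the replication policies with IDs 3 and 7, and also requires the source modification time to be no later than the target's, would look like this (the IDs are placeholders, not values from this note):
com.cloudera.enterprise.distcp.ozone-schedules-with-unsafe-equality-check = 3,7
com.cloudera.enterprise.distcp.require-source-before-target-modtime-in-unsafe-equality-check = true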
- OPSAPS-72214: Cannot create a Ranger replication policy if the source and target cluster names are not the same
- You could not create a Ranger replication policy if the source cluster and target cluster names were not the same. This issue is fixed.
- OPSAPS-71853: The Replication Policies page does not load the replication policies’ history
- When the sourceService is null for a Hive ACID replication policy, the Cloudera Manager UI fails to load the existing replication policies’ history details and the current state of the replication policies on the Replication Policies page. This issue is fixed now.
- OPSAPS-72181: The Apply Host Template operation checks for an active command on the service; if the active command takes a long time (for example, a long-running replication command), the Apply Host Template operation is also delayed.
- This issue is now fixed for the scenario where the host template contains only gateway roles: in that case, the Apply Host Template operation does not check for, or wait on, any active command on the service. If the host template contains roles other than gateway roles, the behavior remains the same.
- OPSAPS-72249: Oozie database dump fails on JDK17
- Oozie database dump and load commands could not be executed from Cloudera Manager with JDK 17. This issue is fixed now.
- OPSAPS-72276: Cannot edit Ozone replication policy if the MapReduce service is stale
- You could not edit an Ozone replication policy in Replication Manager if the MapReduce service did not load completely. This issue is fixed.
- OPSAPS-71932: Ranger HDFS plugin resource lookup issue
-
On a JDK 17 Isilon cluster, users were not able to create a new policy under cm_hdfs. The connection failed with the following error message: cannot access class sun.net.util.IPAddressUtil
The issue is fixed now: the sun.net.util package was added to the Ranger Admin Java opts for JDK 17.
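The exact Java option is not quoted in this note; for reference, exposing a JDK-internal package such as sun.net.util to all unnamed modules on JDK 17 is typically done with an option of the following form (an illustration, not the literal value added by the fix):
--add-exports java.base/sun.net.util=ALL-UNNAMED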
- OPSAPS-71907: Solr auditing URL changed port
- The Solr auditing URL generated for Ranger plugin services in the Data Hub cluster is correct when both the local ZooKeeper and the Data Lake ZooKeeper have ssl_enabled enabled. However, if the ssl_enabled parameter is disabled on the local ZooKeeper in the Data Hub, the Solr auditing URL changed the port to 2181. The fix fetches the Solr auditing URL from the Data Lake data context on the Data Hub, resolving this rare, corner-case issue where Solr auditing uses port 2181 when the ZooKeeper ssl_enabled parameter is disabled.
- OPSAPS-71666: Replication Manager uses the required property values in the “ozone_replication_core_site_safety_valve” in the source Cloudera Manager during Ozone replication policy run
- During an Ozone replication policy run, Replication Manager obtains the required properties and their values from the ozone_replication_core_site_safety_valve. It then adds the new properties and their values, and overrides the values of existing properties in the core-site.xml file. Replication Manager uses this file during the Ozone replication policy run.
- OPSAPS-71659: Ranger replication policy failed because of incorrect source to destination service name mapping
- Ranger replication policy failed during the transform step because of incorrect source to destination service name mapping. This issue is fixed now.
- OPSAPS-71642: GflagConfigFileGenerator removes the = sign in the Gflag configuration file when the configuration value passed in the advanced safety valve is empty
- If you add the file_metadata_reload_properties configuration in the advanced safety valve with an = sign and an empty value, GflagConfigFileGenerator removes the = sign from the generated Gflag configuration file. This issue is fixed now.
- OPSAPS-71592: Replication Manager does not read the default value of “ozone_replication_core_site_safety_valve” during Ozone replication policy run
- When the ozone_replication_core_site_safety_valve advanced configuration snippet is set to its default value, Replication Manager does not read its value during the Ozone replication policy run. To mitigate this issue, the default value of ozone_replication_core_site_safety_valve has been set to an empty value. If you have set any key-value pairs for ozone_replication_core_site_safety_valve, then these values are written to core-site.xml during the Ozone replication policy run.
- OPSAPS-71424: The 'configuration sanity check' step ignores the replication advanced configuration snippet values during the Ozone replication policy job run
- The OBS-to-OBS Ozone replication policy jobs failed when the S3 property values for fs.s3a.endpoint, fs.s3a.secret.key, and fs.s3a.access.key were empty in Ozone Service Advanced Configuration Snippet (Safety Valve) for ozone-conf/ozone-site.xml even when these properties were defined in Ozone Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml. This issue is fixed.
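As an illustration, the three S3 properties named above take the following form when added as key-value pairs in the replication advanced configuration snippet; the placeholder values are not from this note:
fs.s3a.endpoint = [***S3 ENDPOINT***]
fs.s3a.access.key = [***ACCESS KEY***]
fs.s3a.secret.key = [***SECRET KEY***]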
- OPSAPS-71256: The “Create Ranger replication policy” action shows 'TypeError' if no peer exists
- When you click the Create Ranger replication policy action and no peer exists, the TypeError: Cannot read properties of undefined error appears. This issue is fixed now.
- OPSAPS-71093: Validation on source for Ranger replication policy fails
- The Cloudera Manager page was logged out automatically when you created a Ranger replication policy. This happened because the source cluster did not support the getUsersFromRanger or getPoliciesFromRanger API requests. The issue is fixed now, and the required validation on the source completes successfully as expected.
- OPSAPS-70848: Hive external table replication policies succeed when the source cluster uses Dell EMC Isilon storage
- During the Hive external table replication policy run, the replication policy failed at the Hive Replication Export step. This issue is fixed now.
- OPSAPS-70822: Save the Hive external table replication policy on the ‘Edit Hive External Table Replication Policy’ window
- Replication Manager now saves the changes as expected when you click Save Policy after you edit a Hive replication policy. You edit a replication policy from the Replication Policies page.
- OPSAPS-70721: QueueManagementDynamicEditPolicy is not enabled when Auto Queue Deletion is enabled
- Whenever Auto Queue Deletion was enabled, the QueueManagementDynamicEdit policy was not enabled. This issue is fixed now, and when there are no applications running in a queue, its capacity is set to zero.
- OPSAPS-70449: After creating a new Dashboard from the Cloudera Manager UI, the Chart Title field was allowing Javascript as input
- In the Cloudera Manager UI, while creating a new plot object, the Chart Title field allowed Javascript as input. This allowed the user to execute a script, resulting in an XSS attack. This issue is fixed now.
- OPSAPS-69782: Exception appears if the peer Cloudera Manager's API version is higher than the local cluster's API version
- HBase replication using HBase replication policies in CDP Public Cloud Replication Manager between two Data Hub/COD clusters now succeeds as expected when all the following conditions are true:
- The destination Data Hub/COD cluster’s Cloudera Manager version is 7.9.0-h7 through 7.9.0-h9 or 7.11.0-h2 through 7.11.0-h4, or 7.12.0.0.
- The source Data Hub/COD cluster's Cloudera Manager major version is higher than the destination cluster's Cloudera Manager major version.
- The Initial Snapshot option is chosen during the HBase replication policy creation process and/or the source cluster is already participating in another HBase replication setup as a source or destination with a third cluster.
- OPSAPS-69622: Cannot view the correct number of files copied for Ozone replication policies
- The last run of an Ozone replication policy does not show the correct number of files copied during the policy run when you load the page after the Ozone replication policy run completes successfully. This issue is fixed now.
- OPSAPS-72143: Atlas replication policies fail if the source and target clusters support FIPS
- The Atlas replication policies fail during the Exporting atlas entities from remote atlas service step if the source and target clusters support FIPS. This issue is fixed now.
- OPSAPS-67498: The Replication Policies page takes a long time to load
- To ensure that the page loads faster, new query parameters have been added to the internal REST APIs that fetch the replication policies for the page, which improves pagination. Replication Manager also caches internal API responses to speed up the page load.
- OPSAPS-65371: Kudu user was not part of the cm_solr RANGER_AUDITS_COLLECTION policy
- The Kudu user was not part of the default cm_solr policy, which prevented writing any Kudu audit logs to Ranger Admin until the Kudu user was manually added to the policy. The issue is fixed now: the Kudu user has been added to the default cm_solr RANGER_AUDITS_COLLECTION policy, so the Kudu user no longer needs to be added manually to write audits to Ranger Admin.
- Fixed Common Vulnerabilities and Exposures
- For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.13.1 cumulative hotfix 1, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.13.1 and Cloudera Manager 7.13.1 cumulative hotfixes.
The repositories for Cloudera Manager 7.13.1-CHF 1 are listed in the following table:
| Repository Type | Repository Location |
| --- | --- |
| RHEL 9 Compatible | Repository: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/redhat9/yum; Repository File: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/redhat9/yum/cloudera-manager.repo |
| RHEL 8 Compatible | Repository: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/redhat8/yum; Repository File: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/redhat8/yum/cloudera-manager.repo |
| SLES 15 | Repository: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/sles15/yum; Repository File: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/sles15/yum/cloudera-manager.repo |
| Ubuntu 22 | Repository: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/ubuntu2204/apt; Repository File: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/ubuntu2204/apt/cloudera-manager.list |
| Ubuntu 20 | Repository: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/ubuntu2004/apt; Repository File: https://username:password@archive.cloudera.com/p/cm7/7.13.1.100/ubuntu2004/apt/cloudera-manager.list |