Cloudera Manager 7.11.3 Cumulative hotfix 8

Know more about the Cloudera Manager 7.11.3 cumulative hotfix 8.

This cumulative hotfix was released on August 27, 2024.

New features and changed behavior for Cloudera Manager 7.11.3 CHF8 (version: 7.11.3.16-56304673):
Added ability in the Cloudera Manager Agent's config.ini file to disable filesystem checks.

In Cloudera Manager Agent 7.11.3 CHF8 and higher versions, a new optional configuration flag, monitor_filesystems, is available. You can set this flag in the Cloudera Manager Agent config.ini file (found in /etc/cloudera-scm-agent/config.ini).

You can add the monitor_filesystems flag to the config.ini file before upgrading the Cloudera Manager Agent to disable monitoring of filesystems, as shown in the example after this list:
  • The monitor_filesystems flag determines whether the agent monitors the filesystems.
  • If the flag is set to True, the Cloudera Manager Agent monitors the filesystems.
  • If the flag is set to False, the Cloudera Manager Agent does not monitor any filesystems. If the flag is not included in the file, it defaults to True, and the Cloudera Manager Agent behavior matches that of previous versions.
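For example, a minimal config.ini entry to disable filesystem monitoring might look like the following sketch (the [General] section placement and the lowercase boolean value are assumptions based on the standard Agent config.ini layout; adjust to match your file):

  [General]
  # Illustrative: disable filesystem monitoring in the Cloudera Manager Agent
  monitor_filesystems=false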
The following list describes the known issues and their corresponding workarounds for Cloudera Manager 7.11.3 CHF8 (version: 7.11.3.16-56304673):
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services later to the existing cluster, you must manually add the respective service user to the livy_admin_users configuration in the Livy configuration page.
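For example, assuming the default service users, the updated livy_admin_users value might look like the following (the user names are illustrative; use the actual service users configured in your cluster):

  livy_admin_users = livy,zeppelin,knox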

OPSAPS-69847: Replication policies might fail if source and target use different Kerberos encryption types

Replication policies might fail if the source and target Cloudera Manager instances use different Kerberos encryption types because of different Java versions. For example, Java 11 and higher versions might use the aes256-cts encryption type, while versions lower than Java 11 might use the rc4-hmac encryption type.

Ensure that both instances use the same Java version. If it is not possible to have the same Java version on both instances, ensure that they use the same encryption type for Kerberos. To check the encryption type in Cloudera Manager, search for krb_enc_types on the Cloudera Manager > Administration > Settings page.
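For example, one way to align both instances is to set krb_enc_types to the same single encryption type on both Cloudera Manager instances; the value below is illustrative and must be supported by your KDC and Java versions:

  krb_enc_types = aes256-cts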

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, you can either comment out or disable the line require_secure_transport in the configuration file located at /etc/my.cnf.
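A minimal sketch of the change in /etc/my.cnf follows (the [mysqld] section placement is an assumption; restart MariaDB for the change to take effect):

  [mysqld]
  # Either comment out the line that enforces TLS-only connections...
  # require_secure_transport=ON
  # ...or explicitly disable it:
  require_secure_transport=OFF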

OPSAPS-70771: Running Ozone replication policy does not show performance reports
During an Ozone replication policy run, the "A server error has occurred. See Cloudera Manager server log for details" error message appears when you click:
  • Performance Reports > OZONE Performance Summary or Performance Reports > OZONE Performance Full on the Replication Policies page.
  • Download CSV on the Replication History page to download any report.
None
OPSAPS-70704: Kerberos connectivity check does not work as expected with JDK17 when you add Cloudera Manager peers
When you add a source Cloudera Manager that supports JDK17, the Kerberos connectivity check fails and the Error while reading /etc/krb5.conf on <hostname ; for all hosts>... error message appears.
None
OPSAPS-70713: Error appears when running Atlas replication policy if source or target clusters use Dell EMC Isilon storage
You cannot create an Atlas replication policy between clusters if one or both the clusters use Dell EMC Isilon storage.
None
OPSAPS-70297: Optional 'Run As User' for HBase initial snapshot export process
If you configure an IDBroker-based external account on the source CDP Private Cloud Base cluster and want to use it in an HBase replication policy in CDP Public Cloud Replication Manager, you must map the hbase user to the IAM role that has access to the target S3 bucket using the source Cloudera Manager > Clusters > Knox service > Instances > Kerberos Proxy Block property. Also, you can replicate HBase data using that replication policy to only one target COD cluster.
None
CDPD-53185: Clear REPL_TXN_MAP table on target cluster when deleting a Hive ACID replication policy
The entry in the REPL_TXN_MAP table on the target cluster is retained when the following conditions are true:
  1. A Hive ACID replication policy is replicating a transaction that requires multiple replication cycles to complete.
  2. The replication policy and databases used in it get deleted on the source and target cluster even before the transaction is completely replicated.

In this scenario, if you create a database using the same name as the deleted database on the source cluster, and then use the same name for the new Hive ACID replication policy to replicate the database, the replicated database on the target cluster is tagged as ‘database incompatible’. This happens after the housekeeper thread process (that runs every 11 days for an entry) deletes the retained entry.

Create another Hive ACID replication policy with a different name for the new database.
OPSAPS-71592: Replication Manager does not read the default value of “ozone_replication_core_site_safety_valve” during Ozone replication policy run
During the Ozone replication policy run, Replication Manager does not read the value in the ozone_replication_core_site_safety_valve advanced configuration snippet if it is configured with the default value.
To mitigate this issue, you can use one of the following methods:
  • Remove some or all the properties in ozone_replication_core_site_safety_valve, and move them to ozone-conf/ozone-site.xml_service_safety_valve.
  • Add a dummy property with no value in ozone_replication_core_site_safety_valve. For example, add <property><name>dummy_property</name><value></value></property> (shown formatted after this list), save the changes, and run the Ozone replication policy.
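Formatted for readability, the dummy property added to ozone_replication_core_site_safety_valve looks like the following (the property name is a placeholder; any otherwise unused name works):

  <property>
    <name>dummy_property</name>
    <value></value>
  </property>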
The following list describes the fixed issues in Cloudera Manager 7.11.3 CHF8 (version: 7.11.3.16-56304673):
OPSAPS-68845: Cloudera Manager Server fails to start after the Cloudera Manager upgrade
From the Cloudera Manager 7.11.3 version up to the Cloudera Manager 7.11.3 CHF7 version, the Cloudera Manager Server failed to start after the Cloudera Manager upgrade because Navigator user roles were improperly handled during the upgrade in some scenarios. This issue is fixed now by removing the extra Navigator roles.
OPSAPS-70419: The Livy3 server lacks the necessary Iceberg configurations in spark-defaults.

Attempting to query an Iceberg table using Livy3 failed with the Error in loading storage handler.org.apache.iceberg.mr.hive.HiveIcebergStorageHandler error. The same query was successful when executed using spark3-shell.

Now Iceberg is added to the Livy3 classpath.

OPSAPS-69806: Collection of the YARN diagnostic bundle fails

For any combination of Cloudera Manager 7.11.3 up to Cloudera Manager 7.11.3 CHF7 with CDP 7.1.7 through CDP 7.1.8, collection of the YARN diagnostic bundle fails and no data is transmitted.

Cloudera Manager is now updated to allow the collection of the YARN diagnostic bundle and make this operation successful.

OPSAPS-70831: Regenerate missing keytab option listed for ECS/Docker instances, even though it is a no-op
On the Cloudera Manager UI, the Regenerate missing Keytab option is now hidden for Docker and ECS instances, while it remains visible for all other service types.
OPSAPS-70655: The hadoop-metrics2.properties file is not getting generated into the ranger-rms-conf folder
The hadoop-metrics2.properties file was getting created in the conf folder of the process directory, for example, conf/hadoop-metrics2.properties, whereas the directory structure in Ranger RMS should be {process_directory}/ranger-rms-conf/hadoop-metrics2.properties.
This issue is fixed now. The directory name is changed from conf to ranger-rms-conf, so that the hadoop-metrics2.properties file is created under the correct directory structure.
OPSAPS-71014: Auto action email content generation failed for some cluster(s) while loading the template file

The issue has been fixed by using a more appropriate template loader class in the FreeMarker configuration.

OPSAPS-70826: Ranger replication policies fail when target cluster uses Dell EMC Isilon storage and supports JDK17

Ranger replication policies no longer fail if the target cluster is deployed with Dell EMC Isilon storage and also supports JDK17.

OPSAPS-70861: HDFS replication policy creation process fails for Isilon source clusters

When you choose a source CDP Private Cloud Base cluster using the Isilon service and a target cloud storage bucket for an HDFS replication policy in CDP Private Cloud Base Replication Manager UI, the replication policy creation process fails. This issue is fixed now.

OPSAPS-70708: Cloudera Manager Agent not skipping autofs filesystems during filesystem check

On clusters with a large number of network mounts on each host (for example, more than 100 networked filesystem mounts), the startup of the Cloudera Manager Agent can take a long time, on the order of 10 to 20 seconds per mount point. This is because the OS kernel on the cluster host interrogates each network mount on behalf of the Cloudera Manager Agent to gather monitoring information such as filesystem usage.

This issue is fixed now by adding the ability in the Cloudera Manager Agent's config.ini file to disable filesystem checks.

OPSAPS-68991: Change default SAML response binding to HTTP-POST

The default SAML response binding was HTTP-Artifact rather than HTTP-POST. HTTP-POST is designed for handling responses through the POST method, whereas HTTP-Artifact requires a direct connection between the service provider (SP, Cloudera Manager in this case) and the Identity Provider (IDP) and is rarely used. HTTP-POST should be the default choice instead.

This issue is fixed now by setting the new default SAML binding to HTTP-POST.

OPSAPS-68353: Ozone Canary in Cloudera Manager Service Monitor uses keystore to store S3 secret.
Ozone Basic Canary now uses the S3 secret stored in the Cloudera Manager keystore instead of sending a request to Ozone. If the S3 secret is not available in the keystore, a request is sent to Ozone for the S3 secret credentials, and the secret is then stored in the keystore.
OPSAPS-40169: Audits page does not list failed login attempts on applying Allowed = false filter

The Audits page in Cloudera Manager showed failed login attempts when no filter was applied. However, when the Allowed = false filter was applied, it returned 0 results instead of listing those failed login attempts. This issue is fixed now.

OPSAPS-70583: File Descriptor leak from Cloudera Manager 7.11.3 CHF3 version to Cloudera Manager 7.11.3 CHF7

A NettyTransceiver could not be created because of an Avro library upgrade, which led to a file descriptor leak. The file descriptor leak occurred in Cloudera Manager when a service tried to communicate with the Event Server over Avro. This issue is fixed now.

OPSAPS-70962: Creating a cloud restore HDFS replication policy with a peer cluster as the destination, which is not supported by Replication Manager

During the HDFS replication policy creation process, incorrect Destination clusters and MapReduce services appeared which, when chosen, created a dummy replication policy to replicate from a cloud account to a remote peer cluster. This scenario is not supported by Replication Manager. This issue is now fixed.

OPSAPS-71108: Use the earlier format of PCR

You can use the latest version of the PCR (Post Copy Reconciliation) script, or you can restore PCR to the earlier format by setting the com.cloudera.enterprise.distcp.post-copy-reconciliation.legacy-output-format.enabled=true key-value pair in the Cloudera Manager > Clusters > HDFS service > Configuration > hdfs_replication_hdfs_site_safety_valve property.
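For example, because hdfs_replication_hdfs_site_safety_valve holds XML property entries, the key-value pair can be expressed as follows (formatting is illustrative):

  <property>
    <name>com.cloudera.enterprise.distcp.post-copy-reconciliation.legacy-output-format.enabled</name>
    <value>true</value>
  </property>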

OPSAPS-71005: RemoteCmdWork is using a single-threaded executor

By default, Replication Manager runs the remote commands for a replication policy through a single-thread executor. You can search and enable the enable_multithreaded_remote_cmd_executor property in the target Cloudera Manager > Administration > Settings page to run future replication policies through the multi-threaded executor. This action improves the processing performance of the replication workloads.

Additionally, you can change the multithreaded_remote_cmd_executor_max_threads and multithreaded_remote_cmd_executor_keepalive_time properties to fine-tune the replication policy performance.
OPSAPS-70689: Enhanced performance of DistCp CRC check operation
When a MapReduce job for an HDFS replication policy fails, or when there are target-side changes during a replication job, Replication Manager initiates the bootstrap replication process. During this process, a cyclic redundancy check (CRC) is performed by default to determine whether a file can be skipped for replication.

By default, the CRC for each file is queried by the mapper (running on the target cluster) from the source cluster's NameNode. The round trip between the source and target cluster for each file consumes network resources and raises the cost of execution. To improve the performance of the CRC check, set the following variables to true in the Cloudera Manager > Clusters > HDFS service > Configuration > HDFS_REPLICATION_ENV_SAFETY_VALVE property on the target cluster:

  • ENABLE_FILESTATUS_EXTENSIONS
  • ENABLE_FILESTATUS_CRC_EXTENSIONS

By default, these are set to false.
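For example, the entries in the HDFS_REPLICATION_ENV_SAFETY_VALVE property might look like the following key-value pairs:

  ENABLE_FILESTATUS_EXTENSIONS=true
  ENABLE_FILESTATUS_CRC_EXTENSIONS=true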

After you set the key-value pairs, the CRC for each file is queried locally from the NameNode on the source cluster and copied over to the target cluster at the end of the replication process, which reduces the cost because the round trip is between two nodes of the same cluster. The CRC checksums are written to the file listing files.

OPSAPS-70685: Post Copy Reconciliation (PCR) for HDFS replication policies between on-premises clusters
To add the Post Copy Reconciliation (PCR) script to run as a command step during the HDFS replication policy job run, you can enter the SCHEDULES_WITH_ADDITIONAL_DEBUG_STEPS = [***ENTER COMMA-SEPARATED LIST OF NUMERICAL IDS OF THE REPLICATION POLICIES***] key-value pair in the target Cloudera Manager > Clusters > HDFS service > hdfs_replication_env_safety_valve property.
To run the PCR script on the HDFS replication policy, use the /clusters/[***CLUSTER NAME***]/services/[***SERVICE***]/replications/[***SCHEDULE ID***]/postCopyReconciliation API.
For more information about the PCR script, see How to use the post copy reconciliation script for HDFS replication policies.
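A sketch of invoking that endpoint with curl follows; the host, port, credentials, API version prefix, and HTTP method (POST) are assumptions that you should verify against the Cloudera Manager API documentation for your release:

  curl -X POST -u admin:admin \
    "https://cm-host.example.com:7183/api/v54/clusters/Cluster1/services/hdfs/replications/1234/postCopyReconciliation"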
Fixed Common Vulnerabilities and Exposures
For information about Common Vulnerabilities and Exposures (CVE) that are fixed in Cloudera Manager 7.11.3 cumulative hotfix 8, see Fixed Common Vulnerabilities and Exposures in Cloudera Manager 7.11.3 cumulative hotfixes.

The repositories for Cloudera Manager 7.11.3-CHF8 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF8
Repository Type: RHEL 9 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat9/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat9/yum/cloudera-manager.repo
Repository Type: RHEL 8 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat8/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat8/yum/cloudera-manager.repo
Repository Type: RHEL 7 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat7/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/redhat7/yum/cloudera-manager.repo
Repository Type: SLES 15
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/sles15/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/sles15/yum/cloudera-manager.repo
Repository Type: SLES 12
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/sles12/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/sles12/yum/cloudera-manager.repo
Repository Type: Ubuntu 20
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/ubuntu2004/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/ubuntu2004/apt/cloudera-manager.list
Repository Type: Ubuntu 22
  Repository: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/ubuntu2204/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/7.11.3.16/ubuntu2204/apt/cloudera-manager.list