Cloudera Manager 7.11.3 Cumulative hotfix 1

Know more about the Cloudera Manager 7.11.3 cumulative hotfix 1.

This cumulative hotfix was released on November 2, 2023.

New features and changed behavior for Cloudera Manager 7.11.3 CHF1 (version: 7.11.3.2-46642574):
Cloudera Navigator role instances under the Cloudera Management Service are no longer available when you use the Cloudera Runtime 7.1.9 CHF1 version.
You must first migrate Cloudera Navigator to Atlas before you upgrade from CDH 6.x + Cloudera Manager 6.x / 7.x to CDP 7.1.9 CHF1 + Cloudera Manager 7.11.3 Latest cumulative hotfix. For more information, see Migrating from Cloudera Navigator to Atlas using Cloudera Manager 6 and Migrating from Cloudera Navigator to Atlas using Cloudera Manager 7.
OpenJDK 17 (TCK certified) support for Cloudera Manager 7.11.3 CHF1 on supported operating systems

Cloudera Manager 7.11.3 CHF1 now supports OpenJDK 17 (TCK certified) on RHEL 7, RHEL 8, RHEL 9, Ubuntu 20, and SLES 15.

You must upgrade to Cloudera Manager 7.11.3 CHF1 or higher before you upgrade to OpenJDK 17 (TCK certified).
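After switching cluster hosts to OpenJDK 17, a quick check such as the following can confirm that each host resolves to the expected JDK. This is only a minimal sketch and is not part of the documented upgrade procedure; the explicit JDK path shown below is an assumed example.

# Confirm the default java on this host is OpenJDK 17 (sketch only).
java -version 2>&1 | head -n 1      # expect: openjdk version "17.x.x"
# If Cloudera Manager is configured with an explicit Java Home Directory,
# check that path as well (the path below is an assumed example).
/usr/lib/jvm/java-17-openjdk/bin/java -version 2>&1 | head -n 1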

Replicate Hive external tables in Dell EMC Isilon storage clusters using Hive external table replication policies
You can use Hive external table replication policies in CDP Private Cloud Base Replication Manager to replicate Hive external tables between Dell EMC Isilon storage clusters where the 7.1.9 clusters use Cloudera Manager 7.11.3 CHF1 or higher versions.
The following are the known issues and their corresponding workarounds for Cloudera Manager 7.11.3 CHF1 (version: 7.11.3.2-46642574):
OPSAPS-69806: Collection of YARN diagnostic bundle will fail

For any combination of Cloudera Manager versions from 7.11.3 up to 7.11.3 CHF7 with CDP 7.1.7 through CDP 7.1.8, collection of the YARN diagnostic bundle fails and no data is transmitted.

Upgrade to CDP 7.1.9, or downgrade to Cloudera Manager 7.7.1.

OPSAPS-68845: Cloudera Manager Server fails to start after the Cloudera Manager upgrade
From Cloudera Manager 7.11.3 up to Cloudera Manager 7.11.3 CHF7, the Cloudera Manager Server fails to start after the Cloudera Manager upgrade because Navigator user roles are improperly handled during the upgrade in some scenarios.
None
OPSAPS-69406: Cannot edit existing HDFS and HBase snapshot policy configuration
The Edit Configuration modal window does not appear when you click Actions > Edit Configuration on the Cloudera Manager > Replication > Snapshot Policies page for existing HDFS or HBase snapshot policies.
None.
OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services to the existing cluster later, you must manually add the respective service user to the livy_admin_users configuration on the Livy configuration page.
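As an alternative to the configuration page, the same change can be made through the Cloudera Manager REST API. The following is only a sketch; the API version (v54), host, credentials, cluster name, Livy service name, and the user list are assumptions that you must adjust for your deployment, and the submitted value replaces the current livy_admin_users list, so include any existing entries.

# Set Livy's admin user list through the Cloudera Manager REST API
# (sketch; adjust host, credentials, API version, cluster, service name,
# and the full user list to your environment).
curl -u admin:admin -X PUT \
  -H "Content-Type: application/json" \
  -d '{"items":[{"name":"livy_admin_users","value":"livy,zeppelin,knox"}]}' \
  "https://cm-host:7183/api/v54/clusters/Cluster%201/services/livy/config"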

OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations

If the CDP cluster has LDAP group to OS group mapping enabled, then applications running in Spark or Yarn would fail to authenticate to the LDAP server when trying to use the LDAP bind account during the LDAP group search.

This is because the LDAP bind password was not passed to the /etc/hadoop/conf/core-site.xml file. This was intended behavior to prevent leaking the LDAP bind password in a clear text field.

Set the LDAP Bind password through the HDFS client configuration safety valve by completing the following steps. A verification sketch follows these steps.
  1. On the Cloudera Manager UI, navigate to the HDFS service by clicking the HDFS service under the cluster.
  2. Click the Configuration tab and search for the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml configuration parameter.

  3. Add an entry with the following values:
    • Name = hadoop.security.group.mapping.ldap.bind.password
    • Value = (Enter the LDAP bind password here)
    • Description = Password for LDAP bind account
  4. Click the Save Changes button to save the safety valve entry.

  5. Follow the instructions in Manually Redeploying Client Configuration Files to manually deploy the client configuration files to the cluster.
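After the client configuration is redeployed, a quick check such as the following can confirm that the property reached a host. This is a sketch only; /etc/hadoop/conf is the default client configuration path and is an assumption, so adjust it if your deployment uses a different location.

# Verify the LDAP bind password property is present in the deployed
# client configuration (sketch; assumes the default client config path).
grep -A1 "hadoop.security.group.mapping.ldap.bind.password" \
  /etc/hadoop/conf/hdfs-site.xml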

OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, comment out or disable the require_secure_transport line in the configuration file located at /etc/my.cnf.
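A minimal sketch of that temporary change is shown below. It assumes the configuration file is /etc/my.cnf and that the database runs as the mariadb systemd service; adjust both to your environment and apply the change during a maintenance window.

# Temporarily disable enforced TLS connections in MariaDB 10.6
# (sketch; back up the file first and adjust path and service name as needed).
sudo cp /etc/my.cnf /etc/my.cnf.bak
sudo sed -i 's/^require_secure_transport/#require_secure_transport/' /etc/my.cnf
sudo systemctl restart mariadb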

OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager

Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 for Cloudera Manager RPM/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.

After installing Azul Open JDK 8 or 11, you must run the following commands on all the hosts in the cluster:
Azul Open JDK 8
RHEL or SLES
# sudo ln -s /usr/lib/jvm/java-8-zulu-openjdk-jdk /usr/lib/jvm/java-8-openjdk
Ubuntu or Debian
# sudo ln -s /usr/lib/jvm/zulu-8-amd64 /usr/lib/jvm/java-8-openjdk
Azul Open JDK 11
For DEBs only
# sudo ln -s /usr/lib/jvm/zulu-11-amd64 /usr/lib/jvm/java-11
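As a quick follow-up check (not part of the documented workaround), you can confirm on each host that the symlink resolves to the Zulu installation and that the JDK is usable from the default path; the Azul Open JDK 8 paths below are the ones created by the commands above.

# Confirm the symlink points at the Zulu installation and is usable
# (sketch; run on each host after creating the link).
ls -l /usr/lib/jvm/java-8-openjdk
/usr/lib/jvm/java-8-openjdk/bin/java -version
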
OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in Cloudera Manager chart builder or queried using the Cloudera Manager API.
Contact Cloudera support for a workaround.
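For reference, a query for one of the affected metrics through the Cloudera Manager time-series API looks like the following; while this issue is present, such a query returns an error or no data. This is a sketch only, and the host, port, credentials, and API version are assumptions.

# Example query for an affected Kafka Connect metric through the
# Cloudera Manager time-series API (sketch; host and API version assumed).
curl -u admin:admin \
  "https://cm-host:7183/api/v54/timeseries?query=SELECT+kafka_connect_connector_task_metrics_batch_size_avg"
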
OPSAPS-68559: On-premises to on-premises Hive external replication won't work with a cloud target
You cannot use Hive external table replication policies to replicate Hive external tables from one on-premises cluster to another on-premises cluster when the policy uses an external account to replicate the Hive data to the cloud.
None
OPSAPS-68658: Source Ozone service ID used as target
Ozone replication policies fail when the Ozone service ID is different on the source and destination clusters because Ozone replication uses the destination Ozone service ID during the path normalization process.
None
OPSAPS-68698: Replication command type is incorrectly reported for Ozone incremental replications
When you create an Ozone replication policy using “Incremental only” or “Incremental with fallback to full file listing” Listing types, sometimes the Ozone replication command type is incorrectly reported for different types of runs.
None
OPSAPS-42908: "User:hdfs not allowed to do DECRYPT_EEK" error appears for Hive external table replication policies
When you run Hive external table replication policies on clusters using Ranger KMS, the “User:hdfs not allowed to do 'DECRYPT_EEK'” error appears when you do not use the hdfs username.
Edit the Hive external table replication policy, and configure the Advanced > Directory for metadata file field to a new directory that is not encrypted. The replication policy uses this directory to store the transient data during Hive replication.
OPSAPS-69897: NPE in Ozone replication from CM 7.7.1 to CM 7.11.3
When you use source Cloudera Manager 7.7.1 and target Cloudera Manager 7.11.3 for Ozone replication policies, the policies fail with Failure during PreOzoneCopyListingCheck execution: null error. This is because the target Cloudera Manager 7.11.3 does not retrieve the required source bucket information for validation from the source Cloudera Manager 7.7.1 during the PreCopyListingCheck command phase. You come across this error when you use source Cloudera Manager versions lower than 7.10.1 and target Cloudera Manager versions higher than or equal to 7.10.1 in an Ozone replication policy.
Upgrade the source Cloudera Manager to 7.11.3 or higher version.
CDPD-62464: Java process called by the nav2atlas.sh tool fails on JDK 8
While running the nav2atlas.sh script on Oracle JDK 8, an error message is thrown and the script returns code 0 on an unsuccessful run.
You must install JDK 11 on the host. Make sure not to put it on the default path or in JAVA_HOME. In a shell, set JAVA_HOME to the JDK 11 location and run the nav2atlas.sh script, as in the sketch below.
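A minimal sketch of that shell setup follows; the JDK 11 installation path is an assumed example, and the script should be run with the arguments you normally use.

# Point the current shell at the separately installed JDK 11 and run the
# script (sketch; the installation path below is an assumed example).
export JAVA_HOME=/opt/jdk-11
export PATH="$JAVA_HOME/bin:$PATH"
./nav2atlas.sh
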
CDPD-62834: Status of the deleted table is seen as ACTIVE in Atlas after the completion of navigator2atlas migration process
The status of the deleted table displays as ACTIVE.
None
CDPD-62837: During the navigator2atlas process, the hive_storagedesc is incomplete in Atlas
For the hive_storagedesc entity, some of the attributes are not populated.
None
The following are the fixed issues shipped in Cloudera Manager 7.11.3 CHF1 (version: 7.11.3.2-46642574):
OPSAPS-68664: Added support for JDK 17 in HDFS.
This issue is resolved.
OPSAPS-68550: Ozone Canary failing with unknown option --skipTrash.
This issue is resolved.
OPSAPS-66023: Error message about an unsupported ciphersuite while upgrading or installing cluster with the latest FIPS compliance

When upgrading or installing a FIPS enabled cluster, Cloudera Manager is unable to download the new CDP parcel from the Cloudera parcel archive.

Cloudera Manager displays the following error message:

HTTP ERROR 400 java.net.ConnectException: Unsupported ciphersuite TLS_EDH_RSA_WITH_3DES_EDE_CBC_SHA

This issue is now fixed by correcting the ciphersuite selection.
OPSAPS-65504: Upgraded Apache Ivy version

The Apache Ivy version is upgraded from 2.x.x to 2.5.1 to fix CVE issues.

OPSAPS-68500: The cloudera-manager-installer.bin fails to reach Ubuntu 20 repository on the Archive URL due to redirections

Agent installation with Cloudera Manager on the Ubuntu 20 platform does not work when the self-installer method (using the installer.bin file) is employed to install Cloudera Manager. The Cloudera Manager Agent installation step fails with an error message saying "The repository 'https://archive.cloudera.com/p/cm7/7.11.3/ubuntu2004/apt focal-cm7 InRelease' is not signed."

This issue is fixed now.

OPSAPS-68422: Incorrect HBase shutdown command can lead to inconsistencies

Cloudera Manager uses an incomplete stop command when you stop the HBase service or the corresponding roles on a 7.1.8 or higher Private Cloud cluster. Due to this, Cloudera Manager cannot gracefully stop the processes and kills them after a set timeout, which could lead to metadata corruption.

This issue is fixed now.

OPSAPS-68506: Knox CSD changes for readiness check

A readiness endpoint was added to determine whether Knox is ready to receive traffic. Cloudera Manager checks the state of Knox after startup to reduce downtime during rolling restarts.

OPSAPS-68424: Impala: CM Agent unable to extract logs to TP export directory

Impala queries were not displaying in the Cloudera Observability and Workload XM web User Interfaces. This was due to an internal error that was stopping Cloudera Manager from pushing the Impala profile to the Telemetry Publisher logs directory.

This issue is now fixed and the Telemetry Publisher’s log extraction has been re-enabled.
OPSAPS-68697: Error while generating email template resulting in an inability to trigger mail notification

Cloudera Observability and Workload XM were unable to trigger an email notification when an Impala query matched the Auto Action’s alert threshold value.

This problem occurred when both the following conditions were met:
  • The Auto Action is triggered for an Impala Scope.
  • The length of the Impala query on which the action event is triggered is less than 36 characters.

This issue is now fixed.

OPSAPS-69480: Hardcode MR add-opens-as-default config
When Cloudera Manager is upgraded to 7.11.3, if the CDP cluster is not 7.1.9, then the YARN Container Usage Aggregation job fails.
Add the following property to the MapReduce Client Advanced Configuration Snippet (Safety Valve) for mapred-site.xml, as shown in the sketch after this entry:
  • Name: mapreduce.jvm.add-opens-as-default
  • Value: false
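For reference, that entry corresponds to the following XML fragment in the deployed mapred-site.xml. This is only a sketch of what the safety valve emits once the client configuration is redeployed; it is shown here with a shell heredoc for readability.

# XML fragment produced by the safety valve entry above in the deployed
# mapred-site.xml (sketch only).
cat <<'EOF'
<property>
  <name>mapreduce.jvm.add-opens-as-default</name>
  <value>false</value>
</property>
EOF
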
OPSAPS-68798: Auto Actions not using proxy while connecting to DBUS

The Cloudera Observability and Workload XM Auto Actions feature was not recognizing the proxy server credentials, even when they were correct and the proxy server was enabled in Telemetry Publisher.

This issue is now fixed.
Known issue:
OPSAPS-68629: HDFS HTTPFS Gateway is not able to start with custom krb5.conf location set in Cloudera Manager.
On a cluster with a custom krb5.conf file location configured in Cloudera Manager, the HDFS HTTPFS role is not able to start because it does not have the custom Kerberos configuration file setting properly propagated to the service, and therefore it fails with a Kerberos related exception:
in thread "main" java.io.IOException: Unable to initialize WebAppContext
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1240)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start(HttpFSServerWebServer.java:131)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main(HttpFSServerWebServer.java:162)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm
  at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:71)
  at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:329)
  at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:380)
  at org.apache.hadoop.lib.service.hadoop.FileSystemAccessService.init(FileSystemAccessService.java:166)
  at org.apache.hadoop.lib.server.BaseService.init(BaseService.java:71)
  at org.apache.hadoop.lib.server.Server.initServices(Server.java:581)
  at org.apache.hadoop.lib.server.Server.init(Server.java:377)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100)
  at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158)
  at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073)
  at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
  at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002)
  at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:765)
  at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
  at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
  at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
  at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916)
  at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.server.Server.start(Server.java:423)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
  at org.eclipse.jetty.server.Server.doStart(Server.java:387)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1218)
  ... 2 more
Caused by: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm
  at java.security.jgss/javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:174)
  at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:108)
  at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:69)
  ...
  1. Log in to Cloudera Manager.
  2. Select the HDFS service.
  3. Select the Configuration tab.
  4. Search for the HttpFS Environment Advanced Configuration Snippet (Safety Valve).
  5. Add to or extend the HADOOP_OPTS environment variable with the following value: -Djava.security.krb5.conf=<the custom krb5.conf location>. An example entry is shown after these steps.
  6. Click Save Changes.
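A minimal sketch of such a safety valve entry follows; the /etc/krb5-custom/krb5.conf path is only an assumed example of a custom location and must be replaced with the path configured in your environment.

# Example HttpFS Environment safety valve entry (sketch; the custom
# krb5.conf path below is an assumed example).
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/etc/krb5-custom/krb5.conf"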

The repositories for Cloudera Manager 7.11.3-CHF1 are listed in the following table:

Table 1. Cloudera Manager 7.11.3-CHF1
RHEL 9 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat9/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat9/yum/cloudera-manager.repo
RHEL 8 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat8/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat8/yum/cloudera-manager.repo
RHEL 7 Compatible
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat7/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat7/yum/cloudera-manager.repo
SLES 15
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/sles15/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/sles15/yum/cloudera-manager.repo
SLES 12
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/sles12/yum
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/sles12/yum/cloudera-manager.repo
Ubuntu 20
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/ubuntu2004/apt
  Repository File: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/ubuntu2004/apt/cloudera-manager.repo
IBM PowerPC RHEL 8
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat8-ppc/yum
IBM PowerPC RHEL 9
  Repository: https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat9-ppc/yum
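
As a usage sketch, on a RHEL 8 Compatible host the repository file can be downloaded into yum's repository directory before installing or upgrading the Cloudera Manager packages; replace username:password with your Cloudera credentials. Using wget for the download is only one possible approach.

# Download the Cloudera Manager 7.11.3 CHF1 repository file on a RHEL 8
# host (sketch; substitute your credentials for username:password).
sudo wget -O /etc/yum.repos.d/cloudera-manager.repo \
  "https://username:password@archive.cloudera.com/p/cm7/patch/7.11.3.2-46642574/redhat8/yum/cloudera-manager.repo"
sudo yum clean all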