Cloudera Manager 7.11.3 Cumulative hotfix 1
Learn more about the Cloudera Manager 7.11.3 cumulative hotfix 1.
This cumulative hotfix was released on November 2, 2023.
- Cloudera Navigator role instances under the Cloudera Management Service are no longer available when you use the Cloudera Runtime 7.1.9 CHF1 version.
- You must migrate Cloudera Navigator to Atlas before you upgrade from CDH 6.x + Cloudera Manager 6.x / 7.x to CDP 7.1.9 CHF1 + Cloudera Manager 7.11.3 Latest cumulative hotfix. For more information, see Migrating from Cloudera Navigator to Atlas using Cloudera Manager 6 and Migrating from Cloudera Navigator to Atlas using Cloudera Manager 7.
- OpenJDK 17 (TCK certified) support for Cloudera Manager 7.11.3 CHF1 on supported operating systems
- Cloudera Manager 7.11.3 CHF1 now supports OpenJDK 17 (TCK certified) on RHEL 7, RHEL 8, RHEL 9, Ubuntu 20, and SLES 15. You must upgrade to Cloudera Manager 7.11.3 CHF1 or higher before upgrading to OpenJDK 17 (TCK certified).
- Replicate Hive external tables in Dell EMC Isilon storage clusters using Hive external table replication policies
- You can use Hive external table replication policies in CDP Private Cloud Base Replication Manager to replicate Hive external tables between Dell EMC Isilon storage clusters where the 7.1.9 clusters use Cloudera Manager 7.11.3 CHF1 or higher versions.
- OPSAPS-73211: Cloudera Manager 7.11.3 does not clean up the Python path, which prevents Hue from starting
- When you upgrade from Cloudera Manager 7.7.1 or lower versions to Cloudera Manager 7.11.3 or higher versions with CDP Private Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager forces Hue to start with Python 3.8, while Hue needs Python 2.7. This happens because Cloudera Manager does not clean up the Python path at any time, so when Hue tries to start, the Python path still points to 3.8, which Hue does not support in the CDP Private Cloud Base 7.1.7.x version.
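As a quick host-side check (an illustration, not an official workaround), you can inspect which interpreter the running Hue server's environment points at; the runcpserver process name is an assumption about how the Hue server appears in the process list:
HUE_PID=$(pgrep -f runcpserver | head -n 1)                              # assumed Hue server process name
tr '\0' '\n' < /proc/${HUE_PID}/environ | grep -E 'PYTHONPATH|^PATH='    # show the Python-related environment of the running process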
- OPSAPS-73011: Wrong parameter in the /etc/default/cloudera-scm-server file
- If Cloudera Manager needs to be installed in High Availability mode (two nodes or more, as explained here), the CMF_SERVER_ARGS parameter in the /etc/default/cloudera-scm-server file is missing the word "export" before it (the file contains only CMF_SERVER_ARGS= instead of export CMF_SERVER_ARGS=), so the parameter cannot be used correctly.
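A minimal sketch of the corrected entry; the value shown is only a placeholder for whatever server arguments your deployment already uses:
# /etc/default/cloudera-scm-server: the variable must be exported so the Cloudera Manager Server process picks it up
export CMF_SERVER_ARGS="..."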
- OPSAPS-72984: Alerts due to change in hostname fetching functionality in JDK 8 and JDK 11
- Upgrading Java from JDK 8 to JDK 11 creates the following alert in CMS:
Bad : CMSERVER:pit666.slayer.mayank: Reaching Cloudera Manager Server failed
This happens due to a functionality change in JDK 11 on hostname fetching.
[root@pit666.slayer ~]# /usr/lib/jvm/java-1.8.0/bin/java GetHostName
Hostname: pit666.slayer.mayank
[root@pit666.slayer ~]# /usr/lib/jvm/java-11/bin/java GetHostName
Hostname: pit666.slayer
Notice that with JDK 11 the hostname is set to the short name instead of the FQDN.
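As a quick sanity check (illustrative only), compare the short and fully qualified names that the host itself reports:
hostname      # short name, for example pit666.slayer
hostname -f   # fully qualified domain name, for example pit666.slayer.mayank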
- OPSAPS-65377: Cloudera Manager - Host Inspector not finding Psycopg2 on Ubuntu 20 or Red Hat 8.x when Psycopg2 version 2.9.3 is installed.
- Host Inspector fails with a Psycopg2 version error while upgrading to Cloudera Manager 7.11.3 or Cloudera Manager 7.11.3 CHF-x versions. When you run the Host Inspector, you get a "Not finding Psycopg2" error, even though it is installed on all hosts.
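To confirm what is actually installed on a host (an illustrative check, not an official workaround), query the module directly:
python3 -c 'import psycopg2; print(psycopg2.__version__)'   # prints the installed Psycopg2 version, for example 2.9.3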
- OPSAPS-71642: GflagConfigFileGenerator removes the = sign in the Gflag configuration file when the configuration value passed in the advanced safety valve is empty
- If you add the file_metadata_reload_properties configuration in the advanced safety valve with an = sign and an empty value, GflagConfigFileGenerator removes the = sign from the generated Gflag configuration file.
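For illustration (the flag name comes from the issue above; the rendered output is an assumption about how the generator emits it): an advanced safety valve entry such as
file_metadata_reload_properties=
ends up in the generated Gflag configuration file without the trailing = sign, so the flag no longer carries an explicitly empty value.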
- OPSAPS-69806: Collection of YARN diagnostic bundle will fail
- For any combination of Cloudera Manager 7.11.3 up to Cloudera Manager 7.11.3 CHF7, with CDP 7.1.7 through CDP 7.1.8, collection of the YARN diagnostic bundle fails and no data is transmitted.
- OPSAPS-68845: Cloudera Manager Server fails to start after the Cloudera Manager upgrade
- Starting from the Cloudera Manager 7.11.3 version up to the Cloudera Manager 7.11.3 CHF7 version, the Cloudera Manager Server fails to start after the Cloudera Manager upgrade because Navigator user roles are improperly handled during the upgrade in some scenarios.
- OPSAPS-69406: Cannot edit existing HDFS and HBase snapshot policy configuration
- The Edit Configuration modal window does not appear when you click on the page for existing HDFS or HBase snapshot policies.
- OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.
- Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add the Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the "User not allowed to impersonate" error is displayed.
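An illustrative value only (the property name comes from the issue above; which users to list is deployment-specific and is an assumption here): after adding Zeppelin or Knox later, the Livy admin users setting would need to include those service users, for example:
livy_admin_users=livy,zeppelin,knox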
- OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations
- If the CDP cluster has LDAP group to OS group mapping enabled, applications running in Spark or YARN fail to authenticate to the LDAP server when they try to use the LDAP bind account during the LDAP group search.
This is because the LDAP bind password is not passed to the /etc/hadoop/conf/core-site.xml file. This is intended behavior to prevent leaking the LDAP bind password in a clear-text field.
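For context, an assumption about the specific setting involved (the issue text does not name it): in Hadoop, the LDAP bind credential for group mapping is normally carried by the hadoop.security.group.mapping.ldap.bind.password property, so a client-side core-site.xml generated without an entry such as the following leaves the group search unable to bind:
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password</name>
  <value>***</value> <!-- illustrative placeholder; by design this value is not emitted to client configurations -->
</property>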
- OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode
- MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.
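For reference, the default that triggers this behavior looks like the following in the MariaDB 10.6 server configuration (file path as named above; whether to change it is a deployment decision, not an instruction from this note):
# /etc/my.cnf (MariaDB 10.6 default)
require_secure_transport=ON   # rejects non-TLS client connections; this line is absent in MariaDB 10.4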
- OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager
-
Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 for Cloudera Manager RPM/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.
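A minimal sketch of that symlink workaround; both paths below are examples only (your Zulu installation directory and the default JDK path that Cloudera Manager expects may differ):
# Link an assumed Zulu JDK 8 installation directory to an assumed default JDK path
ln -s /usr/lib/jvm/zulu-8 /usr/java/default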
- OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
- The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in Cloudera Manager chart builder or queried using the Cloudera Manager API.
- OPSAPS-68559: On-premises to on-premises Hive external replication won't work with a cloud target
- You cannot replicate Hive external tables using Hive external table replication policies from an on-premises cluster to another on-premises cluster with an external account to replicate the Hive data only to the cloud.
- OPSAPS-68658: Source Ozone service ID used as target
- Ozone replication policies fail when the Ozone service ID is different on the source and destination clusters because Ozone replication uses the destination Ozone service ID during the path normalization process.
- OPSAPS-68698: Replication command type is incorrectly reported for Ozone incremental replications
- When you create an Ozone replication policy using “Incremental only” or “Incremental with fallback to full file listing” Listing types, sometimes the Ozone replication command type is incorrectly reported for different types of runs.
- OPSAPS-42908: "User:hdfs not allowed to do DECRYPT_EEK" error appears for Hive external table replication policies
- When you run Hive external table replication policies on clusters using Ranger KMS, the “User:hdfs not allowed to do 'DECRYPT_EEK'” error appears when you do not use the hdfs username.
- OPSAPS-69897: NPE in Ozone replication from CM 7.7.1 to CM 7.11.3
- When you use source Cloudera Manager 7.7.1 and target Cloudera Manager 7.11.3 for Ozone replication policies, the policies fail with the "Failure during PreOzoneCopyListingCheck execution: null" error. This is because the target Cloudera Manager 7.11.3 does not retrieve the required source bucket information for validation from the source Cloudera Manager 7.7.1 during the PreCopyListingCheck command phase. You come across this error when you use source Cloudera Manager versions lower than 7.10.1 and target Cloudera Manager versions higher than or equal to 7.10.1 in an Ozone replication policy.
- CDPD-62464: Java process called by navatlas.sh tool fails on JDK-8 version
- While running the nav2atlas.sh script on Oracle JDK 8, an error message is thrown and the script returns code 0 on an unsuccessful run.
- CDPD-62834: Status of the deleted table is seen as ACTIVE in Atlas after the completion of navigator2atlas migration process
- The status of the deleted table displays as ACTIVE.
- CDPD-62837: During the navigator2atlas process, the hive_storagedesc is incomplete in Atlas
- For the hive_storagedesc entity, some of the attributes are not getting populated.
- OPSAPS-68664: Added support for JDK 17 in HDFS.
- This issue is resolved.
- OPSAPS-68550: Ozone Canary failing with unknown option --skipTrash.
- This issue is resolved.
- OPSAPS-66023: Error message about an unsupported ciphersuite while upgrading or installing a cluster with the latest FIPS compliance
-
When upgrading or installing a FIPS enabled cluster, Cloudera Manager is unable to download the new CDP parcel from the Cloudera parcel archive.
Cloudera Manager displays the following error message:
HTTP ERROR 400 java.net.ConnectException: Unsupported ciphersuite TLS_EDH_RSA_WITH_3DES_EDE_CBC_SHA
- OPSAPS-65504: Upgraded Apache Ivy version
- The Apache Ivy version is upgraded from 2.x.x to 2.5.1 to fix CVE issues.
- OPSAPS-68500: The cloudera-manager-installer.bin fails to reach Ubuntu 20 repository on the Archive URL due to redirections
- Agent installation with Cloudera Manager on the Ubuntu 20 platform does not work when the self-installer method (using the installer.bin file) is used to install Cloudera Manager. The Cloudera Manager Agent installation step fails with an error message saying "The repository 'https://archive.cloudera.com/p/cm7/7.11.3/ubuntu2004/apt focal-cm7 InRelease' is not signed."
This issue is fixed now.
- OPSAPS-68422: Incorrect HBase shutdown command can lead to inconsistencies
- Cloudera Manager uses an incomplete stop command when you stop the HBase service or the corresponding roles on a 7.1.8 or higher private cloud cluster. Because of this, Cloudera Manager cannot gracefully stop the processes and kills them after a set timeout, which could lead to metadata corruption.
This issue is fixed now.
- OPSAPS-68506: Knox CSD changes for readiness check
- A readiness endpoint was added to determine whether Knox is ready to receive traffic. Cloudera Manager checks the state of Knox after startup to reduce downtime during rolling restarts.
- OPSAPS-68424 Impala: CM Agent unable to extract logs to TP export directory
- Impala queries were not displayed in the Cloudera Observability and Workload XM web user interfaces. This was due to an internal error that was stopping Cloudera Manager from pushing the Impala profile to the Telemetry Publisher logs directory.
This issue is now fixed and the Telemetry Publisher's log extraction has been re-enabled.
- OPSAPS-68697: Error while generating email template resulting in an inability to trigger mail notification
- Cloudera Observability and Workload XM were unable to trigger an email notification when an Impala query matched the Auto Action's alert threshold value.
This problem occurred when both of the following conditions were met:
- The Auto Action is triggered for an Impala Scope.
- The length of the Impala query on which the action event is triggered is less than 36 characters.
This issue is now fixed.
- OPSAPS-69480: Hardcode MR add-opens-as-default config
- When Cloudera Manager is upgraded to 7.11.3, if the CDP cluster is not 7.1.9, then the YARN Container Usage Aggregation job fails.
- OPSAPS-68798 Auto Actions not using proxy while connecting to DBUS
-
The Cloudera Observability and Workload XM Auto Actions feature was not recognizing the proxy server credentials, even when they were correct and the proxy server was enabled in Telemetry Publisher.
This issue is now fixed.
- OPSAPS-68629: HDFS HTTPFS Gateway is not able to start with custom krb5.conf location set in Cloudera Manager.
- On a cluster with a custom krb5.conf file location configured in Cloudera Manager, the HDFS HTTPFS role is not able to start because the custom Kerberos configuration file setting is not properly propagated to the service, and therefore it fails with a Kerberos-related exception:
in thread "main" java.io.IOException: Unable to initialize WebAppContext at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1240) at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start(HttpFSServerWebServer.java:131) at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main(HttpFSServerWebServer.java:162) Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:71) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:329) at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:380) at org.apache.hadoop.lib.service.hadoop.FileSystemAccessService.init(FileSystemAccessService.java:166) at org.apache.hadoop.lib.server.BaseService.init(BaseService.java:71) at org.apache.hadoop.lib.server.Server.initServices(Server.java:581) at org.apache.hadoop.lib.server.Server.init(Server.java:377) at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100) at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158) at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073) at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572) at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002) at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:765) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379) at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916) at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) at org.eclipse.jetty.server.Server.start(Server.java:423) at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110) at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97) at org.eclipse.jetty.server.Server.doStart(Server.java:387) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1218) ... 2 more Caused by: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm at java.security.jgss/javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:174) at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:108) at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:69) ...
The repositories for Cloudera Manager 7.11.3-CHF1 are listed in the following table:
Repository Type | Repository Location |
---|---|
RHEL 9 Compatible | Repository: Repository File: |
RHEL 8 Compatible | Repository: Repository File: |
RHEL 7 Compatible | Repository: Repository File: |
SLES 15 | Repository: Repository File: |
SLES 12 | Repository: Repository File: |
Ubuntu 20 | Repository: Repository File: |
IBM PowerPC RHEL 8 | |
IBM PowerPC RHEL 9 | |