Known Issues in Cloudera Manager 7.11.3
The following are the known issues in Cloudera Manager 7.11.3.
- OPSAPS-59723: Extra step required when using Cloudera Manager Trial installer on SLES 15 SP4
- When using cloudera-manager-installer.bin to install a trial version of Cloudera Manager, the installation fails.
- OPSAPS-66579: The GUI version of the Cloudera Manager self-installer is not available on the RHEL 9 operating system
-
While installing Cloudera Manager (Cloudera Manager Server, Cloudera Manager Agent, and the database), the GUI version of the Cloudera Manager self-installer is not available on the RHEL 9 operating system.
This issue is due to the non-availability of the libncurses5 library on the RHEL 9 operating system. Provide input using the CLI prompts instead of the GUI prompts during the installation process.
- OPSAPS-68395: Cloudera Management Service roles might fail to start
-
While starting Cloudera Manager Server (during a fresh install, an upgrade, or when rolling back an upgrade), one or more roles of the Cloudera Management Service are in the Stopped state, and these roles might later fail to start.
This failure might happen if you attempt to start the affected roles within the first few minutes after starting Cloudera Manager Server or a cluster. In that case, the status of the affected roles shows Down, the corresponding functionality is lost, and Cloudera Manager might display errors. The failure is caused by temporary resource contention and a subsequent timeout.
- OPSAPS-60726: Newly saved parcel URL is not showing up on the parcels page in Cloudera Manager High Availability (HA) cluster
-
Newly saved parcels might not show up on the parcels page in Cloudera Manager HA mode.
- OPSAPS-68178: Inconsistent Java Keystore Type while performing upgrade from CDH 6 to CDP Private Cloud Base 7.1.9
-
While performing an upgrade from CDH 6 to CDP Private Cloud Base 7.1.9, the configured Java Keystore Type shown on the Cloudera Manager UI is jks. However, the physical truststore files on the upgraded cluster are in pkcs12 format.
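If you need to confirm which format a truststore file is actually in, the standard java.security.KeyStore API can probe it. The following is an illustrative sketch only, not a documented verification step; the file path and password are hypothetical placeholders.

```java
import java.io.FileInputStream;
import java.security.KeyStore;

public class TruststoreFormatProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical truststore path and password; substitute your own values.
        String path = "/etc/example/cm-truststore.jks";
        char[] password = "changeit".toCharArray();

        // Attempt to load the file as each keystore type and report which succeed.
        // Note: recent JDKs can load PKCS12 files through the JKS type for
        // compatibility, so a PKCS12 file may load under both types.
        for (String type : new String[] {"JKS", "PKCS12"}) {
            KeyStore ks = KeyStore.getInstance(type);
            try (FileInputStream in = new FileInputStream(path)) {
                ks.load(in, password);
                System.out.println("Loads as " + type + " (" + ks.size() + " entries)");
            } catch (Exception e) {
                System.out.println("Does not load as " + type + ": " + e.getMessage());
            }
        }
    }
}
```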
- OPSAPS-67929: While upgrading from CDP 7.1.7 SP2 to CDP 7.1.9, if the upgrade fails in the middle of the process, the Resume option is not available.
- You must contact Cloudera Support.
- OPSAPS-68325: Cloudera Manager fails to install with MariaDB 10.6.15, 10.5.22, and 10.4.31
-
Cloudera Manager Server fails to execute the DDL commands that involve disabling FOREIGN_KEY_CHECKS when you use the following databases (a sketch of the affected pattern follows the list):
- MariaDB 10.6.15
- MariaDB 10.5.22
- MariaDB 10.4.31
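As an illustration of the failing pattern, the following standalone JDBC sketch issues DDL with FOREIGN_KEY_CHECKS disabled, similar in shape to what Cloudera Manager Server executes. It assumes a reachable MariaDB instance and the MariaDB Connector/J driver on the classpath; the URL, credentials, and table names are hypothetical, and the exact statements Cloudera Manager runs may differ.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ForeignKeyChecksSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute your own host, database,
        // and credentials. Requires the MariaDB Connector/J driver.
        String url = "jdbc:mariadb://localhost:3306/scm";
        try (Connection conn = DriverManager.getConnection(url, "scm", "scm_password");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS example_parent (id INT PRIMARY KEY)");
            stmt.execute("CREATE TABLE IF NOT EXISTS example_child (id INT PRIMARY KEY, parent_id INT)");

            // The general pattern: disable foreign key checks, run DDL, re-enable.
            // On the affected MariaDB releases, DDL issued under this pattern
            // can fail when executed by Cloudera Manager Server.
            stmt.execute("SET FOREIGN_KEY_CHECKS = 0");
            stmt.execute("ALTER TABLE example_child ADD CONSTRAINT fk_example_parent "
                    + "FOREIGN KEY (parent_id) REFERENCES example_parent (id)");
            stmt.execute("SET FOREIGN_KEY_CHECKS = 1");
            System.out.println("DDL with FOREIGN_KEY_CHECKS disabled completed.");
        }
    }
}
```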
- OPSAPS-68240: After restarting Cloudera Manager Server and MySQL, Cloudera Manager server fails to start
-
When using MySQL 8, Cloudera Manager fails to start and logs the following error: java.sql.SQLNonTransientConnectionException: Public Key Retrieval is not allowed
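For context, this error comes from MySQL Connector/J: with MySQL 8's default caching_sha2_password authentication over a non-SSL connection, the driver refuses to fetch the server's RSA public key unless explicitly allowed. The following standalone sketch demonstrates the connector behavior; the host, database, and credentials are hypothetical, and this is not a statement about how Cloudera Manager constructs its own JDBC URL.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class PublicKeyRetrievalSketch {
    public static void main(String[] args) throws Exception {
        // With a caching_sha2_password account and SSL disabled, this URL makes
        // Connector/J fail with "Public Key Retrieval is not allowed".
        String failingUrl = "jdbc:mysql://localhost:3306/scm?useSSL=false";

        // Adding allowPublicKeyRetrieval=true permits the driver to fetch the
        // server's RSA public key, avoiding the connector-level error.
        String workingUrl = failingUrl + "&allowPublicKeyRetrieval=true";

        try (Connection conn = DriverManager.getConnection(workingUrl, "scm", "scm_password")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}
```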
- DMX-3167
- When multiple Iceberg replication policies replicate the same database simultaneously, one of the replication policies might fail with a "Database already exists" error.
- DMX-3193
- If the source and target clusters have the same nameservice environment and a table is dropped on the source cluster during the incremental replication run of an Iceberg replication policy, the replication policy fails with the "Metadata file not found for table" error.
- OPSAPS-68143
- When you replicate empty OBS buckets using an Ozone replication policy, the policy fails and a FileNotFoundException appears during the "Run File Listing on Peer cluster" step.
- DMX-3169
- The YARN jobs (DistCp) for Iceberg replication policies cannot use the hdfs username if the replication policies use secure source and target clusters.
- DMX-3174
- Iceberg replication policies fail if the clusters with HDFS HA have different nameservice names and are Auto-TLS enabled on unified realms.
- CDPD-59437
- An Iceberg replication policy might not find a table in the database during the replication process if another Iceberg replication policy that is running simultaneously (replicating a different set of tables from the same database) has dropped the table.
- OPSAPS-68221: Cloudera Manager Agent installation might fail while upgrading to Cloudera Manager 7.11.3 without installing Python 3 on the Cloudera Manager Server host
-
If you do not install Python 3 on the Cloudera Manager Server host before upgrading to Cloudera Manager 7.11.3, the Cloudera Manager Agent installation might fail. This state is not recoverable by reinstalling the Cloudera Manager Agent alone.
- OPSAPS-68426: Atlas service dependencies are not set during CDH 6 to CDP 7.x.x upgrade if Navigator role instances are not configured under the Cloudera Management Service.
-
Navigator support has been discontinued in Cloudera Manager 7.11.3. Consequently, if you are using CDH 6 and have Navigator installed, you must remove the Navigator service before upgrading to Cloudera Manager 7.11.3 or any higher version. Due to this change, when upgrading the Runtime version from CDH 6 to CDP 7.x.x, Atlas, which replaces Navigator in CDP 7.x.x, might not automatically be set as a service dependency for certain components. The potentially affected components are: HBase, Hive, Hive on Tez, Hue, Impala, Oozie, Spark, and Sqoop.
- OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.
-
Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add the Zeppelin or Knox service to the existing cluster later and do not manually update the service user, the "User not allowed to impersonate" error is displayed.
- OPSAPS-68500: The cloudera-manager-installer.bin fails to reach the Ubuntu 20 repository on the Archive URL due to redirections.
-
Cloudera Manager Agent installation on the Ubuntu 20 platform does not work when Cloudera Manager is installed with the self-installer method (the installer.bin file). The Cloudera Manager Agent installation step fails with the error message: "The repository 'https://archive.cloudera.com/p/cm7/7.11.3/ubuntu2004/apt focal-cm7 InRelease' is not signed."
- DMX-3003
- For Iceberg replication policies, the progress.json file is updated as the DistCp job runs, whenever the number of copied files reaches the incremental count (50 by default). The file report is not synchronized as expected, and the reported numbers are inconsistent.
- DMX-2977, DMX-2978
- You cannot view the current status of an ongoing export task (exportCLI) or sync task (syncCLI) for an Iceberg replication policy.
- OPSAPS-68629: HDFS HTTPFS Gateway fails to start with a custom krb5.conf location set in Cloudera Manager.
- On a cluster with a custom krb5.conf file location configured in Cloudera Manager, the HDFS HTTPFS role fails to start because the custom Kerberos configuration file setting is not properly propagated to the service, and it fails with a Kerberos-related exception:
in thread "main" java.io.IOException: Unable to initialize WebAppContext at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1240) at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start(HttpFSServerWebServer.java:131) at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main(HttpFSServerWebServer.java:162) Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:71) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:329) at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:380) at org.apache.hadoop.lib.service.hadoop.FileSystemAccessService.init(FileSystemAccessService.java:166) at org.apache.hadoop.lib.server.BaseService.init(BaseService.java:71) at org.apache.hadoop.lib.server.Server.initServices(Server.java:581) at org.apache.hadoop.lib.server.Server.init(Server.java:377) at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100) at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158) at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073) at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572) at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002) at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:765) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379) at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916) at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117) at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) at org.eclipse.jetty.server.Server.start(Server.java:423) at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110) at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97) at org.eclipse.jetty.server.Server.doStart(Server.java:387) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1218) ... 2 more Caused by: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm at java.security.jgss/javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:174) at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:108) at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:69) ...