Known Issues in Cloudera Manager 7.11.3

Known issues in Cloudera Manager 7.11.3.

OPSAPS-60169: Cloudera Manager statestore connectivity health check fails if Kerberos is enabled for Impala Web UI
If Kerberos is enabled for Impala Web UI, the Cloudera Manager statestore connectivity health check fails and the Service Monitor displays the following exception:
WARN com.cloudera.cmon.firehose.polling.CdhTask: (14 skipped) Exception in doWork for task: impala_IMPALA_SERVICE_STATE_FETCHER

In Cloudera Manager, go to Clusters > Impala > Configuration, search for the "Enable Kerberos Authentication for HTTP Web-Consoles" property and disable this property.

For more information, see the Cloudera Knowledge Base article.

OPSAPS-68689: Unable to emit the LDAP Bind password in core-site.xml for client configurations

If the CDP cluster has LDAP group to OS group mapping enabled, then applications running on Spark or YARN fail to authenticate to the LDAP server when they try to use the LDAP bind account during the LDAP group search.

This happens because the LDAP bind password is not passed to the /etc/hadoop/conf/core-site.xml file. This is intentional behavior to prevent leaking the LDAP bind password as clear text.

Set the LDAP Bind password through the HDFS client configuration safety valve.
  1. On the Cloudera Manager UI, navigate to the HDFS service by clicking the HDFS service under the Cluster.
  2. Click the Configuration tab. Search for the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml configuration parameter.

  3. Add an entry with the following values (see the example after these steps):
    • Name = hadoop.security.group.mapping.ldap.bind.password
    • Value = (Enter the LDAP bind password here)
    • Description = Password for LDAP bind account
  4. Then click the Save Changes button to save the safety valve entry.

  5. Follow the instructions in Manually Redeploying Client Configuration Files to deploy the client configuration files to the cluster.
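If your Cloudera Manager version offers the View as XML option for this safety valve, the entry described in step 3 corresponds to an XML property element similar to the following sketch; the password value shown is only a placeholder for your actual LDAP bind password:

<property>
  <name>hadoop.security.group.mapping.ldap.bind.password</name>
  <value>REPLACE_WITH_LDAP_BIND_PASSWORD</value>
  <description>Password for LDAP bind account</description>
</property>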

OPSAPS-69806: Collection of YARN diagnostic bundle will fail

With any combination of Cloudera Manager 7.11.3 up to Cloudera Manager 7.11.3 CHF7 and CDP 7.1.7 through CDP 7.1.8, collection of the YARN diagnostic bundle fails and no data is transmitted.

Upgrade to CDP 7.1.9, or downgrade to Cloudera Manager 7.7.1.

OPSAPS-70207: Cloudera Manager Agents sending the Impala profile data with an incorrect header
The Cloudera Manager Agent might send an incorrect HTTP header to Telemetry Publisher, which causes an incorrect Content-Type error message and results in a connection error.

Impala profile data is not available on Observatory.

Telemetry Publisher logs show:

DEBUG org.apache.cxf.jaxrs.utils.JAXRSUtils: No method match, method name : addProfileEvent, request path : /cluster/impala2, method @Path : /{clusterName}/{serviceName}, HTTP Method : POST, method HTTP Method : POST, ContentType : application/x-www-form-urlencoded, method @Consumes : application/json,, Accept : */*,, method @Produces : application/json,.

Cloudera Manager agent logs on Impalad hosts report:

Error occurred when sending entry to server: HTTP Error 415: Unsupported Media Type, url: http://<telemetry_publisher_host>:<port>

None
OPSAPS-69342: Access issues identified in MariaDB 10.6 were causing discrepancies in High Availability (HA) mode

MariaDB 10.6, by default, includes the property require_secure_transport=ON in the configuration file (/etc/my.cnf), which is absent in MariaDB 10.4. This setting prohibits non-TLS connections, leading to access issues. This problem is observed in High Availability (HA) mode, where certain operations may not be using the same connection.

To resolve the issue temporarily, you can either comment out or disable the line require_secure_transport in the configuration file located at /etc/my.cnf.
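For illustration, a minimal sketch of the change in /etc/my.cnf; the section name and the surrounding options in your file may differ:

[mysqld]
# require_secure_transport=ON

Alternatively, set the value to OFF instead of commenting the line out.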

OPSAPS-68845: Cloudera Manager Server fails to start after the Cloudera Manager upgrade
From Cloudera Manager 7.11.3 up to Cloudera Manager 7.11.3 CHF7, the Cloudera Manager Server fails to start after the Cloudera Manager upgrade because, in some scenarios, Navigator user roles are handled improperly during the upgrade.
None
OPSAPS-68577: Invalid Iceberg license validator message

During the Cloudera Manager 7.11.3 (Cloudera Runtime 7.1.9) upgrade, you might see the following warning message:

"details on this warning: Validation Suppress Configuration Validator: Iceberg License Validator Current Message Failed parameter validation. Suppress For CORE_SETTINGS-1"

You can safely suppress the Configuration validator message.

OPSAPS-69357: Python incompatibility issues when Cloudera Manager (Python 3.x compatible) manages a cluster with Cloudera Runtime 7.1.7 (Python 2 compatible)

If Cloudera Manager is compatible with Python 3, then scripts that are packaged with this Cloudera Manager are also ported to Python 3 syntax.

Therefore, using Cloudera Manager 7.11.3 (or any other Cloudera Manager version ported to Python 3) to manage a cluster with Cloudera Runtime 7.1.7 (Python 2 compatible) causes Python incompatibility issues, because the process assumes a Python 2 environment while the scripts packaged with this Cloudera Manager are ported to Python 3 syntax.

None
OPSAPS-68452: Azul Open JDK 8 and 11 are not supported with Cloudera Manager

Azul Open JDK 8 and 11 are not supported with Cloudera Manager. To use Azul Open JDK 8 or 11 for Cloudera Manager RPM/DEBs, you must manually create a symlink between the Zulu JDK installation path and the default JDK path.

After installing Azul Open JDK 8 or 11, you must run the following commands on all the hosts in the cluster:
Azul Open JDK 8
RHEL or SLES
# sudo ln -s /usr/lib/jvm/java-8-zulu-openjdk-jdk /usr/lib/jvm/java-8-openjdk
Ubuntu or Debian
# sudo ln -s /usr/lib/jvm/zulu-8-amd64 /usr/lib/jvm/java-8-openjdk
Azul Open JDK 11
For DEBs only
# sudo ln -s /usr/lib/jvm/zulu-11-amd64 /usr/lib/jvm/java-11
OPSAPS-69255: Using auth-to-local rules to isolate cluster users is not working consistently

If your cluster defines auth_to_local rules as described in Using auth-to-local rules to isolate cluster users, you might experience undesired configuration changes after upgrading Cloudera Manager. Many Proxyuser settings are removed from core-site.xml and the user 'nobody' is added.

To revert Cloudera Manager to the previous behavior for the Proxyuser configurations, edit /etc/default/cloudera-scm-server and add the following JVM argument to CMF_JAVA_OPTS:

-Dcom.cloudera.cmf.service.config.HadoopUserStrategy=LEGACY
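For example, after the edit, /etc/default/cloudera-scm-server might contain a line similar to the following sketch; the -Xmx4G value only stands in for whatever JVM options are already present in your file and should be kept as they are:

export CMF_JAVA_OPTS="-Xmx4G -Dcom.cloudera.cmf.service.config.HadoopUserStrategy=LEGACY"

Restart the Cloudera Manager Server after saving the file so that the new JVM argument takes effect.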
OPSAPS-59723: Extra step required when using Cloudera Manager Trial installer on SLES 15 SP4
When using cloudera-manager-installer.bin to install a trial version of Cloudera Manager, the installation will fail.
Before running cloudera-manager-installer.bin, run the following commands:
SUSEConnect --list-extensions
SUSEConnect -p sle-module-legacy/15.4/x86_64
zypper install libncurses5
OPSAPS-66579: The GUI version of the Cloudera Manager self-installer is not available on the RHEL 9 operating system

While installing Cloudera Manager (Cloudera Manager Server, Cloudera Manager Agent, and the database), the GUI version of the Cloudera Manager self-installer is not available on the RHEL 9 operating system.

This issue occurs because the libncurses5 library is not available on the RHEL 9 operating system. You can provide input using the CLI prompts instead of the GUI prompts during the installation process.

Use CLI prompts instead of GUI prompts.

OPSAPS-68395: Cloudera Management Service roles might fail to start

While starting the Cloudera Manager Server (during a fresh install, an upgrade, or when rolling back an upgrade), the status of one or more roles of the Cloudera Management Service is Stopped, and later these roles might fail to start.

This failure might happen if you attempt to start the affected roles within the first few minutes after starting the Cloudera Manager Server or a cluster. In that case, the status of the affected roles shows the Down state, the corresponding functionality is lost, and Cloudera Manager might display errors. This failure is caused by temporary resource contention and a subsequent timeout.

After fifteen minutes, restart the affected roles, or the Cloudera Management Service as a whole. Alternatively, go to Clusters > Cloudera Management Service > Configuration and increase the values of Descriptor Fetch Max Attempts and Starting Interval for Descriptor Fetch Attempts. Cloudera recommends setting Descriptor Fetch Max Attempts to "30" and Starting Interval for Descriptor Fetch Attempts to "30" seconds.

OPSAPS-60726: Newly saved parcel URL is not showing up on the parcels page in Cloudera Manager High Availability (HA) cluster

Newly saved parcels might not show up on the parcels page in Cloudera Manager HA mode.

You must restart the active and passive Cloudera Manager nodes.

OPSAPS-68178: Inconsistent Java Keystore Type while performing upgrade from CDH 6 to CDP Private Cloud Base 7.1.9

While performing an upgrade from CDH 6 to CDP Private Cloud Base 7.1.9, the Java Keystore Type configured in the Cloudera Manager UI is jks. However, the physical Truststore files on the upgraded cluster are in pkcs12 format.

If the value of Java Keystore Type on Cloudera Manager UI is different from the actual Java Keystore Type in the physical Truststore files on the upgraded cluster, then perform the following steps:
  1. Stop the Cloudera Manager Server.
    sudo systemctl stop cloudera-scm-server
  2. Connect to the database.
  3. Verify the Java Keystore type that is set in the database by running the following command:
    select * from CONFIGS WHERE ATTR='keystore_type';
  4. Note the value of CONFIG_ID in the result of the previous SELECT command.
  5. Update that row by running the following command, replacing config_id with the CONFIG_ID value from your cluster (see the example after these steps):
    UPDATE CONFIGS SET VALUE ='jks' WHERE CONFIG_ID=config_id;
  6. Start the Cloudera Manager Server.
    sudo systemctl start cloudera-scm-server
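A minimal illustration of steps 3 through 5, assuming the SELECT returns a row with a hypothetical CONFIG_ID of 1234567:

select * from CONFIGS WHERE ATTR='keystore_type';
-- note the CONFIG_ID column of the returned row, for example 1234567
UPDATE CONFIGS SET VALUE ='jks' WHERE CONFIG_ID=1234567;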
OPSAPS-67929: While upgrading from CDP 7.1.7 SP2 to CDP 7.1.9 version and if there is an upgrade failure in the middle of the process, the Resume option is not available.
You must reach out to Cloudera Support.
OPSAPS-68325: Cloudera Manager fails to install with MariaDB 10.6.15, 10.5.22, and 10.4.31
Cloudera Manager Server fails to execute the DDL commands that involve disabling the FOREIGN_KEY_CHECKS when you use the following databases:
  • MariaDB 10.6.15
  • MariaDB 10.5.22
  • MariaDB 10.4.31
None
OPSAPS-68240: After restarting Cloudera Manager Server and MySQL, Cloudera Manager server fails to start
When using MySQL 8, Cloudera Manager fails to start and the logs display the following error message: java.sql.SQLNonTransientConnectionException: Public Key Retrieval is not allowed

To fix this issue, perform the workaround steps as mentioned in the KB article.

If you need any guidance during this process, contact Cloudera support.

OPSAPS-68484: Hive queries fail with 'get_partitions_ps_with_auth_req' error
Hive queries fail with the following error due to a mismatch between HiveServer2 and Hive Metastore during a zero-downtime upgrade (ZDU).
Error: Invalid method name: 'get_partitions_ps_with_auth_req'
The issue is addressed by adjusting the upgrade order, ensuring that Hive Metastore is upgraded before HiveServer2.
DMX-3167
When multiple Iceberg replication policies replicate the same database simultaneously, one of the replication policies might show a "Database already exists" error.
Run the replication policy again; the next run of the replication policy succeeds.
DMX-3193
If the source and target clusters have the same nameservice environment and a table is dropped on the source cluster during the incremental replication run of an Iceberg replication policy, the replication policy fails with the "Metadata file not found for table" error.
Copy the metadata file from the target cluster to the source cluster and run the incremental replication again.
OPSAPS-68143
When you replicate empty OBS buckets using an Ozone replication policy, the policy fails and a FileNotFoundException appears during the "Run File Listing on Peer cluster" step.
DMX-3169
The YARN jobs (DistCp) for Iceberg replication policies cannot use the hdfs username if the replication policies use secure source and target clusters.
Provide a proxy user to submit the DistCp jobs.

To configure the proxy user, set the Advanced command line options for distcp used in Iceberg Replication property to -proxy [***user_name***] on the Cloudera Manager > Clusters > [***Iceberg Replication Service***] > Configuration tab, as shown in the example below.
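For example, with a hypothetical proxy user named bdr_proxy, the value of the property would be:

-proxy bdr_proxy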

DMX-3174
Iceberg replication policies fail if the clusters with HDFS HA have different nameservice names and are Auto-TLS enabled on unified realms.
Add the following property for the Advanced configuration snippet for hdfs-site.xml (hdfs_client_config_safety_valve) on the Cloudera Manager > Clusters > [***HDFS service***] > Configuration tab:

mapreduce.job.hdfs-servers.token-renewal.exclude = [***source name service***], [***target name service***]
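In the safety valve's XML view, the entry might look like the following sketch; ns1 and ns2 are placeholders for your source and target nameservice names:

<property>
  <name>mapreduce.job.hdfs-servers.token-renewal.exclude</name>
  <value>ns1,ns2</value>
</property>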

For more information, see Kerberos setup guidelines.

CDPD-59437
An Iceberg replication policy might not find a table in the database during the replication process if another Iceberg replication policy that is running simultaneously (replicating a different set of tables from the same database) has dropped the table.
OPSAPS-68221: Cloudera Manager Agent installation might fail while upgrading to Cloudera Manager 7.11.3 without installing Python 3 on the Cloudera Manager Server host

If you do not install Python 3 on the Cloudera Manager Server host before upgrading to Cloudera Manager 7.11.3, the Cloudera Manager Agent installation might fail. This state is not recoverable by reinstalling the Cloudera Manager Agent alone.

  1. Uninstall the Cloudera Manager Agent package manually.
  2. Install Python 3 on the host before upgrading to Cloudera Manager 7.11.3. See Installing Python 3.
  3. Reinstall the Cloudera Manager Agent.
OPSAPS-68426: Atlas service dependencies are not set during CDH 6 to CDP 7.x.x upgrade if Navigator role instances are not configured under the Cloudera Management Service.

Navigator support has been discontinued in Cloudera Manager 7.11.3. Consequently, if you are using CDH 6 and have Navigator installed, it is necessary to remove the Navigator service before proceeding with the upgrade to Cloudera Manager version 7.11.3 or any higher version. Due to this change, when upgrading the Runtime version from CDH 6 to CDP 7.x.x, it is important to note that Atlas, which replaces Navigator in CDP 7.x.x, might not automatically be set as a service dependency for certain components. The components that could potentially be impacted include: HBase, Hive, Hive on Tez, Hue, Impala, Oozie, Spark, and Sqoop.

Once you have completed the upgrade to CDP 7.x.x and have installed Atlas, it is advised to review and confirm the configuration settings for these services. Specifically, navigate to the respective configuration pages for each service. If you observe that the Atlas dependency is not enabled, you must enable it manually in order to integrate Atlas with that particular service. After adjusting the services' configurations, Cloudera Manager prompts you to restart the services to apply the changes. Note that deploying client configurations might also be necessary as part of this process.

OPSAPS-68340: Zeppelin paragraph execution fails with the User not allowed to impersonate error.

Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the livy_admin_users configuration when Livy is run for the first time. If you add Zeppelin or Knox services later to the existing cluster and do not manually update the service user, the User not allowed to impersonate error is displayed.

If you add Zeppelin or Knox services later to the existing cluster, you must manually add the respective service user to the livy_admin_users configuration in the Livy configuration page.

OPSAPS-68500: The cloudera-manager-installer.bin fails to reach the Ubuntu 20 repository on the Archive URL due to redirections.

Agent installation does not work on the Ubuntu 20 platform when the self-installer method (using the installer.bin file) is employed to install Cloudera Manager. The Cloudera Manager Agent installation step fails with an error message saying "The repository 'https://archive.cloudera.com/p/cm7/7.11.3/ubuntu2004/apt focal-cm7 InRelease' is not signed."

While adding a cluster in Cloudera Manager and during the subsequent agent installation, choose the "Custom Repository" option and manually enter the correct repository URL: https://[credentials]@archive.cloudera.com/p/cm7/7.11.3.0

DMX-3003
For Iceberg replication policies, the progress.json file is updated with the progress of the DistCp job run whenever the number of copied files equals the incremental count (default is 50). The file report does not get synchronized as expected, and the reported numbers are inconsistent.
Click the required Iceberg replication policy on the Cloudera Manager > Replication > Replication Policies page to see the correct number of files copied for each incremental job run.
DMX-2977, DMX-2978
You cannot view the current status of an ongoing export task (exportCLI) or sync task (syncCLI) for an Iceberg replication policy.
Click the required Iceberg replication policy on the Cloudera Manager > Replication > Replication Policies page to view the final results of the export task and sync task for the replication policy job run.
OPSAPS-69480: Hardcode MR add-opens-as-default config
When Cloudera Manager is upgraded to 7.11.3 and the CDP cluster version is not 7.1.9, the YARN Container Usage Aggregation job fails.
Add the following property to the MapReduce Client Advanced Configuration Snippet (Safety Valve) for mapred-site.xml (see the sketch after the property values).
NAME: mapreduce.jvm.add-opens-as-default
VALUE: false
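In the safety valve's XML view, this corresponds to an entry similar to the following sketch:

<property>
  <name>mapreduce.jvm.add-opens-as-default</name>
  <value>false</value>
</property>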
OPSAPS-68629: HDFS HTTPFS Gateway is not able to start with custom krb5.conf location set in Cloudera Manager.
On a cluster with a custom krb5.conf file location configured in Cloudera Manager, HDFS HTTPFS role is not able to start because it does not have the custom Kerberos configuration file setting properly propagated to the service, and therefore it fails with a Kerberos related exception:

in thread "main" java.io.IOException: Unable to initialize WebAppContext
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1240)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.start(HttpFSServerWebServer.java:131)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebServer.main(HttpFSServerWebServer.java:162)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm
  at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:71)
  at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:329)
  at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:380)
  at org.apache.hadoop.lib.service.hadoop.FileSystemAccessService.init(FileSystemAccessService.java:166)
  at org.apache.hadoop.lib.server.BaseService.init(BaseService.java:71)
  at org.apache.hadoop.lib.server.Server.initServices(Server.java:581)
  at org.apache.hadoop.lib.server.Server.init(Server.java:377)
  at org.apache.hadoop.fs.http.server.HttpFSServerWebApp.init(HttpFSServerWebApp.java:100)
  at org.apache.hadoop.lib.servlet.ServerWebApp.contextInitialized(ServerWebApp.java:158)
  at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1073)
  at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
  at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:1002)
  at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:765)
  at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
  at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
  at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
  at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:916)
  at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.server.Server.start(Server.java:423)
  at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
  at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
  at org.eclipse.jetty.server.Server.doStart(Server.java:387)
  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1218)
  ... 2 more
Caused by: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm
  at java.security.jgss/javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:174)
  at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:108)
  at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:69)
  ...
  1. Log in to Cloudera Manager.
  2. Select the HDFS service.
  3. Click the Configuration tab.
  4. Search for the HttpFS Environment Advanced Configuration Snippet (Safety Valve).
  5. Add to or extend the HADOOP_OPTS environment variable with the following value (see the sketch after these steps): -Djava.security.krb5.conf=<the custom krb5.conf location>
  6. Click Save Changes.
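A sketch of the entry added to the HttpFS Environment Advanced Configuration Snippet (Safety Valve), assuming a hypothetical custom location of /etc/krb5-custom/krb5.conf; if HADOOP_OPTS already carries other options in your snippet, append the new option to them:

HADOOP_OPTS=-Djava.security.krb5.conf=/etc/krb5-custom/krb5.conf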
OPSAPS-69897: NPE in Ozone replication from CM 7.7.1 to CM 7.11.3
When you use source Cloudera Manager 7.7.1 and target Cloudera Manager 7.11.3 for Ozone replication policies, the policies fail with Failure during PreOzoneCopyListingCheck execution: null error. This is because the target Cloudera Manager 7.11.3 does not retrieve the required source bucket information for validation from the source Cloudera Manager 7.7.1 during the PreCopyListingCheck command phase. You come across this error when you use source Cloudera Manager versions lower than 7.10.1 and target Cloudera Manager versions higher than or equal to 7.10.1 in an Ozone replication policy.
Upgrade the source Cloudera Manager to version 7.11.3 or higher.
OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in Cloudera Manager chart builder or queried using the Cloudera Manager API.
Contact Cloudera support for a workaround.
OPSAPS-69406: Cannot edit existing HDFS and HBase snapshot policy configuration
The Edit Configuration modal window does not appear when you click Actions > Edit Configuration on the Cloudera Manager > Replication > Snapshot Policies page for existing HDFS or HBase snapshot policies.
None.
OPSAPS-72298: Impala metadata replication is mandatory and UDF functions parameters are not mapped to the destination
Impala metadata replication is enabled by default, but the legacy Impala C/C++ UDFs (user-defined functions) are not replicated as expected during the Hive external table replication policy run.
Edit the location of the UDF functions after the replication run is complete. To accomplish this task, you can edit the “path of the UDF function” to map it to the new cluster address, or you can use a script.