Known Issues

Current known issues and limitations in Cloudera Observability On-Premises.

Impala does not support super user configuration for the observability user on Apache Ranger-enabled clusters
The observability user requires full privileges on the Observability cluster. Required services such as Kafka, HDFS, HBase, and Hive support super user setup by setting the ranger.plugin.[service].super.users property to observability. However, Impala does not support this super user setup.
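For the services that do support it, the setup amounts to setting the property named above in each service's Ranger plugin configuration. The following is a minimal sketch, assuming the HDFS, Hive, HBase, and Kafka plugins and that observability is the exact short name of the user; the safety valve in which each property is set varies by service, so treat these lines as illustrative rather than exact:

ranger.plugin.hdfs.super.users=observability
ranger.plugin.hive.super.users=observability
ranger.plugin.hbase.super.users=observability
ranger.plugin.kafka.super.users=observability

Because Impala ignores this property, grant the access through Apache Ranger instead, as described in the workaround below.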
Manually add a new user named observability in Apache Ranger and assign full privileges.
  • For information, see Adding a user in Cloudera Private Cloud Base documentation.
  • For information on granting user access using Apache Ranger, see Impala Authorization in CDP Private Cloud Data Warehouse Runtime documentation.
/cloudera-sigma-olap directory consumes significant storage space in HDFS
Cloudera Observability On-Premises 3.5.2 enables weekly execution of the purger by default. However, the purger does not currently clean up the /cloudera-sigma-olap directory.
Run the following command to identify which directory consumes the most space:
hdfs dfs -du -h /cloudera-sigma-olap/
0        0        /cloudera-sigma-olap/auto_action_audit
2.0 G    6.0 G    /cloudera-sigma-olap/cluster_events
82.1 G   246.4 G  /cloudera-sigma-olap/cluster_metrics
0        0        /cloudera-sigma-olap/hive_on_mr_table
37.7 M   113.2 M  /cloudera-sigma-olap/hms_partition
13.2 M   39.6 M   /cloudera-sigma-olap/hms_partition_json_schema_v1
93.4 M   280.3 M  /cloudera-sigma-olap/hms_table
7.8 M    23.4 M   /cloudera-sigma-olap/hms_table_json_schema_v1
798.1 K  2.3 M    /cloudera-sigma-olap/impala_query
52.7 M   158.1 M  /cloudera-sigma-olap/pse_root
8.1 K    24.3 K   /cloudera-sigma-olap/schema
0        0        /cloudera-sigma-olap/yarn_app_metrics
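To rank the subdirectories by size instead of scanning the listing by eye, you can sort the raw byte counts reported without the -h flag; a quick sketch, assuming sort and head are available on the client host:

hdfs dfs -du /cloudera-sigma-olap/ | sort -k1 -rn | head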
Consider a scenario where you want to clean up the cluster_metrics directory.
  1. Navigate into the folder structure to review the directories for each date.
    /cloudera-sigma-olap/cluster_metrics/accountid=accountid/clusterid=/dt=2024-07-07
  2. Run the following commands to manually clean up the directory (see the note on HDFS trash after these commands):

    Clean up for a year:

    hdfs dfs -rmr /cloudera-sigma-olap/cluster_metrics/accountid=accountid/clusterid=/dt=2023-*

    Clean up for a month:

    hdfs dfs -rmr /cloudera-sigma-olap/cluster_metrics/accountid=accountid/clusterid=/dt=2023-07-*
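    Note that hdfs dfs -rmr is deprecated in recent HDFS releases, and when HDFS trash is enabled the removed files are moved to the trash directory and continue to consume space until the trash interval expires. A sketch of the equivalent modern form that bypasses the trash, assuming you are certain the 2023 partitions are no longer needed:

    hdfs dfs -rm -r -skipTrash /cloudera-sigma-olap/cluster_metrics/accountid=accountid/clusterid=/dt=2023-*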
Exporting of Impala queries fails from Telemetry Publisher with Cloudera Manager 7.11.3
Telemetry Publisher does not export Impala queries with Cloudera Manager 7.11.3.
Upgrade Cloudera Manager from 7.11.3 to the 7.11.3 cumulative hotfix 6 (CHF6) version to successfully export Impala queries.
Auto Action triggers do not work for the Impala engine
Impala Auto Action triggers do not work on Kerberos-enabled Cloudera Private Cloud Base clusters running Cloudera Manager 7.9.5 or 7.11.3.
Upgrade Cloudera Manager to the 7.11.3 cumulative hotfix 9 (CHF9) version.
Full log link fails to open the log details page on Mozilla Firefox
On the Spark Job Details page, clicking the Full Log link does not open the Log details page. This issue occurs on Mozilla Firefox.
Use Google Chrome or Internet Explorer.
Telemetry Publisher Test Altus Connection fails for Cloudera Manager 7.11.3 cumulative hotfix (CHF6, CHF7, and CHF8) versions
The test connection fails with the following error:
Exception in thread "main" java.lang.NoSuchMethodError: 'com.google.common.collect.ImmutableSet com.google.common.collect.ImmutableSet.copyOf(java.util.Collection)'
	at com.cloudera.cdp.http.HttpCodesRetryChecker.<init>(HttpCodesRetryChecker.java:57)
	at com.cloudera.cdp.client.CdpClientConfigurationBuilder.<init>(CdpClientConfigurationBuilder.java:53)
	at com.cloudera.cdp.client.CdpClientConfigurationBuilder.defaultBuilder(CdpClientConfigurationBuilder.java:400)
	at com.cloudera.cdx.client.TestDatabusConnection.main(TestDatabusConnection.java:55)

This issue only affects the test connection method.

Upgrade Cloudera Manager to the 7.11.3 cumulative hotfix 9 (CHF9) version, and then start Telemetry Publisher.
Workload alerts missing or Kafka connection error in Admin API logs with TLS enabled
The Admin API server starts; however, the following warning is displayed in the role logs:
WARN org.apache.kafka.clients.NetworkClient: [kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Connection to node -1 (example-1.esc12345.root.comops.site/192.0.2.0:24) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
WARN org.apache.kafka.clients.NetworkClient: [kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Bootstrap broker example-1.esc12345.root.comops.site:24 (id: -1 rack: null) disconnected

This issue occurs when the SSL truststore path (JKS) and its password are not configured for the Admin API Server component.

  1. In a supported web browser, log in to Cloudera Manager as a user with full system administrator privileges.
  2. From the Navigation panel, select Clusters and then OBSERVABILITY.
  3. In the Status Summary panel of the OBSERVABILITY page, select Admin API Server.
  4. Click the Configuration tab.
  5. Search for the Admin API Server Advanced Configuration Snippet (Safety Valve) option.
  6. Enter the following keys and values:
    Key: SSL_TRUSTSTORE_PATH
    Value: Enter the TLS/SSL certificate truststore file path. For example: [***path***].truststore.jks

    Key: SSL_TRUSTSTORE_PASSWORD
    Value: Enter the password for the TLS/SSL certificate truststore file.
  7. Click Save Changes to save the configuration.
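To confirm that the file referenced by SSL_TRUSTSTORE_PATH is a readable JKS truststore containing the certificate authority that signed the Kafka brokers' TLS certificates, you can list its contents with keytool; a sketch, with placeholder path and password:

keytool -list -keystore [***path***].truststore.jks -storepass [***password***]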