Known issues

This section lists known issues that you might run into while using the CDP Private Cloud Management Console service.

CLI and SDKs are not supported in the current release

Problem: The CLI and SDKs are not supported in the current CDP Private Cloud release.

Workaround: Use the Management Console user interface instead to perform your tasks.

CDP Private Cloud does not support StartTLS for LDAP

Problem: CDP Private Cloud does not support StartTLS for LDAP.

Workaround: Use LDAPS.
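
For example, when configuring LDAP authentication in the Management Console, specify an ldaps:// URL so the connection is encrypted from the start rather than upgraded with StartTLS. The host name below is only an illustration; 636 is the conventional LDAPS port:

  ldaps://ldap.example.com:636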

Kerberos service does not always handle Cloudera Manager downtime

Problem: The Cloudera Manager Server in the base cluster must be running to generate Kerberos principals for CDP Private Cloud. If there is downtime, you might observe Kerberos-related errors.

Workaround: There is currently no workaround for the issue.

Management Console pod issues are incorrectly identified as monitoring component issues

Problem: Certain Management Console pod issues might be incorrectly classified as monitoring component issues.

Workaround: There is currently no workaround for the issue.

Monitoring platform alerts are not displayed on Grafana

Problem: Alerts about monitoring platform provisioning or upgrade failures are not displayed on Grafana.

Workaround: Use the Management Console dashboard to review the alerts.

The monitoring platform does not recover from an interrupted Helm installation

Problem: If the monitoring-pvcservice pod is stopped during the Helm installation of the monitoring platform, the recovery process upon restart can fail because of an inconsistent Helm state.

Workaround: Manually remove the monitoring platform namespace specified in the alert related to the failure. Retrying the provisioning process should recreate the namespace and the monitoring components automatically.
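
For example, a minimal cleanup with kubectl might look like the following; the placeholder stands for the namespace named in your failure alert:

  kubectl get namespaces                                     # confirm the namespace from the alert exists
  kubectl delete namespace <monitoring-namespace-from-alert>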

The monitoring dashboard continues to display failed alerts even after removing a faulty environment

Problem: When you register an environment with an incorrect kubeconfig file, the dashboard continues to display failed provisioning alerts, even after removal of the faulty environment.

Workaround: Restart the monitoring-pvcservice pod within the Management Console namespace to stop the invalid alert.
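
A minimal sketch of the restart, assuming you substitute your own Management Console namespace for the placeholder; after deletion, the pod is typically recreated automatically by its controller:

  kubectl get pods -n <management-console-namespace> | grep monitoring-pvcservice
  kubectl delete pod <monitoring-pvcservice-pod> -n <management-console-namespace>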

The monitoring dashboard does not display the environment if the provisioning of its monitoring platform fails

Problem: If the provisioning of the monitoring platform of an environment fails, the monitoring dashboard does not display the environment. An alert message about the failed provisioning is displayed.

Workaround: There is currently no workaround for the issue.

The upgrade process of the monitoring platform is not retried after two unsuccessful attempts

Problem: The upgrade process of the monitoring platform is not retried after two unsuccessful attempts.

Workaround:
  1. Ensure that you fix the root cause of the upgrade failure.
  2. Delete the monitoring-platform-upgrade config map from the Management Console workspace.
  3. Restart the monitoring-pvcservice pod within the Management Console namespace (see the example commands after these steps).
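
For example, steps 2 and 3 might look like the following, assuming you substitute your own Management Console namespace for the placeholder:

  kubectl delete configmap monitoring-platform-upgrade -n <management-console-namespace>
  kubectl get pods -n <management-console-namespace> | grep monitoring-pvcservice
  kubectl delete pod <monitoring-pvcservice-pod> -n <management-console-namespace>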

The monitoring services fail when registering an environment using a base cluster with manual TLS

Problem: The CDP monitoring services on the OpenShift cluster fail when registering an environment with a Private Cloud base cluster that has TLS configured manually.

Workaround: Configure auto-TLS on the base cluster.

The Grafana Model dashboard for CML does not update the Model and Project dropdown lists after you update the time range

Problem: The Grafana Model dashboard for Cloudera Machine Learning does not update the Model and Project dropdown lists after you update the time range.

Workaround: After you set the time range, reload the browser window manually so that the Model and Project dropdown lists show the available values.

Filtering the alerts by state on an environment's Overview dashboard leads to error

Problem: When you filter the alerts on an environment's Overview dashboard on Grafana by the alert state, the Critical Alerts and Warnings fields display error messages.

Workaround: There is currently no workaround for the issue.

The error message indicating an invalid storage class name when registering an environment pops up only once

Problem: If you specify an invalid storage class name and then try to register the environment multiple times, the corresponding error message popup window is displayed only after the first attempt, not after every subsequent attempt.

Workaround:
  • Validate the storage class name that you specify when registering the environment. This is also part of the pre-installation checklist.
  • If you do not see the error message popup window or if the environment registration fails, then validate the class name on the OpenShift container deployment, as shown in the example below.
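
For example, you can list the storage classes that exist on the OpenShift cluster and compare them with the name you specified (kubectl get storageclass works equally if you are not using the oc client):

  oc get storageclass        # the storage class name you register must match one of these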

Cannot delete an environment when the registration of the compute cluster fails

Problem: If the CA certificates that you upload for the external database or Vault are incorrect and you then attempt to register an environment, the registration succeeds but the corresponding compute cluster is not created. Deleting the environment then fails with the message "Compute cluster deregistration failed with error."

Workaround: Upload the correct CA certificates from the Administration tab of the Management Console, and then delete the environment.

Environment deletion fails without an error message if CML experience is also installed

Problem: If you attempt to delete an environment on a CDP Private Cloud deployment that also has Cloudera Machine Learning experience installed, then the deletion might fail without an error message.

Workaround: If the environment deletion fails, then ensure that there are no experiences associated with the environment.

Management Console allows registration of two environments of the same name

Problem: If two users attempt to register environments of the same name at the same time, both registrations might succeed, resulting in an unusable environment.

Workaround: Delete the environment and ensure that only one user attempts to register a new environment.

Registration with an expired kubeconfig file makes the environment unusable

Problem: If you register an environment with an expired kubeconfig file, the environment becomes unusable.

Workaround: Delete the environment and register a new environment with a valid kubeconfig file.
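
Before registering, you can sanity-check the kubeconfig file with a read-only request; the file path below is only an illustration. If the embedded credentials have expired, the command fails with an authentication or certificate error instead of listing the nodes:

  kubectl --kubeconfig /path/to/environment-kubeconfig get nodes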

Filtering environments by status might display invalid status values

Problem: When filtering environments by status on the Environments page of the Management Console, a few status names might render incorrectly.

Workaround: There is currently no workaround for the issue.