General known issues
This topic describes the general service-wide known issues for Cloudera Data Warehouse (CDW) Private Cloud.
Known Issues in 1.4.1
- DWX-13813: Unable to select or delete custom pod configurations for Impala
Intermittently, you may not be able to select an Impala pod configuration from the EDIT POD CONFIGURATIONS tab on the Environment Details page.
Also, you may not be able to delete a pod configuration after you click Apply or Apply Changes and the “Configuration update initiated” message is displayed.
- You can delete a pod configuration right after creating it, but before clicking Apply or Apply Changes.
- DWX-13816: Refresh option at the environment level is not functional
- The Refresh option on the more options menu is clickable and displays a “Refresh of environment initiated” message, but it does not refresh the data in the backend.
- DWX-13759: DAS server and DAS WebApp pods exist even after disabling DAS
- After you disable DAS from the Advanced Settings page, you may still see the DAS server and DAS WebApp pods running in the Database Catalog and Virtual Warehouse namespaces respectively.
- Delete and recreate the Database Catalog and Virtual Warehouse.
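Before deleting and recreating the entities, you can confirm whether the stale pods are actually present. The following is a minimal sketch, assuming kubectl access to the underlying cluster; the helper function name is hypothetical, and you pass the namespace of your own Database Catalog or Virtual Warehouse:

```shell
# Hypothetical helper: list any DAS pods still running in the given namespace
# after DAS has been disabled from the Advanced Settings page.
list_das_pods() {
  kubectl get pods -n "$1" --no-headers | grep -i 'das'
}
```

If the command prints DAS server or DAS WebApp pods, the workaround above (delete and recreate the Database Catalog and Virtual Warehouse) applies.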
Known Issues before 1.4.1
- OPSAPS-58019: “SERVICE_PRINCIPAL is required for kinit” error while activating a new environment
- If the /etc/krb5.conf file on the Cloudera Manager host contains "include" or "includedir" directives, you may encounter Kerberos-related failures on both Embedded Container Service and Red Hat OpenShift platforms. You may see the following error in the Database Catalog's metastore pod logs: SERVICE_PRINCIPAL is required for kinit.
- To resolve this issue:
- SSH into the Cloudera Manager host as an administrator.
- Open the /etc/krb5.conf file for editing.
- Comment out the lines containing the "include" or "includedir" directives.
- Save the changes and exit.
- Recreate the environment.
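The editing steps above can also be scripted. The following is a minimal sketch, assuming GNU sed on the Cloudera Manager host; the function name is hypothetical, and it takes a backup before modifying the file:

```shell
# Hypothetical helper: back up the Kerberos client configuration, then comment
# out every line that begins with an "include" or "includedir" directive.
comment_krb5_includes() {
  conf="$1"   # path to the configuration file, e.g. /etc/krb5.conf
  cp "$conf" "$conf.bak"
  sed -i 's/^[[:space:]]*\(include\(dir\)\{0,1\}[[:space:]]\)/# \1/' "$conf"
}
```

After running it against /etc/krb5.conf, recreate the environment as described in the last step.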
- DWX-10403: Executor pods get stuck in pending state with a warning
- In rare circumstances, when Impala or Hive executors start up either due to autoscaling or by manually restarting the executors, the pods may get stuck in a pending state with a warning such as "volume node affinity conflict". This happens due to a race condition in the storage class that provides local volumes.
- Restart the pods so that they can be rescheduled on new nodes with enough resources.
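The restart can be done by deleting the pending pods so that their controller recreates and reschedules them. The following is a minimal sketch, assuming kubectl access to the cluster; the helper function name is hypothetical, and you pass the namespace of the affected Virtual Warehouse:

```shell
# Hypothetical helper: delete all pods stuck in the Pending phase
# (for example, due to a "volume node affinity conflict") so that
# their StatefulSet/Deployment recreates them on nodes with free resources.
restart_pending_pods() {
  ns="$1"
  kubectl get pods -n "$ns" --field-selector=status.phase=Pending \
      --no-headers -o custom-columns=':metadata.name' |
  while read -r pod; do
    [ -n "$pod" ] || continue
    kubectl delete pod "$pod" -n "$ns"
  done
}
```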
- DWX-8525 and DWX-8526: CDW does not support upgrades from CDW 1.2 to 1.3.2
- You may encounter issues while creating a new Virtual Warehouse on an existing Database Catalog (version 1.2) in an environment that has been upgraded from 1.2 to 1.3.2.
- The existing Virtual Warehouses continue to operate. However, to create a new Virtual Warehouse, you must reactivate the CDW environment after upgrading the base cluster from 1.2 to 1.3.2.
- DWX-8502: HMS health check does not check port 9083
- The HMS health check script does not check the health of its service port 9083 and may report an incorrect health status.