General known issues

This topic describes the general service-wide known issues for Cloudera Data Warehouse (CDW) Private Cloud.

Known issues identified in 1.5.2

DWX-16826: Executor pods do not get scheduled on the same node after the scale-down
When executor pods scale down and later scale back up, they attempt to be scheduled on the same node because their PersistentVolumeClaim (PVC) is bound to a Persistent Volume on that node. If other pods are scheduled on that node between the scale-down and the scale-up cycle, they can take away the resources the executor needs, preventing the executor pod from being scheduled.
Workaround:
  1. Cordon the node on which the executor pod is attempting to be scheduled. This evicts the other pods that were scheduled on that node.
  2. Wait until the evicted pods are scheduled on other available nodes in the cluster.
  3. Uncordon the node so that the executor pod can be scheduled on it again.
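The steps above can be sketched with kubectl; the node name below is a placeholder for the node that the executor PVC is bound to, and `kubectl drain` is used here to perform the eviction:

```shell
# Placeholder: substitute the node the executor pod is trying to land on.
NODE="worker-node-1"

# 1. Cordon the node and evict the pods currently scheduled on it.
kubectl cordon "$NODE"
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# 2. Watch until the evicted pods are Running on other nodes.
kubectl get pods --all-namespaces -o wide --watch

# 3. Uncordon the node so the executor pod can be scheduled on it again.
kubectl uncordon "$NODE"
```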
DWX-17179: Hue backend and Impala catalog pods are scheduled on the same node in an HA deployment
You may notice that multiple replicas of Hue backend, frontend, and Impala catalog pods get scheduled on the same node in HA mode.
Workaround: You can manually move the pods to other nodes by adding anti-affinity rules to the deployments.
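As a sketch, a `podAntiAffinity` rule such as the following can be added to a Deployment spec to keep replicas on separate nodes. The label key and value (`app: hue-backend`) are assumptions; use the labels actually present on your pods (check with `kubectl get pods --show-labels`):

```yaml
# Fragment of a Deployment manifest; label values are hypothetical.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: hue-backend
            topologyKey: kubernetes.io/hostname
```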
VIZ-2269: Issue with impersonation in CDW Impala connections
You may see the following error while creating or editing a data connection to a CDW Impala or Hive Virtual Warehouse using the connection details auto-populated from the CDW Warehouse drop-down: "User <username> is not authorized to delegate to <username>". This happens because the Impersonation and Trusted Impersonation options are both enabled. This issue affects CDV 7.1.6.
Workaround: If the message appears while creating a new data connection, refresh the page; this resets the Impersonation and Trusted Impersonation options. If the message appears on the Edit Data Connection modal, copy the hostname, username, HTTP path, and other values from the Advanced tab, and manually edit the existing CDW connection. A manual edit typically does not trigger this issue.

Known issues identified in 1.5.1

DWX-15142: Character restriction on environment name when using FreeIPA server version 4.9.8 and higher
FreeIPA is supported as an authentication mechanism starting with the 1.5.1 release. If you are using FreeIPA version 4.9.8 or higher, note that host names are limited to 64 characters. Because the environment name is part of the host name, the environment name must not exceed 17 characters.
Workaround: None.

Known issues identified in 1.5.0

DWX-18903: Service "postgres-service-default-warehouse" is invalid: spec.externalName error
You see the following error during the Database Catalog creation stage after activating the environment in CDW:
Service "postgres-service-default-warehouse" is invalid: spec.externalName
a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
This can happen if the value of the Hive Metastore Database Host (hive_metastore_database_host) property on the base cluster is not specified in lowercase.
Workaround: Go to Cloudera Manager > Clusters > Hive service > Configuration and change the value specified in the Hive Metastore Database Host field to lowercase.
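The lowercase requirement can be checked from a shell before applying the change; this sketch assumes the host value has been copied into a variable (the hostname shown is hypothetical):

```shell
# Hypothetical value copied from the Hive Metastore Database Host field.
HMS_HOST="DbHost.Example.com"

# RFC 1123 subdomains must be lowercase; compute the lowercase form.
HMS_HOST_LOWER=$(printf '%s' "$HMS_HOST" | tr '[:upper:]' '[:lower:]')

if [ "$HMS_HOST" != "$HMS_HOST_LOWER" ]; then
  echo "hive_metastore_database_host is not lowercase; set it to: $HMS_HOST_LOWER"
fi
```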

Known issues identified before 1.4.1

DWX-10403: Executor pods get stuck in pending state with a warning
In rare circumstances, when Impala or Hive executors start up either due to autoscaling or by manually restarting the executors, the pods may get stuck in a pending state with a warning such as "volume node affinity conflict". This happens due to a race condition in the storage class that provides local volumes.
Workaround: Restart the pods so that they can be rescheduled on new nodes with enough resources.
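A restart can be performed by deleting the pending pods so that the scheduler places them again; the namespace and pod name below are placeholders to be replaced with values from your cluster:

```shell
# Find the pods stuck in the Pending phase (namespace is a placeholder).
kubectl get pods -n <warehouse-namespace> --field-selector=status.phase=Pending

# Delete a pending executor pod; its controller recreates it and the
# scheduler attempts to place the new pod on a node with free resources.
kubectl delete pod <executor-pod-name> -n <warehouse-namespace>
```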
DWX-8502: HMS health check does not check port 9083
The HMS health check script does not check the health of the HMS service port 9083 and may report an incorrect health status.
Workaround: None.
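Although there is no workaround in the product, the port can be probed manually; this is a sketch only, and the hostname is a placeholder for your HMS host:

```shell
# Hypothetical HMS host; substitute the real one from your cluster.
HMS_HOST="metastore.example.com"

# Check TCP reachability of the HMS service port 9083 (5-second timeout).
if nc -z -w 5 "$HMS_HOST" 9083; then
  echo "HMS port 9083 is reachable"
else
  echo "HMS port 9083 is NOT reachable"
fi
```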