General known issues

This topic describes the general service-wide known issues for Cloudera Data Warehouse (CDW) Private Cloud.

Known issues identified in 1.5.4

DWX-19477: Pods are stuck in pending state when you activate an environment with quota management enabled
Data Visualization pods get stuck in the pending state, waiting for resource allocation, when you activate an environment with quota management enabled. This is caused by a bug in the resource calculation for the Data Visualization instance that is created from the Data Warehouse UI. You may observe the following output when you run the kubectl get pods command:
kubectl get pods -n viz-rand-uru
NAME                                 READY   STATUS      RESTARTS   AGE
service-discovery-56cc8ddc94-jpr5m   1/1     Running     0          6m4s
viz-webapp-0                         0/1     Pending     0          5m46s
viz-webapp-vizdb-create-job-588bs    0/1     Completed   0          6m3s 
Disable quota management before creating a Data Visualization instance from the Data Warehouse service.
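To confirm that a pod is blocked on resource allocation, you can inspect its scheduling events. A minimal sketch, reusing the namespace and pod name from the output above:
# The Events section at the end of the output explains why the scheduler cannot place the pod
kubectl describe pod viz-webapp-0 -n viz-rand-uru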
DWX-18558: The executor pods in Impala Virtual Warehouse do not update when you change the Virtual Warehouse to use a different resource template
Suppose you created an Impala Virtual Warehouse with a certain resource template. If you later apply a different resource template that has a different local storage size, the operation fails silently and the following pods are not updated: hiveserver2, impala-coordinator, impala-executor, and hue-backend. This happens because Kubernetes does not support changing the storage size of existing volumes.
None. Cloudera recommends that you select the appropriate storage size when you create the Virtual Warehouse and avoid switching to a resource template with a different volume size.
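If you are unsure which volume size a Virtual Warehouse is currently using, you can list the capacities of its persistent volume claims. A minimal sketch; the namespace name is illustrative:
# Show each PVC in the Virtual Warehouse namespace with its provisioned capacity
kubectl get pvc -n impala-1234 -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage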

Known issues identified in 1.5.3

DWX-17880: Hive Virtual Warehouse does not start if the bind user contains special characters
The Hive Virtual Warehouse may fail to start if the LDAP bind credential password contains any of the following special characters: < > & ' ". The HiveServer2 (HS2) pod gets stuck in the CrashLoopBackOff state with an error similar to the following in its logs:
error parsing conf file:/etc/hive/conf/hive-site.xml
com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '&' (code 38) in content after '<' (malformed start element?)
 at [row,col,system-id]: [388,13,"file:/etc/hive/conf/hive-site.xml"]
  1. Change the LDAP bind credentials in the Management Console and ensure that they do not contain the following unsupported special characters: < > & ' ".
  2. Reactivate the environment in CDW.
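For context, the parse error occurs because hive-site.xml is an XML file, and a literal & (like < and >) in a property value must be escaped to be valid XML. A hypothetical property value illustrating the difference:
<!-- Invalid: a raw ampersand breaks the XML parser -->
<value>p&ssword</value>
<!-- Valid XML encoding of the same value -->
<value>p&amp;ssword</value>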

Known issues identified in 1.5.1

DWX-15142 Character restriction on environment name when using FreeIPA server version 4.9.8 and higher
FreeIPA is supported as an authentication mechanism starting with the 1.5.1 release. FreeIPA version 4.9.8 and higher limits host names to 64 characters. Because the environment name is part of the host name, the environment name must not exceed 17 characters.
None.
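If you want to check a candidate environment name before activating the environment, a quick shell sketch (the variable and name are illustrative):
ENV_NAME="my-cdw-env"
# Environment names longer than 17 characters push the generated host name past 64 characters
[ ${#ENV_NAME} -le 17 ] && echo "OK (${#ENV_NAME} characters)" || echo "Too long (${#ENV_NAME} characters)"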

Known issues identified in 1.5.0

DWX-18903: Service "postgres-service-default-warehouse" is invalid: spec.externalName error
You see the following error during the Database Catalog creation stage after activating the environment in CDW:
Service "postgres-service-default-warehouse" is invalid: spec.externalName
a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
This can happen if the value of the Hive Metastore Database Host (hive_metastore_database_host) property on the base cluster is not specified in lowercase.
Go to Cloudera Manager > Clusters > Hive service > Configuration and change the value specified in the Hive Metastore Database Host field to be in lowercase.
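To check whether a host name value will pass the validation quoted in the error, you can test it against the same regular expression. A minimal sketch; the host name value is illustrative:
HOST="MyHost.example.com"
# The regex is the one cited in the error message; uppercase letters fail it
echo "$HOST" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$' && echo "valid" || echo "invalid: use lowercase"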

Known issues identified before 1.4.1

DWX-10403: Executor pods get stuck in pending state with a warning
In rare circumstances, when Impala or Hive executors start up, either due to autoscaling or a manual restart, the pods may get stuck in the pending state with a warning such as "volume node affinity conflict". This happens due to a race condition in the storage class that provides local volumes.
Restart the pods so that they can be rescheduled on new nodes with enough resources.
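Deleting a stuck pod is usually enough to restart it, because its controller recreates the pod and the scheduler places it again. A minimal sketch; the pod and namespace names are illustrative:
kubectl delete pod impala-executor-000-0 -n impala-1234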
DWX-8502: HMS health check does not check port 9083
The Hive Metastore (HMS) health check script does not check the health of the HMS service port 9083 and may therefore report an incorrect health status.
None.
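Until this is addressed, you can probe the port manually from a host that can reach the metastore. A minimal sketch; the host name is illustrative:
# Succeeds only if something is listening on the HMS thrift port
nc -z metastore-host.example.com 9083 && echo "port 9083 open" || echo "port 9083 closed"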