Known issues and limitations

This section lists known issues and limitations that you might run into while using the Data Warehouse service.

General Known Issues in Data Warehouse service

DWX-3420: For Azure environments, the Show Kubeconfig option does not work
Problem: The more menu option used to display the kubeconfig file for Azure environments is grayed out and does not work.

Workaround: Use a combination of the Azure CLI az aks get-credentials command and then use the Kubernetes kubectl config view command to view the kubeconfig file for your AKS cluster. For more information, see Get and verify the configuration information in the Microsoft Azure documentation.
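The two commands can be combined as follows; the resource group and cluster names are placeholders that you must replace with your own values:

az aks get-credentials --resource-group <resource-group> --name <aks-cluster-name>
kubectl config view

The first command merges the AKS cluster credentials into your local kubeconfig file, and the second prints the merged kubeconfig.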
DWX-2049: For the private networking feature on AWS environments, only the default DHCP option set created with the VPC is supported.
Problem: When the VPC is created in AWS, a default DHCP option set is created. For this default DHCP option set the domain name option is set to <REGION>.compute.internal, where <REGION> is the AWS region where the VPC was created. The Data Warehouse service sets up an nginx ingress controller (the LoadBalancer service) with externalTrafficPolicy set to Local (externalTrafficPolicy=Local) for better performance because it means there is one less network hop.
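For reference, the relevant setting on the ingress controller Service resembles the following sketch; the Service name is illustrative, and this is not the exact manifest deployed by the service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local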
Workaround: For this feature to work correctly, only the default DHCP option set is supported, and the domain name in it must not be changed. Changing the domain name to a custom domain, or specifying multiple domain names, prevents the Kubernetes network proxy (kube-proxy) from starting correctly. If kube-proxy does not start, the Amazon ELB (load balancer) has no healthy targets, and workload endpoints such as Data Analytics Studio (DAS), JDBC, or Hue return 503 errors. This is a known issue in Kubernetes that has not yet been fixed.
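You can confirm which domain name your VPC's DHCP option set uses with the AWS CLI; the option set ID below is a placeholder:

aws ec2 describe-dhcp-options --dhcp-options-ids <dhcp-options-id>

For the default option set, the domain-name value in the output is <REGION>.compute.internal.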

Data Analytics Studio (DAS) in Data Warehouse service

DWX-4020: Add column functionality via the upload table option does not work.
Problem: You may not be able to add or delete columns or change the table schema after creating a new table using the upload table feature.
Workaround: N/A
DWX-929: DAS UI displays the internal JDBC URL.
Problem: DAS displays the internal JDBC URL on its About page instead of the correct JDBC URL to use to connect to the data warehouse.
Workaround: To copy the correct JDBC URL to use to connect to the data warehouse, go to the Data Warehouse service Overview page, locate the Virtual Warehouse, and then click Copy JDBC URL.
DWX-2592: DAS cannot parse certain characters in strings and comments.
Problem: DAS cannot parse semicolons (;) and double hyphens (--) in strings and comments. For example, if you have a semicolon in a query such as the following, the query might fail:

SELECT * FROM properties WHERE prop_value = "name1;name2";
Queries with double hyphens (--) might also fail. For example:

SELECT * FROM test WHERE option = '--name';
Workaround: If a semicolon is present in a comment, then remove the semicolon before running the query or remove the comment entirely. For example:

SELECT * FROM test; -- SELECT * FROM test;
Should be changed to:

SELECT * FROM test; /* comment; comment */
In the same manner, remove any double hyphens before running queries to avoid failures in DAS.
Older versions of Google Chrome browser might cause issues.
Problem: You might experience problems while using faceted search in older versions of the Google Chrome browser.
Workaround: Use the latest version (71.x or later) of the Google Chrome browser.
BUG-94611: Visual Explain for the same query shows different graphs.
Problem: Visual Explain for the same query shows different graphs on the Compose page and the Query Details page.
Workaround: N/A

Database Catalog

There are no known issues.

Hive 3 in Data Warehouse service

Result caching:
This feature is limited to 10 GB.
Data caching:
This feature is limited to 200 GB per compute node; total cache capacity is 200 GB multiplied by the number of compute nodes.
DWX-3443: ANALYZE TABLE…COMPUTE STATISTICS fails with NullPointerException on some Virtual Warehouse versions
Problem: The ANALYZE TABLE…COMPUTE STATISTICS statement is run to gather statistics on a table and write them to the metastore. However, if you run this statement against a table in an affected Hive Virtual Warehouse version, a NullPointerException (NPE) might be returned.
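For example, a statement like the following might return the NPE on an affected version (the table name is illustrative):

ANALYZE TABLE sales_data COMPUTE STATISTICS;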

To determine the version of the Virtual Warehouse:

  1. In the Data Warehouse service UI, select Virtual Warehouses in the left navigation menu.
  2. On the Virtual Warehouses page, locate the Virtual Warehouse that is returning the error, and click on its Name.
  3. On the details page for the Virtual Warehouse, the version is listed at the top.

Workaround: Upgrade to a later version of Cloudera Runtime for the Virtual Warehouse:

  1. In the Data Warehouse service UI, select Overview in the left navigation menu.
  2. In the Overview page, click More… in the Environments column to expand it, and search for the environment that is being used for the Virtual Warehouse that is returning the error.

  3. After you locate the environment, click the delete icon in the upper right corner of the environment tile.

    Clicking this icon launches the Action dialog box, but it does not delete the environment.

  4. In the Action dialog box, click OK.

    Clicking OK in the Action dialog box de-activates the environment.

  5. After the environment has been de-activated, an activation icon appears on the tile. Click the activation icon to re-activate the environment.

    When you re-activate the environment, it automatically refreshes the Cloudera Runtime version for the Virtual Warehouse and you should no longer get the NPE error.

DWX-2690: Older versions of Beeline return SSLPeerUnverifiedException when submitting a query

Problem: When submitting queries to Virtual Warehouses that use Hive, older Beeline clients return an SSLPeerUnverifiedException error: Host name '' does not match the certificate subject provided by the peer (CN=*) (state=08S01,code=0)

Workaround: Only use Beeline clients from CDP Runtime version or later.
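For example, a connection from a newer Beeline client looks like the following; the hostname is a placeholder, and the exact URL options for your Virtual Warehouse are available through Copy JDBC URL:

beeline -u 'jdbc:hive2://<virtual-warehouse-host>:443/default;transportMode=http;httpPath=cliservice;ssl=true'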

DWX-1952: Cloned Hive Virtual Warehouses do not have query executors or query coordinators
Problem: When you clone an existing Hive Virtual Warehouse, it is created with only HiveServer and Data Analytics Studio (DAS) application container groups (Kubernetes pods). This means that the cloned Virtual Warehouse cannot execute queries.

To manually add query executors and query coordinators to the cloned Hive Virtual Warehouse:

  1. Click the options menu on the cloned Virtual Warehouse, and then select Edit.

  2. In the Virtual Warehouse edit page, change a value, such as the AutoSuspend Timeout setting, and then click Apply.

    This causes the Data Warehouse service to create query executors and query coordinators so you can execute queries on the cloned Virtual Warehouse.

Impala in Data Warehouse service

DWX-3914: Collect Diagnostic Bundle option does not work on older environments
Problem: The Collect Diagnostic Bundle menu option in Impala Virtual Warehouses does not work for older environments.

Data caching:
This feature is limited to 200 GB per compute node; total cache capacity is 200 GB multiplied by the number of compute nodes.
Sessions with Impala continue to run for 15 minutes after the connection is disconnected.
When a connection to Impala is disconnected, the session continues to run for 15 minutes so that the user or client can reconnect to the same session by presenting the session_token. After 15 minutes, the client must re-authenticate to Impala to establish a new connection.