Known Issues in Hue

Learn about the known issues in Hue, the impact or changes to the functionality, and the workaround.

Known Issues identified in Cloudera Runtime 7.3.1.500 SP3:

CDPD-88964: Hue Logs Missing in Hue UI
7.3.1.500
You may experience an issue where the Hue UI displays only a few lines of logs instead of the complete Hue logs. This can occur when leftover Gunicorn processes interfere with logging and with the display of logs in the Hue interface.
  1. Stop the Hue service.
  2. Terminate any remaining Gunicorn processes to clear hung or orphan processes that may be causing the issue. Run the following command (use sudo if not running as root):
    
    # pids=$(ps -efwww | grep "rungunicornserver" | grep -v grep | awk '{ print $2 }') && for i in $pids; do kill -9 "$i"; done
  3. Restart the Hue service.
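After the restart, you can optionally confirm that the cleanup worked before checking the logs page. This is a sketch, not part of the official procedure; it assumes the worker processes match the rungunicornserver pattern described above.

```shell
# Optional check (sketch): verify that no stale Hue Gunicorn workers
# remain after the restart. The bracketed character in the pattern
# stops pgrep -f from matching this command's own command line.
if pgrep -f 'rungunicorn[s]erver' > /dev/null 2>&1; then
    echo "stale gunicorn workers still present"
else
    echo "no stale gunicorn workers found"
fi
```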
CDPD-90510: Defunct Hue gunicorn worker processes
7.3.1.500
Hue on Ubuntu 22 using Oracle Database can accumulate defunct rungunicornserver worker processes due to incomplete process termination and stale database connections. This leads to a cluttered process table but does not critically impact service functionality.
Periodically clean defunct worker processes with:
pgrep -f 'hue rungunicornserver' | xargs -r kill -9
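If the clutter recurs, one option (an assumption, not a Cloudera-provided mechanism) is to schedule the same cleanup from the root crontab; the hourly schedule below is purely illustrative:

```
# Illustrative root crontab entry (assumption): reap defunct Hue
# Gunicorn workers once an hour. Adjust the schedule to your needs.
0 * * * * pgrep -f 'hue rungunicornserver' | xargs -r kill -9
```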

Known Issues identified in Cloudera Runtime 7.3.1.400 SP2:

There are no new known issues identified in this release.

Known Issues identified in Cloudera Runtime 7.3.1.300 SP1 CHF 1:

CDPD-83015: Files or directories with the % character may fail to open or copy
7.3.1.300, 7.3.1.400, 7.3.1.500
On RHEL 9.5, files or directories containing the % character may fail to open or copy due to Apache HTTPD version 2.4.62.
None.

Known Issues identified in Cloudera Runtime 7.3.1.200 SP1:

There are no new known issues identified in this release.

Known Issues identified in Cloudera Runtime 7.3.1.100 CHF 1:

There are no new known issues identified in this release.

Known Issues in Cloudera Runtime 7.3.1

CDPD-58978: Batch query execution using Hue fails with Kerberos error
7.2.16 SPs and higher versions, 7.2.17 SPs and higher versions, 7.3.1 and higher versions
When you run Impala queries in batch mode, they fail with a Kerberos error even if the keytab is configured correctly. This is because submitting Impala, Sqoop, Pig, or PySpark queries in batch mode launches a shell-script Oozie job from Hue, which is not supported on a secure cluster.
There is no workaround. You can submit the queries individually.
CDPD-54376: Clicking the home button on the File Browser page redirects to HDFS user directory
7.2.17 SPs and higher versions, 7.3.1 and higher versions
When you are previewing a file on any supported filesystem, such as S3 or ABFS, and you click the Home button, you are redirected to the HDFS user home directory instead of your home directory on that filesystem.
None.
CDPD-43293: Unable to import Impala table using Importer
7.2.16 SPs and higher versions, 7.2.17 SPs and higher versions, 7.3.1 and higher versions
Creating Impala tables using the Hue Importer may fail.

If you have both the Hive and Impala services installed on your cluster, you can import the table by selecting the Hive dialect from Tables > Sources.

If only the Impala service is installed on your cluster, go to Cloudera Manager > Clusters > Hue > Configuration and add the following lines to the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini field:
[beeswax]
max_number_of_sessions=1
CDPD-64541, CDPD-63617: Creating managed tables using Hue Importer fails on RAZ-enabled GCP environments
7.2.18 SPs and higher versions, 7.3.1 SPs and higher versions
On Google Cloud Platform (GCP) environments, creating managed tables fails in both the Hive and Impala dialects, and only temporary (tmp) tables are created. This is most likely because Hive and Impala cannot run LOAD DATA INPATH from Google Cloud Storage (outside of Hue).
None.
CDPD-56888: Renaming a folder with special characters results in a duplicate folder with a new name on AWS S3
7.2.17 SPs and higher versions, 7.2.18 SPs and higher versions, 7.3.1 SPs and higher versions
On AWS S3, if you try to rename a folder with special characters in its name, a new folder is created as a copy of the original folder with its contents. Also, you may not be able to delete the folder containing special characters.
You can rename or delete a directory having special characters using the HDFS commands as follows:
  1. SSH into your Cloudera environment host.
  2. To delete a directory within your S3 bucket, run the following command:
    hdfs dfs -rm -r [***COMPLETE-PATH-TO-S3-BUCKET***]/[***DIRECTORY-NAME***]
  3. To rename a folder, create a new directory and run the following command to move files from the source directory to the target directory:
    hdfs dfs -mkdir [***DIRECTORY-NAME***]
    hdfs dfs -mv [***COMPLETE-PATH-TO-S3-BUCKET***]/[***SOURCE-DIRECTORY***] [***COMPLETE-PATH-TO-S3-BUCKET***]/[***TARGET-DIRECTORY***]
CDPD-48146: Error while browsing S3 buckets or ADLS containers from the left-assist panel
7.2.17 SPs and higher versions, 7.2.18 SPs and higher versions, 7.3.1 SPs and higher versions
You may see the following error while trying to access the S3 buckets or ADLS containers from the left-assist panel in Hue: Failed to retrieve buckets: :1:0: syntax error.
Access the S3 buckets or ADLS containers using the File Browser.
CDPD-42619: Unable to import a large CSV file from the local workstation
7.2.16 SPs and higher versions, 7.2.17 SPs and higher versions, 7.2.18 SPs and higher versions, 7.3.1 SPs and higher versions
You may see an error message while importing a CSV file into Hue from your workstation, stating that you cannot import files larger than 200 KB.
Upload the file to S3 or ABFS and then import it into Hue using the Importer.
Hue Importer is not supported in the Data Engineering template
When you create a Cloudera Data Hub cluster using the Cloudera Data Engineering template, the Importer application is not supported in Hue.


Unsupported features

CDPD-59595: Spark SQL does not work with all Livy servers that are configured for High Availability
Hue does not support Spark SQL with Livy servers configured in High Availability (HA) mode, because Hue does not automatically connect to one of the Livy servers. You must specify a Livy server in the Hue Advanced Configuration Snippet as follows:
[desktop]
[spark]
livy_server_url=http(s)://[***LIVY-FOR-SPARK3-SERVER-HOST***]:[***LIVY-FOR-SPARK3-SERVER-PORT***] 
Moreover, you may see the following error in Hue when you submit a SparkSQL query: Expecting value: line 2 column 1 (char 1). This happens when the Livy server does not respond to the request from Hue.
Specify each of the Livy servers in the livy_server_url property one at a time, and use the one that does not cause the issue.
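To narrow down which Livy server to use, you can probe each candidate's REST endpoint before setting livy_server_url. This is a sketch; the host names are placeholders you must replace with your own Livy for Spark 3 server hosts and ports, and GET /sessions is a standard Livy REST endpoint.

```shell
# Sketch: probe candidate Livy servers before picking one for
# livy_server_url. Each argument is a HOST:PORT pair (assumptions;
# replace with your Livy for Spark 3 server hosts and ports).
probe_livy() {
    for host in "$@"; do
        if curl -fsS --max-time 5 "http://$host/sessions" > /dev/null 2>&1; then
            echo "responding: $host"
        else
            echo "not responding: $host"
        fi
    done
}

# Example usage (hypothetical hosts):
# probe_livy livy-server-1.example.com:8998 livy-server-2.example.com:8998
```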
Importing and exporting Oozie workflows across clusters and between different CDH versions is not supported

You can export Oozie workflows, schedules, and bundles from Hue and import them only within the same cluster, provided the cluster is unchanged. You can migrate bundle and coordinator jobs with their workflows only if their arguments have not changed between the old and the new cluster: for example, hostnames, NameNode and Resource Manager names, YARN queue names, and all other parameters defined in the workflow.xml and job.properties files.

Using the import-export feature to migrate data between clusters is not recommended. To migrate data between different versions of CDH, for example, from CDH 5 to Cloudera 7, you must dump the Hue database on the old cluster, restore it on the new cluster, and set up the database in the new environment. Also, the authentication method on the old and the new cluster must be the same, because Oozie workflows are tied to a user ID, and the exact user ID must be present in the new environment so that when a user logs into Hue, they can access their workflows.