Learn about the known issues in Hue, their impact on functionality, and
the workarounds.
Known Issues identified in Cloudera Runtime 7.3.1.500 SP3:
CDPD-88964: Hue Logs Missing in Hue UI
7.3.1.500
You may experience an issue where the Hue UI displays only a few
lines of logs instead of the complete Hue logs. This can occur due to leftover Gunicorn
processes that interfere with the proper logging and display of logs within the Hue
interface.
Stop the Hue service.
Terminate any remaining Gunicorn processes to clear hung or orphan
processes that may be causing the issue. Run the following command (use sudo if not
running as root):
# pids=$(ps -efwww | grep "[r]ungunicornserver" | awk '{ print $2 }') && for i in $pids; do kill -9 "$i"; done
Restart the Hue service.
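The PID-extraction step of the command above can be exercised on its own. The sketch below is illustrative (the helper name is not part of the product) and assumes that leftover worker command lines contain rungunicornserver:

```shell
# Illustrative helper: pull the PIDs of leftover Gunicorn workers out of
# ps-style output. The bracketed pattern [r]ungunicornserver prevents grep
# from matching its own command line, so no "grep -v grep" step is needed.
extract_gunicorn_pids() {
  grep "[r]ungunicornserver" | awk '{ print $2 }'
}

# Example use against live process output (add sudo if not running as root):
# for pid in $(ps -efwww | extract_gunicorn_pids); do kill -9 "$pid"; done
```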
CDPD-90510: Defunct Hue gunicorn worker processes
7.3.1.500
Hue on Ubuntu 22 using Oracle Database can accumulate defunct
rungunicornserver worker processes due to incomplete process termination and stale
database connections. This leads to a cluttered process table but does not critically
impact service functionality.
Known Issues identified in Cloudera Runtime 7.3.1.400 SP2:
There are no new known issues identified in this release.
Known Issues identified in Cloudera Runtime 7.3.1.300 SP1 CHF 1
CDPD-83015: Files or directories with the %
character may fail to open or copy
7.3.1.300, 7.3.1.400, 7.3.1.500
On RHEL 9.5, files or directories containing the % character may fail to open or copy due to Apache HTTPD
version 2.4.62.
None.
Known Issues identified in Cloudera Runtime 7.3.1.200 SP1
There are no new known issues identified in this release.
Known Issues identified in Cloudera Runtime 7.3.1.100 CHF 1
There are no new known issues identified in this release.
Known Issues in Cloudera Runtime 7.3.1
OPSAPS-75134: LDAP and Kerberos dual authentication fails with
HiveOnTez in HTTP Transport Mode
7.3.1 and its higher versions, Cloudera Manager 7.13.1 and its higher versions
When you enable LDAP for the HiveOnTez service in a Kerberos
environment with the transport mode set to HTTP, Hue fails to load database information due
to an unsupported server authentication combination. HiveOnTez requires the
hive.server2.authentication=LDAP,KERBEROS parameter, but Hue supports the
KERBEROS and LDAP values only separately, not
combined, causing a conflict and preventing a successful connection to HiveServer2.
Log in to Cloudera Manager > Hive-On-Tez > Configuration > Hive Service Advanced Configuration Snippet (Safety Valve) for
hive-site.xml and set the following value:
hive.server2.authentication=LDAP,KERBEROS
Go to Hue > Configuration > Hue Server Advanced Configuration Snippet (Safety Valve) for
hive-site.xml and set the following value:
hive.server2.authentication=KERBEROS
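Expressed as hive-site.xml snippets, the two safety-valve entries above would look like the following sketch (the XML property form is the usual format for these snippets; the values are taken from the steps above):

```xml
<!-- Hive-On-Tez: Hive Service Advanced Configuration Snippet for hive-site.xml -->
<property>
  <name>hive.server2.authentication</name>
  <value>LDAP,KERBEROS</value>
</property>

<!-- Hue: Hue Server Advanced Configuration Snippet for hive-site.xml -->
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
```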
OPSAPS-69659: Hue service fails on restart with "Unable to find
psycopg2 2.5.4" error
7.1.7 SP1 and its CHFs
Hue service fails to restart and you see the following error:
Unable to find psycopg2 2.5.4. This could be because you
have installed Python in a non-default location and Hue is unable to locate the psycopg2
PostgreSQL database adapter.
You must specify the path where you have installed Python
in the PYTHONPATH property in the Hue Advanced Configuration Snippet
using Cloudera Manager.
Log in to Cloudera Manager as an Administrator.
Go to Clusters > Hue > Configurations and add the following key and value in the Hue Service
Environment Advanced Configuration Snippet (Safety Valve)
field:
Key: PYTHONPATH
Value: [***PYTHON-PATH***]
Replace [***PYTHON-PATH***] with the actual location where you have installed Python. For example, /opt/cloudera/parcels/CDH/lib/hue/build/env/lib/python3.8/site-packages
Click Save Changes.
Restart the Hue service.
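To confirm that the PYTHONPATH value you set actually makes psycopg2 importable, a quick check such as the following can help. The helper below is illustrative, not part of Hue, and the path is the example location from the step above:

```shell
# Example install location from the text; substitute your actual Python path.
SITE_PACKAGES="/opt/cloudera/parcels/CDH/lib/hue/build/env/lib/python3.8/site-packages"

# Illustrative helper: print "ok" if a module imports under the given
# PYTHONPATH, "missing" otherwise.
check_module() {  # usage: check_module <pythonpath> <module>
  PYTHONPATH="$1" python3 -c "import $2; print('ok')" 2>/dev/null || echo "missing"
}

# check_module "$SITE_PACKAGES" psycopg2   # prints "ok" once the path is right
```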
OPSAPS-73942: The upgrade fails due to configuration issues with
the query processor service.
7.3.1 and its higher versions
When upgrading from CDP Private Cloud Base 7.1.x versions with
the Query Processor service installed to Cloudera 7.3.1, the upgrade wizard fails during
the configuration validation phase with the following warning:
"The version of the
service query-processor can't be upgraded. You must remove the service before
upgrading to the Cloudera 7.3.1."
To proceed with the upgrade:
Remove the Query Processor service from the
cluster.
Perform the upgrade to Cloudera 7.3.1.
Re-add the Query Processor service post-upgrade if
needed.
CDPD-58978: Batch query execution using Hue fails with Kerberos
error
7.1.9 SP1 and its CHFs, 7.3.1
and its higher versions
When you run Impala queries in batch mode, you encounter
failures with a Kerberos error even if the keytab is configured correctly. This is
because submitting Impala, Sqoop, Pig, or PySpark queries in batch mode launches a
shell-script Oozie job from Hue, which is not supported on a secure cluster.
There is no workaround. You can submit the queries
individually.
CDPD-59677: Unable to view Phoenix tables on the left assist in
Hue
7.1.9 SP1 and its CHFs, 7.3.1
and its higher versions
On clusters secured with Knox, you may not be able to see
Phoenix tables on the left assist that are present under the default database (that is,
an empty ('') database).
None.
CDPD-58142: A query is not pre-populated in the Hue editor after
clicking on the Re Execute button
7.1.9 SP1 and its CHFs, 7.3.1
and its higher versions
When you click Re Execute to rerun a
query from the Job Browser > Queries > Query Details page, the query does not get populated on the Hue editor, as
expected.
None.
CDPD-39330: Unable to use the pip command in Cloudera
7.1.7 SP1 and its CHFs
You may not be able to use the pip command in
Cloudera 7.1.7 or higher and may see the following
error when using pip in a command: “ImportError: cannot
import name chardet”.
Hue UI is blank upon login after upgrading to Cloudera 7.1.7 from CDH 6
7.1.7 SP1 and its CHFs
If your cluster was secured using Knox, and if you have upgraded
from CDH 6 to Cloudera 7.1.7, then you may see a blank
Hue screen. This could happen because the knox_proxyhosts parameter is
newly introduced in Cloudera, and it is possible that
this parameter is not configured in Cloudera Manager under Hue
configuration.
Specify the host on which you have installed Knox in the
Hue Knox Proxy Hosts configuration as follows:
Log in to Cloudera Manager as an Administrator.
Obtain the host name of the Knox Gateway by going to Clusters > Knox service > Instances.
Go to Clusters > Hue service > Configuration and search for the Knox Proxy Hosts
field.
Specify the Knox Gateway hostname in the Knox Proxy Hosts
field.
Click Save Changes and restart the Hue service.
Error while rerunning Oozie workflow
You may see an error such as the following while rerunning an already executed and
finished Oozie workflow through the Hue web interface: E0504: App directory
[hdfs:/cdh/user/hue/oozie/workspaces/hue-oozie-1571929263.84] does not
exist.
To resolve this issue, add the following property in the Hue Load Balancer Advanced
Configuration Snippet:
Sign in to Cloudera Manager as an administrator.
Go to Clusters > Hue service > Configurations > Load Balancer and search for the Load Balancer Advanced Configuration
Snippet (Safety Valve) for httpd.conf field.
Specify MergeSlashes OFF in the Load Balancer
Advanced Configuration Snippet (Safety Valve) for httpd.conf
field.
Click Save Changes.
Restart the Hue Load Balancer.
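The safety-valve entry itself is a single Apache httpd directive (available in httpd 2.4.39 and later), which tells the load balancer not to merge consecutive slashes in the workspace path:

```
MergeSlashes OFF
```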
Unsupported features
CDPD-59595: Spark SQL does not work with all Livy servers that
are configured for High Availability
SparkSQL in Hue with Livy servers in HA mode is not
supported. Hue does not automatically connect to one of the Livy servers, so you must
specify the Livy server to use in the Hue Advanced Configuration Snippet. In addition,
you may see the following error in Hue when you submit a SparkSQL query:
Expecting value: line 2 column 1 (char 1). This happens
when the Livy server does not respond to the request from Hue.
Specify each of the Livy servers in the
livy_server_url property one at a time and use the one that does not
cause the issue.
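For reference, livy_server_url lives in the [spark] section of the Hue configuration; a hedged sketch of the safety-valve entry (the host name is a placeholder):

```ini
[spark]
# Try each Livy server here one at a time; keep the one that responds.
livy_server_url=http://livy-host.example.com:8998
```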
CDPD-18491: PySpark and SparkSQL are not supported with Livy in
Hue
Hue does not support configuring and using PySpark and SparkSQL
with Livy in Cloudera Base on premises.
Importing and exporting Oozie workflows across clusters and
between different CDH versions is not supported
You can export Oozie workflows, schedules, and bundles from Hue and import them only
within the same cluster, provided the cluster is unchanged. You can migrate bundle and
coordinator jobs with their workflows only if their arguments have not changed between
the old and the new cluster, for example, hostnames, NameNode, Resource Manager names,
YARN queue names, and all the other parameters defined in the
workflow.xml and job.properties files.
Using the import-export feature to migrate data between clusters is not recommended.
To migrate data between different versions of CDH, for example, from CDH 5 to Cloudera 7, you must
dump the Hue database on the old cluster, restore it on the new cluster, and
set up the database in the new environment. Also, the authentication method on the old
and the new cluster should be the same because the Oozie workflows are tied to a user
ID, and the exact user ID needs to be present in the new environment so that when a
user logs into Hue, they can access their respective workflows.