Learn about the known issues in Hive, their impact or changes to functionality, and any available workarounds.
Known Issues identified in Cloudera Runtime 7.3.1.600 SP3 CHF1
- DWX-22436: DL upgrade recovery fails due to Metastore schema
incompatibility
- 7.3.1.600
- When attempting a Data Lake (DL) upgrade recovery from
version 7.2.18.1100 to Cloudera Runtime 7.3.1.500, the process fails
because the Hive Metastore schema versions are incompatible. The error indicates a
mismatch between the Hive version (3.1.3000.7.3.1.500-182) and the database schema
version (3.1.3000.7.2.18.0-Update2). As a result, Data Lake recovery is blocked when an upgrade fails.
- Before you initiate the recovery process, manually update
the Hive Metastore schema to match the target version by using the
schematool utility.
- Obtain the Hive database password: Run the following command to retrieve the
password from the pillar configuration:
cat /srv/pillar/postgresql/postgre.sls
- Back up the existing configuration: Move the current configuration directory to a
backup location:
mv /etc/hive/conf /etc/hive/conf_backup
mkdir /etc/hive/conf
- Prepare the temporary configuration: Copy the process files to the new
configuration directory (see the note after this procedure for one way to locate the process directory):
scp /var/run/cloudera-scm-agent/process/<process-id>-hive-metastore-create-tables/* /etc/hive/conf/
- Update the connection password: Open the
/etc/hive/conf/hive-site.xml file and make the following
modifications (a sample snippet follows this procedure):
- Set the javax.jdo.option.ConnectionPassword property to
your Hive database password.
- Comment out the hadoop.security.credential.provider.path
property.
- Run the schema upgrade tool: Run the schematool to
synchronize the schema version (an optional verification command follows this procedure):
/opt/cloudera/parcels/CDH/lib/hive/bin/schematool -dbType postgres -initOrUpgradeSchema --verbose
- Restore the original configuration: Remove the temporary directory and restore
your backup:
rm -rf /etc/hive/conf
mv /etc/hive/conf_backup /etc/hive/conf
- Restart the cluster: Restart the services to initialize the Hive Metastore with
the updated schema.
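The following one-liner is a convenience sketch rather than part of the documented procedure: it assumes the process directory naming shown in the Prepare the temporary configuration step and picks the most recently modified match:
ls -dt /var/run/cloudera-scm-agent/process/*hive-metastore-create-tables | head -1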
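The hive-site.xml edits in the Update the connection password step might look like the following sketch. The password value is a placeholder for your actual Hive database password, and only the two affected properties are shown:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>YOUR_HIVE_DB_PASSWORD</value>
</property>
<!-- Commented out for the duration of the schema upgrade:
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>...</value>
</property>
-->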
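After the schematool run, you can optionally confirm that the Hive and database schema versions match by using the schematool -info option; this check is a suggestion, not a documented recovery step:
/opt/cloudera/parcels/CDH/lib/hive/bin/schematool -dbType postgres -info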
Known Issues identified in Cloudera Runtime 7.3.1.500 SP3
- CDPD-88865: Unicode character support with a MySQL backend
- When a cluster's backend is a MySQL database,
CREATE TABLE statements with more than two Unicode column names can
fail (see the illustrative statement after this entry). This is a known bug, HIVE-18083: the MySQL database does not support
non-ASCII characters in column names.
- None
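As an illustration only (the table and column names below are invented), a statement of this shape, with more than two non-ASCII column names, can fail against a MySQL-backed Metastore because of HIVE-18083:
CREATE TABLE unicode_demo (
  `名前` STRING,
  `都市` STRING,
  `国` STRING
);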
Known Issues identified in Cloudera Runtime 7.3.1.400 SP2
There are no new known issues identified for Hive in this release.
Known Issues identified in Cloudera Runtime 7.3.1.300 SP1 CHF1
There are no new known issues identified for Hive in this release.
Known Issues identified in Cloudera Runtime 7.3.1.200 SP1
There are no new known issues identified for Hive in this release.
Known Issues identified in Cloudera Runtime 7.3.1.100 CHF1
The following list describes the known issues identified in this release:
- CDPD-77738: Atlas hook authorization issue causing
HiveCreateSysDb timeout
- 7.3.1.100 and higher versions
- An Atlas hook authorization error causes the
HiveCreateSysDb command to time out due to repeated retries.
- None
- CDPD-78490: HiveCreateSysDb command fails
- 7.3.1.0, 7.3.1.100, and higher versions
- Hive services fail to start because the HiveCreateSysDb command fails during its first
run.
- None
- CDPD-72605: Optimizing partition authorization in
HiveMetaStore
- 7.3.1.200
- 7.3.1.0
- The add_partitions() API in HiveMetaStore
unnecessarily authorizes both new and existing partitions, increasing processing time
and load on the authorization service.
- None
Known Issues identified in Cloudera Runtime 7.3.1
- CDPD-74680: DAG not retried after failure
- 7.3.1 and higher versions
- If the ApplicationMaster container fails while a Hive query is
running, Hive does not retry the DAG when the failure message contains diagnostic
information that includes a line break, so the query fails instead of being retried.
- None
- HiveServer2 goes into a hung state intermittently
- 7.3.1 and higher versions
- HiveServer2 can intermittently hang or crash due to heap
out-of-memory (OOM) errors triggered by the default 1 GB cache limit for fetch tasks.
This occurs with certain queries that exceed the available heap space.
- Use one of the following workarounds (a session-level sketch follows this list):
- Disable the fetch task caching feature by setting
hive.fetch.task.caching=false.
- You can adjust the
hive.fetch.task.conversion.threshold property to a lower value in
the megabyte range. The default value in Cloudera on premises 7.3.1 is 1 GB.
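As a sketch of the session-level workaround, you can set both properties from Beeline. The 256 MB threshold below is an example value, not a recommendation, and depending on your configuration allowlist these properties may have to be set in the service configuration instead:
SET hive.fetch.task.caching=false;
SET hive.fetch.task.conversion.threshold=268435456; -- 256 MB, expressed in bytes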