Review the list of Hive issues that are resolved in Cloudera Runtime
7.3.1, its service packs and cumulative hotfixes.
Cloudera Runtime 7.3.1.500 SP3:
- CDPD-83511/CDPD-81079: Metastore did not enforce maximum Thrift
message size
- 7.3.1.500
- The Metastore server always used the default 100 MB Thrift
message size, even if a higher limit was set on the client. Large client requests caused
silent connection drops and unclear exceptions.
- The issue was addressed by applying the configured maximum
Thrift message size on the server, ensuring consistent behavior and avoiding unexpected
disconnections.
Apache Jira: HIVE-28824
- CDPD-85632: Wrong results when CASE expressions have function
calls referencing CHAR type expressions or columns
- 7.3.1.500
- Queries using CASE expressions with nested function calls (such
as UPPER()) having CHAR type expressions or columns as parameters returned
incorrect results due to type mismatches and whitespace handling during execution.
Example:
case upper(col1) when 'A' then 'OK' else 'N/A' end, where col1 is a CHAR type column.
- The issue was addressed by ensuring consistent type casting
during query planning so that CHAR values are compared correctly after conversion.
Apache Jira: HIVE-28792
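The padding behavior behind this bug can be illustrated with a small Python sketch. This is illustrative only, not Hive code: it assumes a CHAR(n) value is stored blank-padded, so that once a function such as UPPER() converts it to a plain string, the trailing spaces survive and break the equality check.

```python
# Hypothetical illustration (not Hive source) of why a CHAR column breaks
# CASE comparisons when wrapped in a function call such as UPPER().
# A CHAR(5) value 'a' is stored blank-padded as 'a    '; CHAR-to-CHAR
# comparison ignores the padding, but after UPPER() the value is a plain
# string and 'A    ' != 'A'.

def char_store(value: str, length: int) -> str:
    """Simulate CHAR(length) storage: blank-padded to the declared length."""
    return value.ljust(length)

def naive_case(col1: str) -> str:
    """CASE upper(col1) WHEN 'A' THEN 'OK' ELSE 'N/A' END, before the fix."""
    return 'OK' if col1.upper() == 'A' else 'N/A'

def fixed_case(col1: str) -> str:
    """With consistent casting, padding is stripped before the comparison."""
    return 'OK' if col1.upper().rstrip() == 'A' else 'N/A'

stored = char_store('a', 5)   # 'a    '
print(naive_case(stored))     # N/A  (the wrong result described above)
print(fixed_case(stored))     # OK
```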
- CDPD-85631: Jetty version
- 7.3.1.500
- The Jetty version was upgraded to 9.4.57.v20241219 and the
mina-core package was excluded to fix CVEs.
- CDPD-84566: Stable Oozie Hive actions for DML operations
- 7.3.1.500
- Oozie Hive actions for DML operations previously failed because
a Tez session was unnecessarily started during Hive CLI startup. The Oozie action would
complete and delete temporary directories before the Tez session could finish, causing
the session to fail with a FileNotFoundException.
- This issue is now resolved by adding a setting to disable the
premature opening of a Tez session during Hive CLI startup. This prevents the
unnecessary session from failing and ensures Oozie Hive actions for DML operations
complete successfully.
Apache Jira: HIVE-27023
- CDPD-84564: Improved query error reporting for analysis
- 7.3.1.500
- Query error messages were previously not saved at the driver
level, which made it difficult to analyze the reason for a simple query failure for
later reference or query-tracking purposes.
- This issue is now resolved by storing the query error message in
a variable within the DriverContext.
Apache Jira: HIVE-28312
- CDPD-60943: Data loss during compaction
- 7.3.1.500
- Data loss no longer occurs during compaction when Apache Ranger policies for masking or row filtering are enabled and compaction users are included in the policies. Compaction queries are now automatically excluded from all Ranger policies.
Apache Jira: HIVE-27643
- Housekeeping task to clean up local folders
- 7.3.1.500
- When a Hive LLAP daemon crashes, it can leave behind unnecessary files in the LLAP local directories, which can lead to disk overflow and cause Hive queries to fail with an invalid disk error exception.
- A new housekeeping task has been added to clean up these local folders on Hive Virtual Warehouse startup and periodically after startup. This resolves the disk overflow issue. The following properties are introduced to manage this cleanup process:
hive.llap.local.dir.cleaner.cleanup.interval: Specifies the time interval for the LocalDirCleaner service to check for stale files. Default value: 2 hours.
hive.llap.local.dir.cleaner.file.modify.time.threshold: Specifies the threshold time. Any file older than this time is deleted. Default value: 24 hours.
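The cleanup these properties control can be sketched as follows. This is a minimal Python illustration, not the actual LocalDirCleaner implementation; it assumes a flat directory of plain files, and the function name and default threshold are illustrative only.

```python
# Minimal sketch of the described housekeeping task: delete files whose
# modification time is older than a threshold, the way the LocalDirCleaner
# service does periodically for LLAP local directories.
import os
import time

def clean_stale_files(local_dir: str, threshold_seconds: float = 24 * 3600) -> list:
    """Remove regular files older than threshold_seconds; return deleted paths."""
    now = time.time()
    deleted = []
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > threshold_seconds:
            os.remove(path)
            deleted.append(path)
    return deleted
```

A real cleaner would also recurse into subdirectories and run on a timer (the cleanup.interval property above); this sketch shows only the age-based deletion rule.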
- Incremental rebuild accuracy for materialized views
- 7.3.1.500
- Improved the accuracy of determining whether an incremental rebuild is available for materialized views containing aggregate functions in their definition query.
Apache Jira: HIVE-28006
- Fix for partition metadata fetch issue during DST shift
- 7.3.1.500
- Fetching partition metadata by timestamp from Hive Metastore using Java Data Objects (JDO) no longer returns incorrect results during a daylight saving time (DST) shift in partition pruning.
Apache Jira: HIVE-27775
- Materialized view rebuild issue with delete operations
- 7.3.1.500
- Addressed data correctness issues when rebuilding a materialized view incrementally in scenarios involving delete operations in one of its source tables. This is evident when the materialized view definition query includes joins and aggregates.
Apache Jira: HIVE-27924
- CDPD-66779: Partitioned Iceberg table not getting loaded with insert select query from Hive
- 7.3.1.500
- If a partitioned table is created in Iceberg and a user attempts
to insert data from another table using an insert into ... select
query, an error occurs.
- This issue is now fixed.
- CDPD-74640: Improved query consistency and data writing for Beeline and Hive queries
- 7.3.1.500
- In concurrent workflows using Beeline, queries occasionally returned incorrect results due to non-thread-safe file handling, especially when
hive.query.results.cache.enabled was disabled. Additionally, INSERT OVERWRITE DIRECTORY operations failed to write data correctly to specified directories when query result caching was enabled.
- The issue was addressed by implementing thread-safe file handling in HiveSequenceFileInputFormat and adjusting cache handling for directory overwrite queries, ensuring reliable query results and consistent data writes in concurrent workflows.
Apache Jira: HIVE-28530, HIVE-21386, HIVE-25907
- CDPD-72985: Compatibility issue in HMS thrift struct for Hive column stats
- 7.3.1.500
- Hive4 introduced a new required "engine" field to differentiate the stats generated by different engines. This broke compatibility with clients using Hive 3 or other engines using customized thrift API, such as TrinoDB.
- The "engine" field in the Hive Metastore (HMS) API was made optional, which restores compatibility with Hive 3 clients for column stats retrieval.
Apache Jira: HIVE-27984
- CDPD-71484: Improve LLAP performance by reusing FileSystem objects across tasks
- 7.3.1.500
- Frequent closure of FileSystem objects disabled Hadoop's FileSystem cache, reducing LLAP efficiency.
- Adjusted FileSystem handling to close objects once per query and daemon rather than per task, enhancing reuse and maintaining cache functionality.
Apache Jira: HIVE-27884
- CDPD-70956: Queries over JDBC tables fail due to column types mismatch
- 7.3.1.500
- Queries over JDBC tables fail at runtime when there is a mismatch between the Hive type and the database type for some columns and the Cost-Based Optimizer (CBO) is not used.
Apache Jira: HIVE-28285
- DWX-17619: HPL/SQL built-in function unexpected output
- 7.3.1.500
- Certain HPL/SQL built-in functions, such as
lower and trim, were not functioning correctly when used in INSERT statements. This issue occurred after a previous fix that removed UDFs required for HPL/SQL's local and offline modes.
- The issue was resolved by re-adding the necessary UDFs to HPL/SQL to ensure compatibility with local and offline modes. Related issues with these UDFs were also fixed to restore their functionality in
INSERT and other SELECT statements.
Apache Jira: HIVE-28143
- CDPD-74205: SharedWorkOptimizer leaves residual unused operator tree
- 7.3.1.500
- The shared work optimizer left behind unused operator trees that sent dynamic partition pruning (DPP) events to non-existing table scan operators. This caused errors during query execution, such as 'No work found for tablescan TS[53]', disrupting workflows and query processing.
- The issue was fixed by removing any leftover operator trees that sent dynamic partition pruning events to unknown operators during the optimization process. The fix ensures smoother query execution and prevents such errors.
Apache Jira: HIVE-28484
- CDPD-73269: RexLiteral to ExprNode conversion issue with empty string
- 7.3.1.500
- The conversion from
RexLiteral to ExprNode failed when the literal was an empty string. This issue, introduced in HIVE-23892, caused the Cost-Based Optimizer (CBO) to fail for queries containing filters with empty literals.
- The issue was fixed by ensuring that an empty literal in the filter still produces a valid
RexNode during the conversion process. This fix prevents CBO failures for such queries.
Apache Jira: HIVE-28431
- CDPD-44551: Avro table import or download fails with ODBC driver due to missing property
- 7.3.1.500
- The absence of the
metastore.storage.schema.reader.impl property caused Avro table import or download failures in Cloudera 7.1.7 when using the ODBC driver. The issue was addressed by ensuring that this property is set correctly, allowing Avro table imports and downloads to succeed.
Apache Jira: HIVE-26952
- CDPD-72605: Optimized partition authorization in HiveMetaStore to reduce overhead
- 7.3.1.500
- The
add_partitions() API in HiveMetaStore was authorizing both new and existing partitions, leading to unnecessary processing and increased load on the authorization service.
- The issue was addressed by modifying the
add_partitions() API to authorize only new partitions, improving performance and reducing authorization overhead.
Apache Jira: HIVE-28371
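The shape of this optimization can be sketched in Python. The helper and the partition naming below are illustrative, not the Hive Metastore implementation: the point is simply that only partitions absent from the Metastore need to pass through authorization.

```python
# Hypothetical sketch of the add_partitions() optimization: authorize only
# the partitions that do not already exist, instead of every partition in
# the request.

def partitions_to_authorize(requested, existing):
    """Return only the new partitions, i.e. the ones needing authorization."""
    existing_set = set(existing)
    return [p for p in requested if p not in existing_set]

requested = ['dt=2024-01-01', 'dt=2024-01-02', 'dt=2024-01-03']
existing = ['dt=2024-01-01']
print(partitions_to_authorize(requested, existing))
# ['dt=2024-01-02', 'dt=2024-01-03']
```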
- CDPD-73046: Removal of duplicated proto reader/writer classes
- 7.3.1.500
- Duplicate Java files for proto reader/writer classes were present in Hive, which were already available in Apache Tez. These duplicates caused redundancy and missed improvements introduced in Tez, such as those from TEZ-4296, TEZ-4105, and TEZ-4305.
- The issue was fixed by removing the duplicated proto reader/writer classes from Hive, ensuring the use of the improved versions available in Apache Tez.
Apache Jira: HIVE-28028
- CDPD-74539: MariaDB falls back to MySQL in Hive
- 7.3.1.500
- Hive had errors in supporting MariaDB.
- The issue was addressed by making MariaDB automatically fall back to MySQL.
- CDPD-77713: Deadlock occurs in TxnStoreMutex when acquiring lock
- 7.3.1.500
- Deadlocks occurred in Hive Metastore due to MySQL’s
REPEATABLE-READ isolation level, which caused locking conflicts during housekeeping tasks.
- The issue was addressed by restoring the TxnHandler's isolation level to
READ-COMMITTED.
Apache Jira: HIVE-28669
- CDPD-77905: MRCompactor causes data loss during major compaction
- 7.3.1.500
- During a major compaction, records matching certain conditions were lost due to incorrect handling in MRCompactor.
- The issue was addressed by ensuring that all records are correctly preserved during major compaction.
Apache Jira: HIVE-28700
- CDPD-75656: OOM when compiling query with many GROUP BY columns aliased multiple times
- 7.3.1.500
- HiveServer2 became unresponsive and crashed with an
OutOfMemoryError when compiling queries that include GROUP BY columns aliased multiple times in the SELECT clause.
- The issue was addressed by customizing the metadata handler to limit the growth of unique key derivation.
Apache Jira: HIVE-28582
- DWX-20754: Error while running LATERAL VIEW query on non-native tables
- 7.3.1.500
- When running
LATERAL VIEW queries on non-native Iceberg tables, users encountered the error org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Invalid column reference 'BLOCK__OFFSET__INSIDE__FILE'.
- This issue occurred because Iceberg tables were incorrectly classified as native tables, which led to the addition of incorrect virtual columns in the
RowResolver. This issue has been resolved, and LATERAL VIEW queries on non-native Iceberg tables should now work as expected.
Apache Jira: HIVE-28938
Cloudera Runtime 7.3.1.400 SP2:
- CDPD-81766: Database Setting Consistency in Spark3 HWC
- 7.3.1.400
- Spark3's Hive Warehouse Connector (HWC) did not consistently
apply the database setting when validating if a table existed during append mode writes.
This led to inconsistencies where the database setting was not used for validation, even
though data was correctly written to the intended database.
- This issue was resolved, ensuring the database setting is
consistently applied during table validation in Spark3 HWC and preventing the prior
inconsistencies.
- CDPD-81122: Enhanced Concurrent Access in HWC Secure Mode
- 7.3.1.400
- Spark applications running multiple concurrent queries in HWC's
SECURE_ACCESS mode encountered failures and correctness problems.
This happened because the system faced difficulties when generating temporary table
names and managing staging directories simultaneously for multiple reads.
- This issue was addressed by improving the handling of concurrent
operations within HWC's
SECURE_ACCESS mode.
- CDPD-81453: Efficient Handling of Timed-Out Transactions in
Replication
- 7.3.1.400
- Hive replication did not log transactions that timed out as
'ABORTED'. This caused these transactions to remain on the target cluster for an
extended period.
- This issue was resolved by ensuring that transactions aborted
due to timeout are now properly logged. This allows their abort event to be replicated,
leading to prompt removal from the target environment.
Apache Jira: HIVE-27797
- CDPD-81420: Table Filtering for Ranger Policies
- 7.3.1.400
- Ownership details for tables were not correctly carried through
the system during filtering, which prevented Ranger from applying policies based on who
owned the tables.
- This issue was resolved by ensuring that ownership information
is now consistently included when tables are filtered. This allows Ranger to accurately
enforce policies based on table ownership, leading to improved performance when
filtering databases and tables.
- CDPD-77626: Improving performance of ALTER PARTITION operations
using direct SQL
- 7.3.1.400
- Running
ALTER PARTITION operations using direct
SQL failed for some databases. The failures occurred due to missing data type
conversions for CLOB and Boolean fields, causing the system to fall back to slower ORM
(Object Relational Mapping) paths.
- The issue was addressed by adding proper handling for CLOB and
Boolean type conversions. With this fix,
ALTER PARTITION operations now
run successfully using direct SQL.
Apache Jira: HIVE-28271, HIVE-27530
Cloudera Runtime 7.3.1.300 SP1 CHF 1
- CDPD-64950: Deadlock during Spark shutdown due to duplicate
transaction cleanup
- 7.3.1.300
- During Spark application shutdown, transactions were being
closed by two separate mechanisms at the same time. This parallel cleanup could result
in a deadlock, especially when the heartbeat interval was set to a low value.
- The issue was addressed by ensuring that transaction cleanup
occurs through a single mechanism during shutdown, avoiding concurrent execution and
potential deadlocks.
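The single-mechanism cleanup the fix relies on can be sketched with a compare-and-set flag. This is an illustrative Python pattern, not the Hive/Spark code: two shutdown paths may race to close open transactions, but only the first to flip the flag performs the cleanup.

```python
# Illustrative single-owner cleanup: whichever shutdown path wins the
# compare-and-set does the work; the loser returns without touching the
# transactions, so concurrent cleanup (and the deadlock) cannot occur.
import threading

class TxnCleanup:
    def __init__(self):
        self._done = False
        self._lock = threading.Lock()
        self.runs = 0

    def close_transactions(self):
        with self._lock:
            if self._done:      # another mechanism already cleaned up
                return False
            self._done = True
        self.runs += 1          # the actual cleanup happens exactly once
        return True

cleanup = TxnCleanup()
threads = [threading.Thread(target=cleanup.close_transactions) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(cleanup.runs)  # 1
```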
- CDPD-78334: Support custom delimiter in
SkippingTextInputFormat
- 7.3.1.300
- Queries like
SELECT COUNT(*) returned wrong
results when a custom record delimiter was used. The input file was read as a single
line because the custom delimiter was ignored.
- The issue was addressed by ensuring that the custom record
delimiter is considered while reading the file, so that queries work as
expected.
Apache Jira: HIVE-27498
- CDPD-79237: Hive Metastore schema upgrade fails due to NULL
values
- 7.3.1.300
- Upgrading from CDP Private Cloud Base 7.1.7.2052 to 7.1.9.1010
fails during the Hive Metastore schema upgrade. The upgrade script issues the following
command:
ALTER TABLE "DBS" ALTER COLUMN "TYPE" SET DEFAULT 'NATIVE', ALTER COLUMN "TYPE" SET NOT NULL;
This fails because the DBS.TYPE column contains NULL values. These NULLs are
introduced by canary databases created by Cloudera Manager, which
insert entries in the HMS database without setting the TYPE.
- The issue was addressed by ensuring that canary databases
created by Cloudera Manager correctly populate the TYPE column in the
DBS table, preventing NULL values and allowing the schema upgrade to
proceed.
Cloudera Runtime 7.3.1.200 SP1
- CDPD-78342/CDPD-72605: Optimized partition authorization in
HiveMetaStore to reduce overhead
- 7.3.1.200
- The
add_partitions() API in
HiveMetastore was authorizing both new and existing partitions, leading to unnecessary
processing and increased load on the authorization service.
- The issue was addressed by modifying the
add_partitions() API to authorize only new partitions, improving
performance and reducing authorization overhead.
- CDPD-77990: Upgraded MySQL Connector/J to 8.2.0 to fix
CVE-2023-22102
- 7.3.1.200
- The existing MySQL Connector/J version was vulnerable to
CVE-2023-22102.
- The issue was addressed by upgrading mysql-connector-j to
version 8.2.0 in packaging/src/docker/Dockerfile.
- CDPD-62654/CDPD-77985: Hive Metastore now sends a single
AlterPartitionEvent for bulk partition updates
- 7.3.1.200
- Hive Metastore previously sent an individual
AlterPartitionEvent for each altered partition, leading to inefficiencies and pressure
on the backend database.
- The issue was addressed by modifying Hive Metastore to send a
single AlterPartitionEvent containing a list of partitions for bulk updates. Set
hive.metastore.alterPartitions.notification.v2.enabled to true to turn on
this feature.
Apache Jira: HIVE-27746
- CDPD-73669: Secondary pool connection starvation caused by
updatePartitionColumnStatisticsInBatch API
- 7.3.1.200
- Hive queries intermittently failed with
Connection is
not available, request timed out errors. The issue occurred because the
updatePartitionColumnStatisticsInBatch method in ObjectStore used
connections from the secondary pool, which had a pool size of only two, leading to
connection starvation.
- The fix ensures that the
updatePartitionColumnStatisticsInBatch API now requests connections
from the primary connection pool, preventing connection starvation in the secondary pool.
Apache Jira:
HIVE-28456
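The starvation described here can be reproduced with a toy bounded pool. This is a minimal sketch under stated assumptions (a queue-backed pool of size two, as in the secondary pool above); the class and messages are illustrative, not Hive's connection pooling.

```python
# Minimal model of the secondary-pool starvation: with only two
# connections, a third concurrent borrow times out, mirroring the
# "Connection is not available, request timed out" failures.
import queue

class ConnectionPool:
    def __init__(self, size: int):
        self._conns = queue.Queue(maxsize=size)
        for i in range(size):
            self._conns.put(f"conn-{i}")

    def borrow(self, timeout: float):
        try:
            return self._conns.get(timeout=timeout)
        except queue.Empty:
            raise TimeoutError("Connection is not available, request timed out")

secondary = ConnectionPool(size=2)
secondary.borrow(timeout=0.1)
secondary.borrow(timeout=0.1)
try:
    secondary.borrow(timeout=0.1)   # third request starves
except TimeoutError as e:
    print(e)
```

Routing long-running requests such as updatePartitionColumnStatisticsInBatch to the larger primary pool, as the fix does, avoids exhausting the two-connection secondary pool.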
- CDPD-61676/CDPD-78341: Drop renamed external table fails due to
missing update in PART_COL_STATS
- 7.3.1.200
- When hive.metastore.try.direct.sql.ddl is set to false, dropping
an external partitioned table after renaming it fails due to a foreign key constraint
error in the
PART_COL_STATS table. The table name in
PART_COL_STATS is not updated during the rename, causing issues
during deletion.
- The issue was addressed by ensuring that the
PART_COL_STATS table is updated during the rename operation, making
partition column statistics usable after the rename and allowing the table to be dropped
successfully.
Apache Jira: HIVE-27539
- CDPD-79469: Selecting data from a bucketed table with a decimal
column throws NPE
- 7.3.1.200
- When hive.tez.bucket.pruning is enabled,
selecting data from a bucketed table with a decimal column type fails with a
NullPointerException. The issue occurs due to a mismatch in decimal
precision and scale while determining the bucket number, causing an overflow and
returning null.
- The issue was addressed by ensuring that the correct decimal
type information is used from the actual field object inspector instead of the default
type info, preventing the overflow and
NullPointerException.
Apache Jira: HIVE-28076
- CDPD-74095: Connection timeout while inserting Hive partitions
due to secondary connection pool limitation
- 7.3.1.200
- Since HIVE-26419, Hive uses a secondary connection pool (size 2)
for schema and value generation. However, this pool also handles nontransactional
connections, causing the
updatePartitionColumnStatisticsInBatch request
to fail with a Connection is not available, request timed out error
when the pool reaches its limit during slow insert or update operations.
- The issue was addressed by ensuring that time-consuming API
requests use the primary connection pool instead of the secondary pool, preventing
connection exhaustion.
Apache Jira: HIVE-28456
- CDPD-78331: HPLSQL built-in functions fail in insert
statement
- 7.3.1.200
- After the HIVE-27492 fix, some HPLSQL built-in functions like
trim and lower stopped working in INSERT statements. This happened because UDFs already
present in Hive were removed to avoid duplication, but HPLSQL's local and offline modes
still required them.
- The issue was addressed by restoring the removed UDFs in HPLSQL
and fixing related function issues to ensure compatibility in all execution
modes.
Apache Jira: HIVE-28143
- CDPD-78343: Syntax error in HPL/SQL error handling
- 7.3.1.200
- In HPL/SQL, setting hplsql.onerror using
the SET command resulted in a syntax error because the grammar file (Hplsql.g4) only
allowed identifiers without dots (.).
- The issue was addressed by updating the grammar to support
qualified identifiers, allowing the SET command to accept dot (.) notation.
Example: EXECUTE 'SET hive.merge.split.update=true';
Apache Jira:
HIVE-28253
- CDPD-78330: HPL/SQL built-in functions like sysdate not
working
- 7.3.1.200
- HPL/SQL built-in functions that are not available in Hive, such
as sysdate, were failing with a SemanticException when used in queries. Only functions
present in both HPL/SQL and Hive were working.
- The issue was addressed by modifying the query parsing logic.
Now, HPL/SQL built-in functions are executed directly, and only functions also available
in Hive are forwarded to Hive for execution.
Apache Jira: HIVE-27492
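The routing rule the fix introduces can be modeled in a few lines of Python. The registries below are illustrative stand-ins, not the real HPL/SQL function tables: the point is that a name found in the HPL/SQL built-in table runs locally, and anything else is forwarded to Hive.

```python
# Hypothetical model of the fixed dispatch: HPL/SQL built-ins that Hive
# does not know (such as sysdate) execute directly, while all other
# functions are forwarded to Hive for execution.
import datetime

HPLSQL_BUILTINS = {
    "sysdate": lambda: datetime.datetime.now().strftime("%Y-%m-%d"),
}

def execute_function(name, forward_to_hive):
    if name in HPLSQL_BUILTINS:
        return HPLSQL_BUILTINS[name]()   # executed directly by HPL/SQL
    return forward_to_hive(name)         # delegated to Hive

# A stand-in for Hive execution:
print(execute_function("lower", lambda name: f"hive:{name}"))  # hive:lower
```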
- CDPD-78345: Signalling CONDITION HANDLER is not working in
HPLSQL
- 7.3.1.200
- The user-defined
CONDITION HANDLERs in HPLSQL
are not being triggered as expected. Instead of running the handlers, the system only
logs the conditions, so the handlers aren't available when needed.
- The issue was addressed by ensuring that user-defined condition
handlers are properly registered and invoked when a SIGNAL statement raises a
corresponding condition.
Apache Jira: HIVE-28215
- CDPD-78333: EXECUTE IMMEDIATE throwing ClassCastException in
HPL/SQL
- 7.3.1.200
- When executing a
select count(*) query, it
returns a long value, but HPLSQL expects a string. This mismatch causes the following
error:Caused by: java.lang.ClassCastException: class java.lang.Long cannot be cast to class java.lang.String
at org.apache.hive.service.cli.operation.hplsql.HplSqlQueryExecutor$OperationRowResult.get
- The issue was addressed by converting the result to a string
when the expected type is a string.
Apache Jira: HIVE-28215
- CDPD-79844: EXECUTE IMMEDIATE displaying error despite
successful data load
- 7.3.1.200
- Running
EXECUTE IMMEDIATE 'LOAD DATA INPATH
''/tmp/test.txt'' OVERWRITE INTO TABLE test_table' displayed an error on the
console, even though the data was successfully loaded into the table. This occurred
because HPL/SQL attempted to check the result set metadata after execution, but LOAD
DATA queries do not return a result set, leading to a
NullPointerException.
- The issue was addressed by ensuring that result set metadata is
accessed only when a result set is present.
Apache Jira: HIVE-28766
- CDPD-67033: HWC for Spark 3 compatibility with Spark 3.5
- 7.3.1.200
- Spark 3.5, based on Cloudera on cloud 7.2.18 libraries, caused a failure in the
HWC for Spark 3 build. Canary builds indicated that this change broke compatibility.
- The issue was addressed by updating HWC for Spark 3 to align
with Spark 3.5 changes and ensuring compatibility with Cloudera on cloud 7.2.18 dependencies.
- CDPD-80097: Datahub recreation fails due to Hive Metastore
schema validation error
- 7.3.1.200
- Datahub recreation on Azure fails because Hive Metastore schema
validation cannot retrieve the schema version due to insufficient permissions on the
VERSION table.
- This issue is now fixed.
Cloudera Runtime 7.3.1.100 CHF 1
- CDPD-74456: Spark3 hwc.setDatabase() writes to the correct
database
- 7.3.1.100
- When setting the database using hive.setDatabase("DB") and
performing CREATE TABLE or write operations with Hive Warehouse Connector (HWC), the
operations were executed in the default database.
- This issue is now resolved, and the operations are executed in
the correct database.
- CDPD-74373: Timestamp displays incorrectly in Spark HWC with
JDBC_READER mode
- 7.3.1.100
- When using Spark HWC with JDBC_READER mode, timestamps were
displayed incorrectly. For example, 0001-01-01 00:00:00.0 was interpreted as 0000-12-30
00:00:00.
- This issue is addressed by correcting timestamp handling in
JDBC_READER mode to ensure accurate representation of timestamps before the Gregorian
calendar was adopted.
- CDPD-76932: Incorrect query results due to TableScan merge in
shared work optimizer
- 7.3.1.100
- During shared work optimization, TableScan operators were merged
even when they had different Dynamic Partition Pruning (DPP) parent operators. This
caused the filter from the missing DPP operator to be ignored, leading to incorrect
query results.
- This issue is resolved by modifying the shared work optimizer to
check the parents of TableScan operators and skip merging when DPP edges
differ.
Apache Jira: HIVE-26968
- CDPD-78115: Thread safety issue in
HiveSequenceFileInputFormat
- 7.3.1.100
- Concurrent queries returned incorrect results when query result
caching was disabled due to a thread safety issue in HiveSequenceFileInputFormat.
- This issue is now resolved; files are set in a thread-safe
manner to ensure correct query results.
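The failure mode and fix can be sketched with thread-local storage. This is an illustrative Python pattern, not the actual HiveSequenceFileInputFormat code: storing the file list in a shared instance field lets one concurrent query overwrite another's files, while thread-local storage keeps each query's files isolated.

```python
# Illustrative thread-safety fix: each query thread keeps its own file
# list via threading.local, so concurrent queries cannot clobber each
# other's state on a shared input-format instance.
import threading

class ThreadSafeInputFormat:
    def __init__(self):
        self._local = threading.local()

    def set_files(self, files):
        self._local.files = list(files)   # per-thread copy

    def get_files(self):
        return self._local.files

fmt = ThreadSafeInputFormat()
results = {}

def run_query(name, files):
    fmt.set_files(files)
    results[name] = fmt.get_files()       # sees only its own files

t1 = threading.Thread(target=run_query, args=("q1", ["a.seq"]))
t2 = threading.Thread(target=run_query, args=("q2", ["b.seq"]))
t1.start(); t2.start(); t1.join(); t2.join()
print(results["q1"], results["q2"])  # ['a.seq'] ['b.seq']
```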
- CDPD-78129: Materialized view rebuild failure due to stale
locks
- 7.3.1.100
- If a materialized view rebuild is aborted, the lock entry in the
materialization_rebuild_locks table is not removed. This prevents subsequent rebuilds of
the same materialized view, causing the following error:
Error: Error while compiling statement: FAILED: SemanticException
org.apache.hadoop.hive.ql.parse.SemanticException: Another process is rebuilding the materialized view view_name (state=42000, code=40000)
- The fix ensures that the materialized view rebuild lock is
removed when a rebuild transaction is aborted. The
MaterializationRebuildLockHeartbeater now checks the transaction
state before heartbeating, allowing outdated locks to be cleaned up properly.
Apache Jira: HIVE-28416
- CDPD-78166: Residual operator tree in shared work optimizer
causes dynamic partition pruning errors
- 7.3.1.100
- Shared work optimizer left unused operator trees that sent
dynamic partition pruning events to non-existent operators. This caused query failures
when processing these events, leading to errors in building the physical operator
tree.
- The issue was addressed by ensuring that any residual unused
operator trees are removed during the operator merge process in shared work optimizer,
preventing invalid dynamic partition pruning event processing.
Apache Jira:
HIVE-28484
- CDPD-78113: Conversion failure from RexLiteral to ExprNode for
empty strings
- 7.3.1.100
- Conversion from RexLiteral to ExprNode failed when the literal
was an empty string, causing the cost-based optimizer to fail for queries.
- The issue was addressed by ensuring that an empty string literal
in a filter produces a valid RexNode, preventing cost-based optimizer
failures.
Apache Jira: HIVE-28431
Cloudera Runtime 7.3.1
- CDPD-57121: ThreadPoolExecutorWithOomHook handling
OutOfMemoryError
- 7.3.1
- The
ThreadPoolExecutorWithOomHook wasn't
effectively handling OutOfMemoryError when executing tasks, as the
exception was wrapped in ExecutionException, making it harder to
detect.
- The issue was fixed by updating ThreadPoolExecutorWithOomHook to
properly invoke OutOfMemoryError hooks and stop HiveServer2 when required.
Apache Jira: HIVE-24411, HIVE-26955, IMPALA-8518
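The hook pattern described above can be sketched in Python rather than Java. This is a hedged illustration, not the Hive implementation: a pool wrapper inspects each finished task and invokes an out-of-memory hook when the task failed with MemoryError, instead of leaving the error buried inside the future's wrapped exception.

```python
# Illustrative OOM-hook pool: a done-callback unwraps each task's failure
# and fires the hook on MemoryError, so the error is acted on rather than
# hidden inside the future (the Java analogue of ExecutionException).
from concurrent.futures import ThreadPoolExecutor

class PoolWithOomHook:
    def __init__(self, oom_hook, max_workers=2):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._oom_hook = oom_hook

    def submit(self, fn):
        future = self._pool.submit(fn)
        future.add_done_callback(self._check_oom)
        return future

    def _check_oom(self, future):
        if isinstance(future.exception(), MemoryError):
            self._oom_hook()   # e.g. stop the server, dump diagnostics

    def shutdown(self):
        self._pool.shutdown(wait=True)

def oom_task():
    raise MemoryError("simulated OOM")

events = []
pool = PoolWithOomHook(oom_hook=lambda: events.append("oom hook fired"))
pool.submit(oom_task)
pool.shutdown()        # waits until the task and its callback have finished
print(events)  # ['oom hook fired']
```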
- CDPD-31172: Hive: Intermittent ConcurrentModificationException
in HiveServer2 during mondrian testset
- Fixed an exception by using ConcurrentHashMap instead of HashMap
to avoid a race condition between threads caused by concurrent modification
of the PerfLogger endTimes/startTimes maps.