Known Issues in Apache Impala
Learn about the known issues in Impala, the impact or changes to functionality, and the workarounds.
- CDPD-60862: Rolling restart fails during ZDU when DDL operations are in progress
- During a Zero Downtime Upgrade (ZDU), the rolling restart of services that support Data Definition Language (DDL) statements might fail if DDL operations are in progress during the upgrade. As a result, ensure that you do not run DDL statements during ZDU.
The following services support DDL statements:
- Impala
- Hive – using HiveQL
- Spark – using SparkSQL
- HBase
- Phoenix
- Kafka
Data Manipulation Language (DML) statements are not impacted and can be used during ZDU (see the example below). Following the successful upgrade, you can resume running DDL statements.
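For illustration only (table and column names are hypothetical), the following sketch contrasts DDL statements to defer with DML statements that remain safe during ZDU:

```sql
-- DDL: defer these until the upgrade completes.
CREATE TABLE sales (id INT, amount DECIMAL(10,2));
ALTER TABLE sales ADD COLUMNS (region STRING);

-- DML: safe to run during ZDU.
INSERT INTO sales VALUES (1, 10.00);
```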
- CDPD-60489: Jackson-dataformat-yaml 2.12.7 and Snakeyaml 2.0 are not compatible.
- You must not use Jackson-dataformat-yaml through the platform for YAML parsing.
- CDPD-59625: Impala shell in RHEL 9 with Python 2 as default does not work
- If you try to run impala-shell on RHEL 9 with the default Python executable in PATH set to Python 2, it fails because RHEL 9 is compatible only with Python 3.
- CDPD-42958: After upgrading from CDH 6.x to CDH 7.1.9, under certain conditions you cannot insert data into a table
- Under the following conditions, after upgrading from CDH 6.x to CDH 7.1.9, you cannot insert data into a table from Impala (a reproduction sketch follows the list):
- On CDH 6.x, you created a database with Impala in a user-specified HDFS location.
- Using Hive, you then created a table in the database.
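A minimal reproduction sketch of those conditions, with hypothetical database, table, and path names:

```sql
-- Step 1: on CDH 6.x, from impala-shell, create a database at a user-specified HDFS location.
CREATE DATABASE sales_db LOCATION '/user/example/sales_db';

-- Step 2: from Hive (for example, through Beeline), create a table in that database.
CREATE TABLE sales_db.orders (id INT, amount DECIMAL(10,2));

-- After the upgrade to CDH 7.1.9, an INSERT into sales_db.orders from Impala can fail.
```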
- Impala cannot update a table if the 'external.table.purge' property is not set to true
- Impala cannot update a table using DDL statements if the 'external.table.purge' property is FALSE. ALTER TABLE statements return success with no changes to the table.
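A minimal sketch for inspecting and enabling the property; the table name is a placeholder, and if Impala's ALTER TABLE does not take effect on such a table, the same statement can be run from Hive:

```sql
-- Inspect the current table properties, including external.table.purge.
DESCRIBE FORMATTED my_table;

-- Set the property so that Impala DDL statements can modify the table.
ALTER TABLE my_table SET TBLPROPERTIES ('external.table.purge'='true');
```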
- Impala's known limitation when querying compacted tables
- When the compaction process deletes the files for a table from the underlying HDFS location, the Impala service does not detect the changes because compactions do not allocate new write IDs. When the same table is queried from Impala, it throws a 'File does not exist' exception that looks something like this:
Query Status: Disk I/O error on <node>:22000: Failed to open HDFS file hdfs://nameservice1/warehouse/tablespace/managed/hive/<database>/<table>/xxxxx Error(2): No such file or directory Root cause: RemoteException: File does not exist: /warehouse/tablespace/managed/hive/<database>/<table>/xxxx
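A hedged sketch, assuming the stale file references can be cleared by reloading the table's metadata (database and table names are placeholders):

```sql
-- Reload the file and block metadata for the compacted table.
REFRESH my_database.my_table;

-- If the error persists, force a full metadata reload for the table.
INVALIDATE METADATA my_database.my_table;
```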
- CDPD-28431: Intermittent errors might occur when the Impala web UI is accessed from multiple Knox nodes.
- You must use a single Knox node to access the Impala web UI.
- Impala API calls through Knox require configuration if the Knox customized Kerberos principal name is a default service user name
- To access Impala API calls through Knox when the Knox customized Kerberos principal name is a default service user name, configure "authorized_proxy_user_config" by clicking Clusters > Impala > Configuration. Include the Knox customized Kerberos principal name in the comma-separated list of values as "<knox_custom_kerberos_principal_name>=*", where <knox_custom_kerberos_principal_name> is the value of the Kerberos Principal in the Knox service. To display this value, select Clusters > Knox > Configuration and search for Kerberos Principal.
- CDPD-21828: Multiple permission assignment through grant is not working
- None
- Problem configuring masking on tables using Ranger
- The following Knowledge Base article describes the behavior when you configure masking on tables using Ranger. This configuration works for Hive, but breaks queries in some scenarios for Impala.
- IMPALA-11871: INSERT statement does not respect Ranger policies for HDFS
- In a cluster with Ranger authorization (and with legacy catalog mode), even if you provide RWX on cm_hdfs -> all-path for the user impala, inserting into a table whose HDFS POSIX permissions happen to exclude impala access results in: "AnalysisException: Unable to INSERT into target table (default.t1) because Impala does not have WRITE access to HDFS location: hdfs://XXXXXXXXXXXX"
- IMPALA-532: Impala should tolerate bad locale settings
- If the LC_* environment variables specify an unsupported locale, Impala does not start.
- Avro Scanner fails to parse some schemas
- The default value in an Avro schema must match the type of the first union type. For example, if the default value is null, then the first type in the UNION must be "null".
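A minimal sketch of a conforming schema, embedded through the avro.schema.literal table property (all names are hypothetical):

```sql
-- The "comment" field defaults to null, so "null" is listed first in its union type.
CREATE TABLE avro_demo
STORED AS AVRO
TBLPROPERTIES ('avro.schema.literal'='{
  "type": "record",
  "name": "avro_demo_record",
  "fields": [
    {"name": "id", "type": "int"},
    {"name": "comment", "type": ["null", "string"], "default": null}
  ]
}');
```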
- IMPALA-691: Process mem limit does not account for the JVM's memory usage
- Some memory allocated by the JVM used internally by Impala is not counted against the memory limit for the impalad daemon.
- IMPALA-1024: Impala BE cannot parse Avro schema that contains a trailing semi-colon
- If an Avro table has a schema definition with a trailing semicolon, Impala encounters an error when the table is queried.
- IMPALA-1652: Incorrect results with basic predicate on CHAR typed column
- When comparing a CHAR column value to a string literal, the literal value is not blank-padded and so the comparison might fail when it should match.
- IMPALA-1821: Casting scenarios with invalid/inconsistent results
- Using a CAST() function to convert large literal values to smaller types, or to convert special values such as NaN or Inf, produces values not consistent with other database systems. This could lead to unexpected results from queries.
- IMPALA-2005: A failed CTAS does not drop the table if the insert fails
- If a CREATE TABLE AS SELECT operation successfully creates the target table but an error occurs while querying the source table or copying the data, the new table is left behind rather than being dropped.
- IMPALA-2422: % escaping does not work correctly when it occurs at the end of a LIKE clause
- If the final character in the RHS argument of a LIKE operator is an escaped \% character, it does not match a % final character of the LHS argument.
- IMPALA-2603: Crash: impala::Coordinator::ValidateCollectionSlots
- A query could encounter a serious error if it includes multiple nested levels of INNER JOIN clauses involving subqueries.
- IMPALA-3094: Incorrect result due to constant evaluation in query with outer join
- IMPALA-3509: Breakpad minidumps can be very large when the thread count is high
- The size of the breakpad minidump files grows linearly with the number of threads. By default, each thread adds 8 KB to the minidump size. Minidump files could consume significant disk space when the daemons have a high number of threads.
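For example, with the default 8 KB per thread, a daemon running 20,000 threads (a hypothetical count) would produce minidumps of roughly 20,000 × 8 KB ≈ 156 MB each.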
- IMPALA-4978: Impala requires FQDN from hostname command on Kerberized clusters
- The method Impala uses to retrieve the host name while constructing the Kerberos principal is the gethostname() system call. This function might not always return the fully qualified domain name, depending on the network configuration. If the daemons cannot determine the FQDN, Impala does not start on a Kerberized cluster.
- IMPALA-6671: Metadata operations block read-only operations on unrelated tables
- Metadata operations that change the state of a table, like COMPUTE STATS or ALTER RECOVER PARTITIONS, may delay metadata propagation of unrelated unloaded tables triggered by statements like DESCRIBE or SELECT queries.
- IMPALA-7072: Impala does not support Heimdal Kerberos
- None
- OPSAPS-46641: A single parameter exists in Cloudera Manager for specifying the Impala Daemon Load Balancer. Because BDR and Hue need to use different ports when connecting to the load balancer, it is not possible to configure the load balancer value so that BDR and Hue will work correctly in the same cluster.
- The workaround is to use the load balancer configuration either without a port specification, or with the Beeswax port: this configures BDR. To configure Hue, use the "Hue Server Advanced Configuration Snippet (Safety Valve) for impalad_flags" to specify the load balancer address with the HiveServer2 port.
- CDPD-28139: Set spark.hadoop.hive.stats.autogather to false by default
- As an Impala user, if you submit a query against a table containing data ingested using Spark and you are concerned about the quality of the query plan, you must run COMPUTE STATS against such a table after the ETL operation in any case, because the numRows value created by Spark could be incorrect. The other statistics computed by COMPUTE STATS, such as the Number of Distinct Values (NDV) and the NULL count, are also needed for good selectivity estimates.
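For illustration (the table name is a placeholder), recomputing statistics after a Spark ETL run:

```sql
-- Recompute row counts, NDV, and NULL counts so the planner has reliable estimates.
COMPUTE STATS sales_db.orders;
```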
- IMPALA-6841
- IMPALA-635