Known Issues in Apache Iceberg

Learn about the known issues in Iceberg, their impact on functionality, and any available workarounds.

CDPD-75667: Querying an Iceberg table with a TIMESTAMP_LTZ column can result in data loss
When you query an Iceberg table that has a TIMESTAMP_LTZ column, the query could result in data loss.
When creating Iceberg tables from Spark, set the following Spark configuration to avoid creating columns with the TIMESTAMP_LTZ type:
spark.sql.timestampType=TIMESTAMP_NTZ
Apache JIRA: IMPALA-13484
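As an illustration, the workaround can also be applied per session from Spark SQL before creating a table; the table and column names below are hypothetical:

```sql
-- Make the TIMESTAMP keyword resolve to TIMESTAMP_NTZ for this session
-- (session-level alternative to setting it in spark-defaults.conf).
SET spark.sql.timestampType=TIMESTAMP_NTZ;

-- Hypothetical example: event_time is now created as TIMESTAMP_NTZ
-- instead of TIMESTAMP_LTZ.
CREATE TABLE events (id BIGINT, event_time TIMESTAMP) USING iceberg;
```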
CDPD-75649: Spark-Iceberg queries fail due to a Java Virtual Machine (JVM) error
While running longevity tests on Spark-Iceberg queries, a query might fail with the following JVM error: "A fatal error has been detected by the Java Runtime Environment".
Perform the following steps to resolve the issue:
  1. From Cloudera Manager, go to Clusters > SPARK3 ON YARN > Configuration.
  2. Search for the "Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-defaults.conf" property and add the following values:
    spark.driver.extraJavaOptions=-XX:-UseAES
    spark.executor.extraJavaOptions=-XX:-UseAES
    
  3. Click Save Changes and restart the Spark 3 service for the changes to take effect.
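For a single job, the same JVM flags can be passed on the command line instead of through the safety valve; this is a sketch, and the application JAR name is hypothetical:

```
# Disable the JVM's AES intrinsics for both the driver and the executors
# (per-job equivalent of the spark-defaults.conf entries above).
spark3-submit \
  --conf spark.driver.extraJavaOptions=-XX:-UseAES \
  --conf spark.executor.extraJavaOptions=-XX:-UseAES \
  your-application.jar
```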
CDPD-72942: Unable to read Iceberg table from Hive after writing data through Apache Flink
If you create an Iceberg table with default values using Hive and insert data into the table through Apache Flink, you cannot then read the Iceberg table from Hive using the Beeline client, and the query fails with the following error:
Error while compiling statement: java.io.IOException: java.io.IOException: Cannot create an instance of InputFormat class org.apache.hadoop.mapred.FileInputFormat as specified in mapredWork!

The issue persists even after you use the ALTER TABLE statement to set the engine.hive.enabled table property to "true".

Workaround: None.
Apache JIRA: HIVE-28525
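The ALTER TABLE attempt described above takes the following form; the table name is hypothetical, and as noted, setting the property does not resolve the read failure:

```sql
-- Setting the table property does not work around the error in this case
-- (hypothetical table name).
ALTER TABLE iceberg_tbl SET TBLPROPERTIES ('engine.hive.enabled'='true');
```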
CDPD-71962: Hive cannot write to a Spark Iceberg table bucketed by date column
If you use Spark to create an Iceberg table that is bucketed by a DATE column and then try inserting into or updating the table using Hive, the query fails with the following error:
Error: Error while compiling statement: FAILED: RuntimeException org.apache.hadoop.hive.ql.exec.UDFArgumentException:  ICEBERG_BUCKET() only takes STRING/CHAR/VARCHAR/BINARY/INT/LONG/DECIMAL/FLOAT/DOUBLE types as first argument, got DATE (state=42000,code=40000)

This issue does not occur if the Iceberg table is created through Hive.

Workaround: None.
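Because the issue does not occur when the table is created through Hive, creating the bucketed table from Beeline is a possible avoidance path. This is a sketch with hypothetical table and column names:

```sql
-- Hive DDL sketch: create the Iceberg table bucketed by a DATE column
-- from Hive rather than Spark (hypothetical table and column names).
CREATE TABLE sales (id BIGINT, sale_date DATE)
PARTITIONED BY SPEC (bucket(16, sale_date))
STORED BY ICEBERG;
```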