Prerequisites and limitations for using Iceberg
To use Apache Iceberg in CDE, you'll need the following prerequisites:
- CDE Virtual Cluster with Spark 3.2.1 or higher
- CDP Private Cloud Base 7.1.7 SP2 or 7.1.8
Limitations
- The use of Iceberg tables as Structured Streaming sources or sinks is not supported.
- PyIceberg is not supported. Querying Iceberg tables with Spark SQL from PySpark is supported, as shown in the sketch after this list.
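The following is a minimal PySpark sketch of the supported pattern, querying an Iceberg table through Spark SQL rather than through the PyIceberg library. The database and table names (db.sample_table) are placeholders, and the SparkSession is assumed to be configured with the Iceberg Spark runtime and catalog for your CDE Virtual Cluster:

```python
from pyspark.sql import SparkSession

# Assumes the CDE job's SparkSession is already configured with the
# Iceberg Spark runtime and catalog; table name is a placeholder.
spark = SparkSession.builder.appName("iceberg-sql-example").getOrCreate()

# Query an existing Iceberg table through Spark SQL (supported),
# not through the PyIceberg library (not supported).
df = spark.sql("SELECT id, name FROM db.sample_table WHERE id > 100")
df.show()
```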
Iceberg table format version 2
Iceberg table format version 2 (v2) is available starting in Iceberg 0.14. Iceberg table format v2 supports row-level UPDATE and DELETE operations through delete files, which encode the rows that have been deleted from existing data files. The DELETE, UPDATE, and MERGE operations write these delete files instead of rewriting the affected data files, and when the data is read, the encoded deletes are applied to the affected rows. This functionality is called merge-on-read.
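As a sketch of the row-level operations described above, the following Spark SQL statements (run here through PySpark; the table and column names such as db.customers are hypothetical) illustrate DELETE, UPDATE, and MERGE. On a format v2 table with merge-on-read write modes, these statements produce delete files rather than rewriting whole data files:

```python
# Hypothetical tables and columns. With merge-on-read write modes on an
# Iceberg v2 table, these statements write delete files instead of
# rewriting the affected data files.
spark.sql("DELETE FROM db.customers WHERE last_active < DATE '2020-01-01'")

spark.sql("UPDATE db.customers SET tier = 'gold' WHERE total_spend > 10000")

spark.sql("""
    MERGE INTO db.customers t
    USING db.customer_updates s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET t.tier = s.tier
    WHEN NOT MATCHED THEN INSERT *
""")
```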
To use Iceberg table format v2, you'll need the following prerequisites:
- Iceberg 0.14
- Spark 3.2 or higher
With Iceberg table format version 1 (v1), the above-mentioned operations are supported only with copy-on-write, where data files are rewritten in their entirety when rows in them are deleted. Merge-on-read is more efficient for writes, while copy-on-write is more efficient for reads.
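The following is a minimal sketch, using a hypothetical table db.events, of creating a format v2 table and choosing between merge-on-read and copy-on-write per operation through standard Iceberg table properties:

```python
# Create a format v2 table and select merge-on-read for row-level
# writes; the table name and columns are hypothetical.
spark.sql("""
    CREATE TABLE db.events (
        event_id BIGINT,
        event_ts TIMESTAMP,
        payload  STRING
    ) USING iceberg
    TBLPROPERTIES (
        'format-version'    = '2',
        'write.delete.mode' = 'merge-on-read',
        'write.update.mode' = 'merge-on-read',
        'write.merge.mode'  = 'merge-on-read'
    )
""")

# Switching an operation back to copy-on-write (the v1 behavior) trades
# slower row-level writes for faster reads.
spark.sql("""
    ALTER TABLE db.events SET TBLPROPERTIES (
        'write.delete.mode' = 'copy-on-write'
    )
""")
```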