Iceberg library dependencies for Spark applications

If your Spark application only uses Spark SQL to create, read, or write Iceberg tables, and does not use any Iceberg APIs, you do not need to build it against any Iceberg dependencies. The runtime dependencies needed for Spark to use Iceberg are in the Spark classpath by default. If your code uses Iceberg APIs, then you need to build it against Iceberg dependencies.
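
For example, a job like the following Scala sketch uses only Spark SQL to create and query an Iceberg table, so it compiles without any Iceberg dependencies; the object name, application name, and the db.sample table are illustrative:

import org.apache.spark.sql.SparkSession

object IcebergSqlOnlyJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("iceberg-sql-only").getOrCreate()

    // Spark SQL statements only; the Iceberg runtime classes that execute
    // them are already on the Spark classpath.
    spark.sql("CREATE TABLE IF NOT EXISTS db.sample (id BIGINT, data STRING) USING iceberg")
    spark.sql("INSERT INTO db.sample VALUES (1, 'a'), (2, 'b')")
    spark.sql("SELECT * FROM db.sample").show()

    spark.stop()
  }
}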

Cloudera publishes Iceberg artifacts to a Maven repository with versions matching the Iceberg version in CDE. To compile against the Iceberg APIs, add dependencies such as the following to your Maven pom.xml:

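<!-- for core Iceberg classes -->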
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-core</artifactId>
    <version>${iceberg.version}</version>
    <scope>provided</scope>
</dependency>
<!-- for org.apache.iceberg.hive.HiveCatalog -->
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-hive-metastore</artifactId>
    <version>${iceberg.version}</version>
    <scope>provided</scope>
</dependency>
<!-- for org.apache.iceberg.spark.* classes if used -->
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-spark</artifactId>
    <version>${iceberg.version}</version>
    <scope>provided</scope>
</dependency>
Alternatively, a single runtime dependency can be used in place of the individual modules:
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-spark3-runtime</artifactId>
    <version>${iceberg.version}</version>
    <scope>provided</scope>
</dependency>
or, depending on the Iceberg version in your environment, the equivalent runtime artifact whose name includes the Spark and Scala versions:
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-spark-runtime-3.3_2.12</artifactId>
    <version>${iceberg.version}</version>
    <scope>provided</scope>
</dependency>

The runtime JAR (iceberg-spark3-runtime or iceberg-spark-runtime-3.3_2.12) contains the Iceberg classes needed for Spark runtime support and includes the classes from the dependencies listed above.
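
As an illustration of code that does require these compile-time dependencies, the following Scala sketch loads a table programmatically through org.apache.iceberg.hive.HiveCatalog; the catalog name and the db.sample table identifier are illustrative, and the example assumes the Hive metastore connection details are available from the Hadoop configuration on the classpath:

import scala.collection.JavaConverters._

import org.apache.hadoop.conf.Configuration
import org.apache.iceberg.catalog.TableIdentifier
import org.apache.iceberg.hive.HiveCatalog

object HiveCatalogExample {
  def main(args: Array[String]): Unit = {
    // Requires iceberg-core and iceberg-hive-metastore at compile time
    // (or one of the runtime JARs listed above).
    val catalog = new HiveCatalog()
    catalog.setConf(new Configuration()) // picks up hive-site.xml from the classpath
    catalog.initialize("hive", Map.empty[String, String].asJava)

    // "db.sample" is an illustrative database and table name.
    val table = catalog.loadTable(TableIdentifier.of("db", "sample"))
    println(table.schema())
  }
}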

After compiling your application, you can create and run CDE jobs. For more information, see Creating Spark jobs and Running a Spark job.