Configuring Catalogs

When querying an Iceberg table with Spark SQL, you refer to the table using the following dot notation:

<catalog_name>.<database_name>.<table_name>

The default catalog used by Spark is named spark_catalog. When referring to a table in a database known to spark_catalog, you can omit <catalog_name>.
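To make the notation concrete, here is a sketch of the two equivalent forms; the database name db and table name taxis are illustrative, not from the source:

```sql
-- Fully qualified reference: <catalog_name>.<database_name>.<table_name>
SELECT * FROM spark_catalog.db.taxis;

-- Shorthand: the catalog name can be omitted when the database
-- is known to the default catalog, spark_catalog
SELECT * FROM db.taxis;
```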

Iceberg provides a SparkCatalog that understands Iceberg tables, and a SparkSessionCatalog that understands both Iceberg and non-Iceberg tables (under the hood, it delegates to SparkCatalog to load Iceberg tables and to Spark’s built-in catalog to load non-Iceberg tables). You can replace Spark’s default catalog with Iceberg’s SparkSessionCatalog by setting spark_catalog to that implementation; set the following properties when you define the Spark job:
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=hive 
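One common way to set these properties when defining the job is to pass them as --conf flags at launch. This is a sketch for the spark-sql shell; it assumes the Iceberg Spark runtime jar matching your Spark and Scala versions is already on the classpath, which is not covered by the properties themselves:

```shell
# Replace Spark's default session catalog with Iceberg's
# SparkSessionCatalog, backed by a Hive Metastore (type=hive).
# Assumes the iceberg-spark-runtime jar is available to Spark.
spark-sql \
  --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
  --conf spark.sql.catalog.spark_catalog.type=hive
```

The same properties can equally be set in spark-defaults.conf or on the SparkSession builder when the job is defined in code.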
You can use the session catalog to query tables, but to inspect an Iceberg table (that is, to query its metadata tables, such as viewing its history), you need to configure and use another catalog. For example:
spark.sql.catalog.iceberg_catalog=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.iceberg_catalog.type=hive
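With such a catalog in place, metadata tables are addressed by appending the metadata table name after the table identifier. A brief sketch, again using the illustrative names db and taxis:

```sql
-- Query the table's commit history through the dedicated Iceberg catalog
SELECT * FROM iceberg_catalog.db.taxis.history;

-- Other metadata tables follow the same pattern, e.g. snapshots
SELECT * FROM iceberg_catalog.db.taxis.snapshots;
```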
You can configure more than one catalog in the same Spark job. For more information, see the Iceberg documentation.