Confirm that a Data Share is correctly set up and accessible to an external Data
Consumer by exchanging the Client ID and Secret for a Knox access token and listing the
shared Iceberg tables using the Cloudera Iceberg REST Catalog
API.
After publishing a Data Share and distributing credentials to a Data Consumer, you
can verify end-to-end access without a compute engine such as Spark, Snowflake, or
Databricks. This procedure uses curl to perform two checks: it
exchanges the Client ID and Secret for a short-lived Knox JWT access token, then
uses that token to call the Cloudera Iceberg REST Catalog API
and list the namespaces and tables that are visible to the consumer. Successful
completion confirms that Knox credential exchange, Ranger authorization, and Iceberg
REST routing are all functioning correctly for the Data Share.
The Data Share is published (Shared status) and the
target Iceberg table has been added as an asset.
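The two curl checks can also be scripted. The sketch below builds the same requests in Python using only the standard library. The Knox gateway token path and the catalog base URL shown here are assumptions, substitute the endpoints distributed with your Data Share credentials; the /v1/namespaces/{namespace}/tables path follows the Apache Iceberg REST Catalog specification.

```python
import base64
import json
import urllib.request

# Placeholder endpoints -- substitute the values distributed with the
# Data Share credentials; both URLs below are assumptions.
KNOX_TOKEN_URL = "https://<knox-host>/gateway/knoxtoken/api/v1/token"
CATALOG_BASE_URL = "https://<catalog-host>/iceberg-rest"


def basic_auth_header(client_id: str, client_secret: str) -> str:
    """HTTP Basic header used to exchange the Client ID and Secret."""
    raw = f"{client_id}:{client_secret}".encode()
    return "Basic " + base64.b64encode(raw).decode()


def bearer_header(access_token: str) -> str:
    """Bearer header carrying the short-lived Knox JWT."""
    return "Bearer " + access_token


def list_tables_url(base_url: str, namespace: str) -> str:
    """Iceberg REST Catalog path for listing the tables in a namespace."""
    return f"{base_url}/v1/namespaces/{namespace}/tables"


def fetch_json(url: str, auth_header: str) -> dict:
    """GET a JSON resource with the given Authorization header."""
    req = urllib.request.Request(url, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Check 1: exchange the credentials for a Knox access token.
# token = fetch_json(KNOX_TOKEN_URL,
#                    basic_auth_header("<client-id>", "<client-secret>"))["access_token"]
# Check 2: list the namespaces and tables visible to the consumer.
# namespaces = fetch_json(CATALOG_BASE_URL + "/v1/namespaces", bearer_header(token))
# tables = fetch_json(list_tables_url(CATALOG_BASE_URL, "<namespace>"), bearer_header(token))
```

A populated identifiers array in the tables response corresponds to the assets added to the Data Share.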
The response contains the full Iceberg table metadata. Confirm that the
metadata.schemas[0].fields array lists the expected columns and that each
entry includes the name, type, and required attributes.
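To make this schema check repeatable, the relevant fields can be extracted from the JSON programmatically. The payload below is an abbreviated, illustrative stand-in for a real response (the column names and metadata location are invented), shaped like a LoadTableResponse from the Iceberg REST Catalog specification:

```python
import json

# Abbreviated sample of a LoadTableResponse; values are illustrative only.
sample_response = json.loads("""
{
  "metadata-location": "s3a://bucket/warehouse/db/tbl/metadata/00001.metadata.json",
  "metadata": {
    "format-version": 2,
    "schemas": [
      {
        "schema-id": 0,
        "type": "struct",
        "fields": [
          {"id": 1, "name": "id", "required": true, "type": "long"},
          {"id": 2, "name": "ts", "required": false, "type": "timestamp"}
        ]
      }
    ]
  }
}
""")


def schema_columns(response: dict) -> list:
    """Return (name, type, required) for each field in the first schema."""
    fields = response["metadata"]["schemas"][0]["fields"]
    return [(f["name"], f["type"], f["required"]) for f in fields]


print(schema_columns(sample_response))
# -> [('id', 'long', True), ('ts', 'timestamp', False)]
```

Comparing this list against the columns you expect the consumer to see completes the schema check.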
A JSON response with populated metadata and
metadata-location fields confirms that the Data Consumer
can read the table schema. The Data Share is correctly set up and externally
accessible.
The external consumer's Client ID and Secret are successfully exchanged for a Knox
access token, and the Cloudera Iceberg REST Catalog confirms
access to the shared namespace, table, and schema. The Data Share end-to-end
verification is complete.
Once verified, the Data Consumer can configure any Iceberg REST Catalog-compatible
compute engine using the same Client ID and Secret. For an example using PySpark,
see the Related information.