Integrating Apache Hive with Kafka, Spark, and BI

Catalog operations

Catalog operations include creating, dropping, and describing Hive databases and tables from Spark.
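
The operations below assume a HiveWarehouseSession instance named hive. A minimal setup sketch, run from spark-shell with the Hive Warehouse Connector on the classpath:

```scala
// Build a HiveWarehouseSession from an existing SparkSession (`spark`).
// Requires the Hive Warehouse Connector JAR and a configured
// spark.sql.hive.hiveserver2.jdbc.url pointing at HiveServer2.
import com.hortonworks.hwc.HiveWarehouseSession

val hive = HiveWarehouseSession.session(spark).build()
```

All subsequent catalog calls in this section are methods on this hive object.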

  • Set the current database for unqualified Hive table references

    hive.setDatabase(<database>)

  • Execute a catalog operation and return a DataFrame

    hive.execute("describe extended web_sales").show(100)

  • Show databases

    hive.showDatabases().show(100)

  • Show tables for the current database

    hive.showTables().show(100)

  • Describe a table

    hive.describeTable(<table_name>).show(100)

  • Create a database

    hive.createDatabase(<database_name>, <ifNotExists>)

  • Create an ORC table

    hive.createTable("web_sales").ifNotExists().column("sold_time_sk", "bigint").column("ws_ship_date_sk", "bigint").create()

    See the CreateTableBuilder interface section below for additional table creation options. Note: You can also create tables by passing standard HiveQL DDL to hive.executeUpdate.

  • Drop a database

    hive.dropDatabase(<databaseName>, <ifExists>, <useCascade>)

  • Drop a table

    hive.dropTable(<tableName>, <ifExists>, <usePurge>)
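
Putting the operations together, a typical sequence might look like the following sketch. The database name tpcds and the boolean argument values are illustrative assumptions, not required values:

```scala
// Assumes `hive` is a HiveWarehouseSession built as shown earlier.
// Create a database (ifNotExists = true) and make it current.
hive.createDatabase("tpcds", true)
hive.setDatabase("tpcds")

// Create an ORC table using the CreateTableBuilder interface.
hive.createTable("web_sales")
  .ifNotExists()
  .column("sold_time_sk", "bigint")
  .column("ws_ship_date_sk", "bigint")
  .create()

// Inspect the catalog; results come back as DataFrames.
hive.showTables().show(100)
hive.describeTable("web_sales").show(100)

// Clean up: drop the table (ifExists = true, usePurge = true),
// then the database (ifExists = true, useCascade = false).
hive.dropTable("web_sales", true, true)
hive.dropDatabase("tpcds", true, false)
```

Because showTables and describeTable return DataFrames, the usual DataFrame methods (show, collect, filter, and so on) apply to their results.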