Integrating Apache Hive with Kafka, Spark, and BI
Zeppelin configuration for using the Hive Warehouse Connector

You can use the Hive Warehouse Connector in Zeppelin notebooks with the spark2 interpreter by adding or modifying the following properties in your spark2 interpreter settings.

Interpreter properties

  • spark.jars

    /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar

  • spark.submit.pyFiles

    /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip

  • spark.hadoop.hive.llap.daemon.service.hosts

    App name for LLAP service. In Ambari, copy the value from Services > Hive > Configs > Advanced hive-interactive-site > hive.llap.daemon.service.hosts.

  • spark.sql.hive.hiveserver2.jdbc.url

    URL for HiveServer2 Interactive. In Ambari, copy the value from Services > Hive > Summary > HIVESERVER2 INTERACTIVE JDBC URL.

  • spark.yarn.security.credentials.hiveserver2.enabled

Enable this property (set it to true) only on a kerberized cluster running in cluster mode; otherwise, leave it disabled.

  • spark.sql.hive.hiveserver2.jdbc.url.principal

    Kerberos principal for HiveServer2 Interactive. In Ambari, copy the value from Advanced hive-site > hive.server2.authentication.kerberos.principal.

  • spark.hadoop.hive.zookeeper.quorum

    ZooKeeper hosts used by LLAP. In Ambari, copy the value from Services > Hive > Configs > Advanced hive-site > hive.zookeeper.quorum.
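
Taken together, the spark2 interpreter settings might look like the following sketch. The LLAP app name, JDBC URL, Kerberos principal, hostnames, and ports below are placeholder values for illustration only; replace them with the values you copied from Ambari for your cluster, and substitute the actual HWC assembly and pyspark_hwc version numbers installed on your system.

```
spark.jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar
spark.submit.pyFiles /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip
spark.hadoop.hive.llap.daemon.service.hosts @llap0
spark.sql.hive.hiveserver2.jdbc.url jdbc:hive2://hiveserver2-host.example.com:10500/
spark.yarn.security.credentials.hiveserver2.enabled true
spark.sql.hive.hiveserver2.jdbc.url.principal hive/_HOST@EXAMPLE.COM
spark.hadoop.hive.zookeeper.quorum zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```

On a cluster without Kerberos, you would typically omit the two Kerberos-related properties. After saving the settings, restart the spark2 interpreter so the changes take effect.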