Configuring JDBC execution mode
You configure Apache Spark to connect to HiveServer (HS2) in a few steps. An example after the steps shows how to set these properties when launching the Spark shell.
- Accept the default and recommended spark.datasource.hive.warehouse.read.jdbc.mode=cluster for the location of query execution.
- Accept the default spark.datasource.hive.warehouse.load.staging.dir for the temporary staging location required by HWC.
- Check that spark.hadoop.hive.zookeeper.quorum is configured.
- Set Kerberos configurations for HWC, or for an unsecured cluster, set spark.security.credentials.hiveserver2.enabled=false.
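The following sketch shows one way to pass these properties when launching the Spark shell on an unsecured cluster. The HWC assembly jar path, the staging directory, and the ZooKeeper quorum hosts are placeholder values for your environment, not values taken from this section; on a Kerberized cluster, supply the Kerberos configurations for HWC instead of disabling HiveServer2 credentials.

# Launch the Spark shell with the HWC JDBC (cluster) execution mode properties.
# Replace the jar path, staging directory, and ZooKeeper hosts with values for your cluster.
spark-shell --jars /path/to/hive-warehouse-connector-assembly.jar \
  --conf spark.datasource.hive.warehouse.read.jdbc.mode=cluster \
  --conf spark.datasource.hive.warehouse.load.staging.dir=/tmp/hwc_staging \
  --conf spark.hadoop.hive.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181 \
  --conf spark.security.credentials.hiveserver2.enabled=false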