JDBC mode configuration properties

You need to know the property names and valid values for configuring JDBC mode.

In configuration/spark-defaults.conf, or using the --conf option in spark-submit or spark-shell, set the following properties:

Name: spark.sql.extensions
Value: com.hortonworks.spark.sql.rule.Extensions
Name: spark.datasource.hive.warehouse.read.mode
Value: JDBC_CLUSTER or JDBC_CLIENT (configures the driver location)
Name: spark.sql.hive.hiveserver2.jdbc.url
Value: The JDBC endpoint for HiveServer. For more information, see the Apache Hive Wiki (link below). For Knox, provide the HiveServer endpoint, not the Knox endpoint.
Name: spark.datasource.hive.warehouse.load.staging.dir
Value: A temporary staging location required by HWC. Set the value to a file system location where the HWC user has write permission.
Name: spark.hadoop.hive.zookeeper.quorum
Value: The ZooKeeper quorum used by HiveServer, given as a list of ZooKeeper host:port pairs.
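
The following sketch shows how these properties fit together. The host names, port, staging path, and ZooKeeper addresses are placeholders, not values to copy verbatim; substitute the endpoints for your cluster.

Example spark-defaults.conf entries:

    spark.sql.extensions                              com.hortonworks.spark.sql.rule.Extensions
    spark.datasource.hive.warehouse.read.mode         JDBC_CLUSTER
    spark.sql.hive.hiveserver2.jdbc.url               jdbc:hive2://hiveserver-host.example.com:10000/default
    spark.datasource.hive.warehouse.load.staging.dir  /tmp/hwc_staging
    spark.hadoop.hive.zookeeper.quorum                zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

Equivalently, the same properties can be passed on the command line:

    spark-shell \
      --conf spark.sql.extensions=com.hortonworks.spark.sql.rule.Extensions \
      --conf spark.datasource.hive.warehouse.read.mode=JDBC_CLUSTER \
      --conf spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://hiveserver-host.example.com:10000/default \
      --conf spark.datasource.hive.warehouse.load.staging.dir=/tmp/hwc_staging \
      --conf spark.hadoop.hive.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181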