Using JDBC Connection with PySpark

PySpark can be used with Java Database Connectivity (JDBC), but this is not recommended. The recommended approach for JDBC connections is Impyla.
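As a sketch of the recommended Impyla approach — the host, port, credentials, and authentication mechanism below are placeholders you must replace with your warehouse's actual values:

```python
from impala.dbapi import connect

# Placeholder connection details -- substitute your own host, port,
# credentials, and auth mechanism before running.
conn = connect(
    host="example-host",
    port=21050,
    user="username",
    password="password",
    use_ssl=True,
    auth_mechanism="LDAP",
)

# Impyla follows the Python DB-API: open a cursor, execute SQL, fetch rows.
cur = conn.cursor()
cur.execute("SELECT * FROM foo LIMIT 10")
for row in cur.fetchall():
    print(row)
```

Because this connects to a live warehouse, it cannot run as-is; storing the username and password in environment variables rather than in the script is the safer pattern.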

  1. In your session, open the workbench and add the following code.
  2. Obtain the JDBC connection string and paste it into the script where the “jdbc” string is shown. You will also need to insert your username and password, or create environment variables to hold those values.

This example shows how to read external Hive tables using Spark and a Hive Virtual Warehouse.

from pyspark.sql import SparkSession
from pyspark_llap.sql.session import HiveWarehouseSession

# Replace "jdbc" below with your JDBC connection string.
spark = SparkSession\
    .builder\
    .appName("pyspark-hwc-jdbc")\
    .config("spark.datasource.hive.warehouse.read.jdbc.mode", "client")\
    .config("spark.sql.hive.hiveserver2.jdbc.url", "jdbc")\
    .getOrCreate()

hive = HiveWarehouseSession.session(spark).build()
hive.sql("select * from foo").show()