Use JDBC Connection with PySpark

PySpark can be used with JDBC connections, but this is not recommended. The recommended approach for JDBC connections is Impyla. For more information, see Connect to CDW.
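If you do follow the Impyla route, Impyla's `connect()` takes the host, port, and HTTP path as separate arguments rather than a single JDBC string. A minimal sketch of splitting a CDW connection string into those pieces, assuming the usual `jdbc:hive2://host:port/db;transportMode=http;httpPath=...;ssl=true` shape (the helper name and the sample host below are illustrative, not part of any API):

```python
def jdbc_to_impyla_args(jdbc_url):
    """Split a jdbc:hive2:// connection string into kwargs for
    impala.dbapi.connect(). Assumes the common CDW URL shape:
    jdbc:hive2://host:port/db;transportMode=http;httpPath=...;ssl=true
    """
    assert jdbc_url.startswith("jdbc:hive2://")
    rest = jdbc_url[len("jdbc:hive2://"):]
    # Host and port come before the first "/", session options after ";"
    hostport, _, params = rest.partition("/")
    _db, _, opts = params.partition(";")
    host, _, port = hostport.partition(":")
    kwargs = {"host": host, "port": int(port or 443)}
    for opt in opts.split(";"):
        key, _, value = opt.partition("=")
        if key == "httpPath":
            kwargs["http_path"] = value
        elif key == "transportMode" and value == "http":
            kwargs["use_http_transport"] = True
        elif key == "ssl" and value == "true":
            kwargs["use_ssl"] = True
    return kwargs
```

You would then pass the result, plus your user name and password, to `impala.dbapi.connect(**kwargs, user=..., password=..., auth_mechanism="LDAP")`.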

  1. In your session, open the workbench and add the following code.
  2. Obtain the JDBC connection string, as described above, and paste it into the script where the “jdbc” string is shown. You will also need to insert your user name and password, or create environment variables to hold those values.
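If you prefer environment variables to hard-coded credentials, step 2 can be sketched as below. The names CDW_USER and CDW_PASSWORD are hypothetical; use whatever names your project sets, as long as the script and the environment agree.

```python
import os

def read_cdw_credentials(user_var="CDW_USER", password_var="CDW_PASSWORD"):
    """Read the user name and password from environment variables
    instead of embedding them in the script. The variable names are
    placeholders, not anything CDW itself defines."""
    return os.environ.get(user_var, ""), os.environ.get(password_var, "")
```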

This example shows how to read external Hive tables using Spark and a Hive Virtual Warehouse.

from pyspark.sql import SparkSession
from pyspark_llap.sql.session import HiveWarehouseSession

# Paste your JDBC connection string in place of the "jdbc" placeholder,
# with your user name and password appended to it (or read them from
# environment variables, as described above).
spark = SparkSession\
    .builder\
    .appName("CDW-CML-JDBC")\
    .config("spark.sql.hive.hiveserver2.jdbc.url", "jdbc")\
    .config("spark.datasource.hive.warehouse.read.jdbc.mode", "client")\
    .getOrCreate()

hive = HiveWarehouseSession.session(spark).build()
hive.sql("select * from foo").show()