Use JDBC Connection with PySpark

  1. In your session, open the workbench and add the following code.
  2. Obtain the JDBC connection string, as described above, and paste it into the script where the JDBC URL is expected. You will also need to supply your username and password, either directly in the script or through environment variables that hold those values.
from pyspark.sql import SparkSession
from pyspark_llap.sql.session import HiveWarehouseSession

# Build a SparkSession configured for the Hive Warehouse Connector.
# Paste the JDBC connection string you obtained earlier as the value of
# "spark.sql.hive.hiveserver2.jdbc.url".
spark = SparkSession\
.builder\
.appName("pyspark-jdbc-example")\
.config("spark.security.credentials.hiveserver2.enabled", "false")\
.config("spark.datasource.hive.warehouse.read.jdbc.mode", "client")\
.config("spark.sql.hive.hiveserver2.jdbc.url", "<paste the JDBC connection string here>")\
.getOrCreate()

# Create a Hive Warehouse Connector session from the SparkSession,
# then run a query against a Hive table to verify the connection.
hive = HiveWarehouseSession.session(spark).build()
hive.sql("select * from foo").show()
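Step 2 above suggests holding the credentials in environment variables rather than hard-coding them. A minimal sketch of that approach is below; the variable names `HIVE_USER` and `HIVE_PASSWORD`, the placeholder hostname, and the option list in the URL are assumptions for illustration, not values from this document — substitute the actual JDBC connection string you copied earlier.

```python
import os

# HIVE_USER / HIVE_PASSWORD are hypothetical names; use whatever variable
# names you configured in your project's environment settings.
user = os.environ.get("HIVE_USER", "")
password = os.environ.get("HIVE_PASSWORD", "")

# <hostname> and the options below are illustrative; use your own JDBC
# connection string as the template and append the credentials to it.
jdbc_url = (
    "jdbc:hive2://<hostname>/default;transportMode=http;httpPath=cliservice;"
    f"ssl=true;user={user};password={password}"
)
```

The resulting `jdbc_url` string can then be passed as the value of the `spark.sql.hive.hiveserver2.jdbc.url` config in the script above, keeping secrets out of your source code.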