Managing Dependencies for Spark 2 Jobs
As with any Spark job, you can add external packages to the executors on startup. To add external dependencies to Spark jobs, specify the libraries you want added by using the appropriate configuration parameter in a spark-defaults.conf file. The following table lists the most commonly used configuration parameters for adding dependencies and how they can be used:
spark.files: Comma-separated list of files to be placed in the working directory of each Spark executor.

spark.submit.pyFiles: Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python applications.

spark.jars: Comma-separated list of local jars to include on the Spark driver and Spark executor classpaths.

spark.jars.packages: Comma-separated list of Maven coordinates of jars to include on the Spark driver and Spark executor classpaths. The coordinates take the form groupId:artifactId:version. When configured, Spark will search the local Maven repository, then Maven Central and any additional remote repositories.

spark.jars.repositories: Comma-separated list of additional remote repositories to search for the coordinates given with spark.jars.packages.
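As an illustration of the Maven-coordinate parameters, entries in spark-defaults.conf might look like the following sketch. The coordinate and repository URL here are examples chosen for illustration, not values from this project:

```
# Pull a dependency by Maven coordinate (groupId:artifactId:version)
spark.jars.packages      org.apache.commons:commons-csv:1.9.0

# Additional remote repository to search for the coordinates above
spark.jars.repositories  https://repository.example.com/maven
```

Values in spark-defaults.conf are whitespace-separated key/value pairs, and lines beginning with # are comments.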
Here is a sample spark-defaults.conf file that uses some of the Spark configuration parameters discussed in the previous section to add external packages on startup.
The pre-existing jar, my_sample.jar, residing in the root of this project will be included on the Spark driver and executor classpaths.
The two sample data sets, including test_data_2.csv, from the /data directory of this project will be distributed to the working directory of each Spark executor.
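A minimal sketch of such a spark-defaults.conf, assuming the jar sits in the project root and the data files under /data as described above (only test_data_2.csv is named in this excerpt):

```
# Include the project's jar on the Spark driver and executor classpaths
spark.jars   my_sample.jar

# Distribute sample data to the working directory of each executor
spark.files  data/test_data_2.csv
```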
For more advanced configuration options, visit the Apache Spark 2 reference documentation.