Developing Apache Spark Applications

Building and Running a Secure Spark Streaming Job

Use the following steps to build and run a secure Spark streaming job.

Depending on your compilation and build processes, one or more of the following tasks might be required before running a Spark Streaming job:

  • If you are using Maven as your build tool:

    1. Add the Hortonworks repository to your pom.xml file.
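       The repository entry takes roughly the following shape; the `<url>` value shown is illustrative, so substitute the repository location for your HDP release:

       ```xml
       <repository>
           <id>hortonworks</id>
           <name>hortonworks repo</name>
           <!-- Illustrative URL; use the repository location for your HDP release -->
           <url>http://repo.hortonworks.com/content/repositories/releases/</url>
       </repository>
       ```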
    2. Specify the Hortonworks version number for the Spark streaming and streaming-Kafka dependencies in your pom.xml file.

      Note that the correct version number combines the Spark version and the HDP version.
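       As a sketch, the dependency entries might look like the following; the version string is illustrative (it concatenates a Spark version with an HDP build number), so use the version that matches your cluster:

       ```xml
       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-streaming_2.10</artifactId>
           <!-- Illustrative version of the form <spark-version>.<hdp-version> -->
           <version>1.6.2.2.5.0.0-1245</version>
           <scope>provided</scope>
       </dependency>
       <dependency>
           <groupId>org.apache.spark</groupId>
           <artifactId>spark-streaming-kafka_2.10</artifactId>
           <version>1.6.2.2.5.0.0-1245</version>
       </dependency>
       ```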

    3. (Optional) If you prefer to pack an uber .jar rather than rely on the default "provided" scope, add the maven-shade-plugin to your pom.xml file.
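       A minimal plugin entry might look like this (the plugin version shown is illustrative):

       ```xml
       <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-shade-plugin</artifactId>
           <version>2.3</version>
           <executions>
               <execution>
                   <!-- Bind the shade goal to the package phase to produce the uber .jar -->
                   <phase>package</phase>
                   <goals>
                       <goal>shade</goal>
                   </goals>
               </execution>
           </executions>
       </plugin>
       ```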
  • Instructions for submitting your job depend on whether you built an uber .jar file:

    • If you kept the default .jar scope and you can access an external network, use --packages to download dependencies in the runtime library:

      spark-submit --master yarn-client \
          --num-executors 1 \
          --packages org.apache.spark:spark-streaming-kafka_2.10:<version> \
          --repositories <repository-url> \
          --class <user-main-class> \
          <user-application.jar> \
          <user arg lists>

      The artifact and repository locations should be the same as specified in your pom.xml file.

    • If you packed the .jar file into an uber .jar, submit the .jar file in the same way as you would a regular Spark application:

      spark-submit --master yarn-client \
          --num-executors 1 \
          --class <user-main-class> \
          <user-uber-application.jar> \
          <user arg lists>

For a sample pom.xml file, see "Sample pom.xml file for Spark Streaming with Kafka" in this guide.