Installing Spark
When you install Spark, the following directories are created:
/usr/hdp/current/spark-client, for submitting Spark jobs
/usr/hdp/current/spark-history, for launching Spark master processes, such as the Spark History Server
/usr/hdp/current/spark-thriftserver, for the Spark Thrift Server
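After installation, a quick way to confirm that these links exist is to list them. This is only a sanity check; the exact set of entries under /usr/hdp/current depends on which Spark packages are installed on the node:
ls -ld /usr/hdp/current/spark-*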
To install Spark:
Search for Spark in the HDP repo:
For RHEL or CentOS:
yum search spark
For SLES:
zypper search spark
For Ubuntu and Debian:
apt-cache search spark
This shows all the versions of Spark available. For example:
spark_<version>_<build>-master.noarch : Server for Spark master
spark_<version>_<build>-python.noarch : Python client for Spark
spark_<version>_<build>-worker.noarch : Server for Spark worker
spark_<version>_<build>.noarch : Lightning-Fast Cluster Computing
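If you are not sure which HDP version is currently active on the node, the hdp-select utility (normally present on HDP nodes) can list the installed versions, for example:
hdp-select versions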
Install the version corresponding to the HDP version you currently have installed.
For RHEL or CentOS:
yum install spark_<version>-master spark_<version>-python
For SLES:
zypper install spark_<version>-master spark_<version>-python
For Ubuntu and Debian:
apt-get install spark_<version>-master spark_<version>-python
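As an optional check, you can confirm which Spark packages the package manager actually installed. For RHEL, CentOS, or SLES:
rpm -qa | grep spark
For Ubuntu and Debian:
dpkg -l | grep spark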
Before you launch the Spark Shell or Thrift Server, make sure that you set $JAVA_HOME:
export JAVA_HOME=<path to JDK 1.8>
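To verify that $JAVA_HOME points at a usable JDK, check the Java version it resolves to:
echo $JAVA_HOME
$JAVA_HOME/bin/java -version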
Change the owner of /var/log/spark to spark:hadoop:
chown spark:hadoop /var/log/spark
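You can confirm the new ownership with a directory listing:
ls -ld /var/log/spark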