Configuring HBase-Spark connector using Cloudera Manager when HBase and Spark are on the same cluster

Learn how to configure the HBase-Spark connector when both HBase and Spark are on the same cluster.

  • Ensure that every Spark node has the HBase Master, RegionServer, or Gateway role assigned to it. If no HBase role is assigned to a Spark node, add the HBase Gateway role to it so that the HBase configuration files are available on that Spark node. For more information, see Managing Roles.
  1. Go to the Spark service.
  2. Click the Configuration tab.
  3. Ensure that the HBase service is selected as a dependency in the Spark service configuration.
  4. Select Scope > Gateway.
  5. Select Category > Advanced.
  6. Locate the spark-defaults.conf safety valve.
    Find the Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-defaults.conf property, or search for it by typing its name in the Search box.
  7. Add the required properties to ensure that all necessary Phoenix and HBase platform dependencies are available on the classpath for the Spark executors and drivers.
    1. Upload all the necessary jar files to a distributed filesystem, for example HDFS (GS, ABFS, or S3A can also be used). If the CDH version on the remote HBase cluster is different, run the hbase mapredcp command on that HBase cluster and copy the listed jars to the /path/hbase_jars_common location so that the Spark applications can use them.
      • Spark3 related files:
        hdfs dfs -mkdir /path/hbase_jars_spark3
      • Common files for Spark:
        hdfs dfs -mkdir /path/hbase_jars_common
        hdfs dfs -put `hbase mapredcp | tr : " "` /path/hbase_jars_common
    2. Download the /etc/hbase/conf/hbase-site.xml file from the remote HBase cluster and update its truststore password with the Data Engineering DataHub truststore password.
    3. Create the hbase-site.xml.jar file. Packaging hbase-site.xml at the root of the jar file allows it to be added to the classpath through the spark.jars parameter.
      jar cf hbase-site.xml.jar hbase-site.xml
      hdfs dfs -put hbase-site.xml.jar /path/hbase_jars_common
    4. Download the truststore JKS file from the remote HBase cluster.
    5. Upload the Spark related files:
      hdfs dfs -put /opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3.jar /path/hbase_jars_spark3
      hdfs dfs -put /opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3-protocol-shaded.jar /path/hbase_jars_spark3
    6. Add all the Spark version-related files and the hbase mapredcp files to the spark.jars parameter:
      • spark.jars=hdfs:///path/hbase_jars_common/hbase-site.xml.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3-protocol-shaded.jar,/path/hbase_jars_common(other common files)...
  8. Enter a Reason for change, and then click Save Changes to commit the changes.
  9. Restart the role and service when Cloudera Manager prompts you to restart.
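
    After the services restart, you can optionally verify the classpath from a Spark 3 shell, which picks up the spark-defaults.conf deployed by Cloudera Manager. The following is only a sanity-check sketch; the class names assume the standard packaging of the HBase-Spark connector and HBase client jars:

      // Run inside spark-shell: both calls throw ClassNotFoundException
      // if the jars listed in spark.jars did not reach the classpath.
      Class.forName("org.apache.hadoop.hbase.spark.DefaultSource")
      Class.forName("org.apache.hadoop.hbase.HBaseConfiguration")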

    Perform the following steps to configure the HBase RegionServer:

    Edit the HBase RegionServer configuration so that it can run Spark filters. Spark filters are used when Spark SQL WHERE clauses are in use (see the example after these steps).

    1. In Cloudera Manager, select the HBase service.
    2. Click the Configuration tab.
    3. Search for regionserver environment.
    4. Find the RegionServer Environment Advanced Configuration Snippet (Safety Valve).
    5. Click the plus icon to add the following property:

      Key: HBASE_CLASSPATH

      Value:
      /opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3-protocol-shaded.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/scala-library.jar
    6. Ensure that the listed jars have the correct version number in their name.
    7. Click Save Changes.
    8. Restart the Region Server.
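
    The example below sketches the kind of Spark SQL query that relies on this filter support. It assumes an existing HBase table named person with a p column family; the table name, column family, and column mapping are illustrative only and follow the connector's DataFrame API:

      // Read the HBase table through the connector and expose it to Spark SQL.
      // hbase.spark.use.hbasecontext=false lets the connector create its own
      // connection from the hbase-site.xml on the classpath.
      val personDF = spark.read
        .format("org.apache.hadoop.hbase.spark")
        .option("hbase.columns.mapping", "name STRING :key, email STRING p:email, age INT p:age")
        .option("hbase.table", "person")
        .option("hbase.spark.use.hbasecontext", false)
        .load()
      personDF.createOrReplaceTempView("personView")

      // Filters derived from the WHERE clause can be pushed down to the
      // RegionServers, which is why the connector jars are added to
      // HBASE_CLASSPATH above.
      spark.sql("SELECT name, email FROM personView WHERE age > 21").show()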

Build a Spark application using the dependencies that you provide when you run your application. If you followed the previous instructions, Cloudera Manager automatically configures the connector for Spark. If you have not, add the dependencies manually when you launch the application:

  • Consider the following example for the Spark application:
    spark-shell --conf spark.jars=hdfs:///path/hbase_jars_common/hbase-site.xml.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3-protocol-shaded.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3.jar,hdfs:///path/hbase_jars_common/hbase-shaded-mapreduce-***VERSION NUMBER***.jar,hdfs:///path/hbase_jars_common/opentelemetry-api-***VERSION NUMBER***.jar,hdfs:///path/hbase_jars_common/opentelemetry-context-***VERSION NUMBER***.jar
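
Once the application or a spark-shell session is running with these dependencies, the connector is used through the Spark DataFrame API. The following is a minimal sketch that writes a small DataFrame to HBase; the table name (person), the p column family, and the column mapping are example values, not part of the configuration above:

  // Example values: a 'person' table with a 'p' column family must already
  // exist in HBase (for example, created from the hbase shell).
  val people = Seq(("alice", "alice@example.com", 25), ("bob", "bob@example.com", 30))
  val peopleDF = spark.createDataFrame(people).toDF("name", "email", "age")

  peopleDF.write
    .format("org.apache.hadoop.hbase.spark")
    .option("hbase.columns.mapping", "name STRING :key, email STRING p:email, age INT p:age")
    .option("hbase.table", "person")
    .option("hbase.spark.use.hbasecontext", false)
    .save()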