Configuring HBase-Spark connector using Cloudera Manager when HBase is on a remote cluster

Learn how to configure the HBase-Spark connector when HBase resides on a remote cluster.

  1. In Cloudera Manager, go to the Spark service.
  2. Click the Configuration tab.
  3. Select Scope > Gateway.
  4. Select Category > Advanced.
  5. Locate the Spark 3 Client Advanced Configuration Snippet (Safety Valve) for spark3-conf/spark-defaults.conf property, or search for it by typing its name in the Search box.
  6. Add the properties that place all required Phoenix and HBase platform dependencies on the classpath of the Spark drivers and executors; a sketch of such properties follows this list.
  7. Enter a Reason for change, and then click Save Changes to commit the changes.
  8. Restart the role and service when Cloudera Manager prompts you to restart.
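
    For example, a minimal sketch of what step 6 might add to spark-defaults.conf, reusing the connector JAR paths shown later in this section; the exact properties and paths depend on your deployment and connector version, so verify them against your cluster:

      # Hypothetical snippet: put the HBase-Spark connector JARs on the driver and executor classpaths
      spark.driver.extraClassPath=/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3-protocol-shaded.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/scala-library.jar
      spark.executor.extraClassPath=/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3-protocol-shaded.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/scala-library.jar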

    If you use the Spark filter, which is applied when Spark SQL WHERE clauses are in use, also edit the HBase RegionServer configuration:
    1. In Cloudera Manager, select the HBase service.
    2. Click the Configuration tab.
    3. Search for regionserver environment.
    4. Find the RegionServer Environment Advanced Configuration Snippet (Safety Valve).
    5. Click the plus icon to add the following property:

      Key: HBASE_CLASSPATH

      Value:
      /opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/hbase-spark3-protocol-shaded.jar:/opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/scala-library.jar
    6. Ensure that the listed JAR file names carry the version numbers that are actually installed on your cluster; a quick way to check is shown after these steps.
    7. Click Save Changes.
    8. Restart the HBase RegionServer.
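
    One way to confirm the JAR names and versions from step 6, assuming the default parcel location used above, is to list the connector directory on a cluster host:

      ls /opt/cloudera/parcels/CDH/lib/hbase_connectors_for_spark3/lib/
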
Build your Spark application using the dependencies that you provide when you run it. If you followed the previous instructions, Cloudera Manager automatically configures the connector for Spark. If you did not, add the dependencies on the command line when you start the application, as in the following example:
    spark3-shell --conf spark.jars=hdfs:///path/hbase_jars_common/hbase-site.xml.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3-protocol-shaded.jar,hdfs:///path/hbase_jars_spark3/hbase-spark3.jar,hdfs:///path/hbase_jars_common/hbase-shaded-mapreduce-***VERSION NUMBER***.jar,hdfs:///path/hbase_jars_common/opentelemetry-api-***VERSION NUMBER***.jar,hdfs:///path/hbase_jars_common/opentelemetry-context-***VERSION NUMBER***.jar
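
    After the shell starts with the connector on its classpath, you can read an HBase table on the remote cluster as a DataFrame. A minimal sketch in the spark3-shell, assuming a hypothetical table person with column family cf1 (adjust the table name and column mapping to your schema):

      // Hypothetical table, column family, and columns; adjust the mapping to your schema.
      val df = spark.read
        .format("org.apache.hadoop.hbase.spark")
        .option("hbase.columns.mapping", "id STRING :key, name STRING cf1:name")
        .option("hbase.table", "person")
        .option("hbase.spark.use.hbasecontext", false)
        .load()

      // A WHERE/filter clause like this is what exercises the RegionServer-side Spark filter.
      df.filter($"id" > "100").show()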