Configuring HBase-Spark connector using Cloudera Manager when HBase is on a remote cluster
Learn how to configure the HBase-Spark connector when HBase resides on a remote cluster.
Build your Spark application using the dependencies that you provide when you
run the application. If you followed the previous instructions, Cloudera Manager
automatically configures the connector for Spark. If you have not:
- Pass the connector and configuration JARs on the command line when you launch
your Spark application, as shown in the following spark3-shell example:
spark3-shell --conf spark.jars=hdfs:///path/hbase_jars_common/hbase-site.xml.jar,\
hdfs:///path/hbase_jars_spark3/hbase-spark3-protocol-shaded.jar,\
hdfs:///path/hbase_jars_spark3/hbase-spark3.jar,\
hdfs:///path/hbase_jars_common/hbase-shaded-mapreduce-***VERSION NUMBER***.jar,\
hdfs:///path/hbase_jars_common/opentelemetry-api-***VERSION NUMBER***.jar,\
hdfs:///path/hbase_jars_common/opentelemetry-context-***VERSION NUMBER***.jar
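Once spark3-shell starts with these JARs, the connector picks up the remote cluster's connection details (such as the ZooKeeper quorum) from the bundled hbase-site.xml. The following Scala snippet is a minimal sketch of reading a table through the connector's DataSource API inside spark3-shell; the table name person, the column family c, and the column mapping are hypothetical examples rather than values from your deployment:

// The table name "person", the column family "c", and the column mapping
// below are placeholders; replace them with the schema of your remote table.
val df = spark.read
  .format("org.apache.hadoop.hbase.spark")
  .option("hbase.table", "person")
  .option("hbase.columns.mapping",
    "id STRING :key, name STRING c:name, email STRING c:email")
  // Build the connection from the bundled hbase-site.xml instead of a
  // pre-created HBaseContext.
  .option("hbase.spark.use.hbasecontext", false)
  .load()

// Query the remote table through Spark SQL.
df.createOrReplaceTempView("person_view")
spark.sql("SELECT id, name FROM person_view LIMIT 10").show()

If the query returns rows from the remote table, the JARs and the packaged hbase-site.xml are being resolved correctly.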