Known Issues in Apache HBase

This topic describes known issues and workarounds for using HBase in this release of Cloudera Runtime.


HBase Thrift Server fails to start after migrating from HDP 3.1.5 to Cloudera Runtime 7.1.6
If you have migrated from HDP 3.1.5 to Cloudera Runtime 7.1.6, the HBase Thrift Server may fail to start because of incorrectly updated links.

Manually find and update the version number of every instance of hbase-client to [***BUILD NUMBER***]. Identify any other links that point to the pre-upgrade version of hbase-client, and update them to point to [***BUILD NUMBER***]. You can find these links by running the hdp-select | grep hbase command on each HBase node. If ls or hdp-select shows any pre-upgrade versions, you must manually create a link from each pre-upgrade version to the new version. For example, if [***BUILD NUMBER***] is the old version, it should be linked to the new [***BUILD NUMBER***] version.
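The steps above can be sketched as follows. The version strings and paths here are hypothetical placeholders for illustration only; substitute the actual pre-upgrade and post-upgrade build numbers reported on your cluster.

```shell
# Example values only -- replace with your cluster's build numbers.
OLD="3.1.5.0-152"    # hypothetical pre-upgrade build
NEW="7.1.6.0-297"    # hypothetical post-upgrade build

# 1. List the hbase components and the versions they currently point to:
hdp-select | grep hbase

# 2. Inspect the links on disk for any that still reference the old build:
ls -l /usr/hdp/current | grep hbase

# 3. Relink any component that still points at the pre-upgrade version:
ln -sfn "/usr/hdp/${NEW}/hbase" /usr/hdp/current/hbase-client
```

Repeat the check and relink step on every HBase node, then restart the HBase Thrift Server.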
Apache Issue: N/A
Operational Database with SQL template Data Hub cluster fails to initialize if you are reusing a cloud storage location that was used by an older OpDB Data Hub cluster
Stop HBase using Cloudera Manager before deleting an operational database Data Hub cluster.
HDFS encryption with HBase

Cloudera has tested the performance impact of using HDFS encryption with HBase. The overall overhead of HDFS encryption on HBase performance is in the range of 3 to 4% for both read and update workloads. Scan performance has not been thoroughly tested.

AccessController postOperation problems in asynchronous operations
When security and Access Control are enabled, the following problems occur:
  • If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
  • If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn().
  • If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has the full rights.
Apache Issue: HBASE-6992
Bulk load is not supported when the source is the local HDFS
The bulk load feature (the completebulkload command) is not supported when the source is the local HDFS and the target is an object store, such as S3 or ABFS.
Use distcp to move the HFiles from HDFS to S3, and then run the bulk load from S3 to S3.
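A minimal sketch of this workaround, assuming a hypothetical bucket and staging path (adjust the HDFS path, bucket name, and table name to your environment):

```shell
# 1. Copy the generated HFiles from HDFS to the S3 staging location:
hadoop distcp hdfs:///user/hbase/staging/hfiles s3a://my-bucket/staging/hfiles

# 2. Run the bulk load with both source and target on S3:
hbase completebulkload s3a://my-bucket/staging/hfiles my_table
```

Because the HFiles now reside in the same object store as the table's storage, the completebulkload restriction on local-HDFS sources no longer applies.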
Apache Issue: N/A