Known Issues in Apache HBase
This topic describes known issues and workarounds for using HBase in this release of Cloudera Runtime.
- OpDB Data Hub cluster fails to initialize if you are reusing a cloud storage location that was used by an older OpDB Data Hub cluster
- Workaround: Stop HBase using Cloudera Manager before deleting an operational database Data Hub cluster. A sketch of issuing the stop through the Cloudera Manager REST API follows.
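The stop can also be issued programmatically through the Cloudera Manager REST API rather than the UI. The following is a minimal sketch, not a documented procedure: the host, port, credentials, API version (v41 here), cluster name, and service name are all placeholders, and the request follows the standard /clusters/{cluster}/services/{service}/commands/stop endpoint pattern of the Cloudera Manager API.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class StopHBaseService {
  public static void main(String[] args) throws Exception {
    // Placeholder host, credentials, cluster and service names; the API
    // version segment depends on your Cloudera Manager release.
    String cm = "https://cm-host.example.com:7183";
    String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

    // POST .../commands/stop issues the Cloudera Manager service stop
    // command, equivalent to choosing Stop for the HBase service in the UI.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(cm + "/api/v41/clusters/MyCluster/services/hbase/commands/stop"))
        .header("Authorization", "Basic " + auth)
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```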
- IntegrationTestReplication fails if replication does not finish before the verify phase begins
- During IntegrationTestReplication, if the verify phase starts before the replication phase finishes, the test fails because the target cluster does not contain all of the data. If the HBase services in the target cluster do not have enough memory, long garbage-collection pauses might occur.
- HDFS encryption with HBase
- Cloudera has tested the performance impact of using HDFS encryption with HBase. The overall overhead of HDFS encryption on HBase performance is in the range of 3 to 4% for both read and update workloads. Scan performance has not been thoroughly tested.
- AccessController postOperation problems in asynchronous operations
- When security and Access Control are enabled, the following problems occur:
  - If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
  - If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn().
  - If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has full rights.
  Stale entries left behind in these cases can be inspected and revoked explicitly; see the sketch after this list.
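As a minimal sketch of cleaning up such leftover entries, the HBase client ships an AccessControlClient utility that can list and revoke ACL entries directly. The table name "mytable" and user "alice" below are placeholders, not names from this document:
```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.security.access.UserPermission;

public class StaleAclCleanup {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // List ACL entries recorded for the table, which may no longer exist
      // if a Create Table or Delete Table operation failed part-way.
      List<UserPermission> perms =
          AccessControlClient.getUserPermissions(connection, "mytable");
      perms.forEach(p -> System.out.println("ACL entry: " + p));

      // Revoke the leftover rights explicitly; null family and qualifier
      // mean the revocation applies to the table as a whole.
      AccessControlClient.revoke(connection, TableName.valueOf("mytable"),
          "alice", null, null, Permission.Action.values());
    }
  }
}
```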
- Bulk load is not supported when the source is the local HDFS
- The bulk load feature (the completebulkload command) is not supported when the source is the local HDFS and the target is an object store, such as S3/ABFS. See the sketch below for one possible staging approach.
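One possible approach, an assumption rather than a documented workaround, is to first copy the generated HFiles onto the same object store as the target table (for example with DistCp) so that source and target are on the same filesystem, and then drive the load through the HBase client's BulkLoadHFiles tool. The bucket and table names below are placeholders:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.tool.BulkLoadHFiles;

public class BulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Per the known issue above, loading from local HDFS into an object
    // store is not supported, so the HFiles are assumed to have been
    // staged on the target store already (for example with DistCp).
    Path hfileDir = new Path("s3a://bucket/staging/mytable"); // placeholder URI
    BulkLoadHFiles.create(conf).bulkLoad(TableName.valueOf("mytable"), hfileDir);
  }
}
```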
Technical Service Bulletins
- TSB 2023-667: HBase snapshot export failure can lead to data loss
- When using Replication Manager for Apache HBase (HBase) snapshot replication, data loss occurs if both of the following conditions are met: (i) the external account used for the operation has delete access to the target storage location, and (ii) the snapshot export fails. If both conditions are met, the cleanup operation that is automatically performed after the failure deletes all data in the root folder of the snapshot, not only the snapshot files. If the user account does not have the delete permission on the target folder, the data remains unaffected.
- Knowledge article
- For the latest update on this issue, see the corresponding Knowledge article: TSB 2023-667: HBase snapshot export failure can lead to data loss.