Known Issues in HBase
This topic describes known issues and workarounds for using HBase in Cloudera Runtime 7.1.9 SP1.
- CDPD-60862: Rolling restart fails during ZDU when DDL operations are in progress
-
During a Zero Downtime Upgrade (ZDU), the rolling restart of services that support Data Definition Language (DDL) statements might fail if DDL operations are in progress during the upgrade. To avoid this, ensure that you do not run DDL statements during ZDU.
The following services support DDL statements:
- Impala
- Hive – using HiveQL
- Spark – using SparkSQL
- HBase
- Phoenix
- Kafka
Data Manipulation Language (DML) statements are not impacted and can be used during ZDU. Following the successful upgrade, you can resume running DDL statements.
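For HBase specifically, the DDL/DML distinction can be illustrated with the public Java client API. The following is a minimal sketch; the table, column family, and row names are illustrative and not part of this documentation. The Admin call is a DDL operation and should be deferred until the ZDU completes, while the Put is a DML operation and can continue during the upgrade.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class ZduDdlVsDml {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      TableName table = TableName.valueOf("example_table"); // illustrative name

      // DDL: schema changes such as createTable go through HBase Master procedures.
      // Avoid calls like this while a ZDU rolling restart is in progress.
      try (Admin admin = conn.getAdmin()) {
        if (!admin.tableExists(table)) {
          admin.createTable(TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
              .build());
        }
      }

      // DML: data reads and writes are not impacted and can continue during ZDU.
      try (Table t = conn.getTable(table)) {
        t.put(new Put(Bytes.toBytes("row1"))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value")));
      }
    }
  }
}
```

Equivalent DDL in the other listed services (for example, CREATE TABLE statements in Impala, HiveQL, or SparkSQL) should likewise be deferred until the upgrade completes.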
- IntegrationTestReplication fails if replication does not finish before the verify phase begins
-
During IntegrationTestReplication, if the verify phase starts before the replication phase finishes, the test fails because the target cluster does not contain all of the data. If the HBase services in the target cluster do not have enough memory, long garbage-collection pauses might occur.
- HDFS encryption with HBase
-
Cloudera has tested the performance impact of using HDFS encryption with HBase. The overall overhead of HDFS encryption on HBase performance is in the range of 3 to 4% for both read and update workloads. Scan performance has not been thoroughly tested.
- AccessController postOperation problems in asynchronous operations
-
When security and Access Control are enabled, the following problems occur:
- If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
- If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn(). (See the sketch after this list.)
- If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has the full rights.
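To illustrate the modifyTable() case in the second bullet, the following is a minimal sketch using the standard HBase Java client and the AccessControlClient helper; the table and column family names are illustrative and not part of this documentation. It removes a column family through a table modification and then lists the remaining permissions on the table, so that any stale grants scoped to the dropped family can be reviewed and revoked manually.

```java
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.UserPermission;
import org.apache.hadoop.hbase.util.Bytes;

public class DropFamilyAclCheck {
  public static void main(String[] args) throws Throwable {
    TableName table = TableName.valueOf("example_table"); // illustrative name
    byte[] family = Bytes.toBytes("cf_to_drop");           // illustrative name

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {

      // Remove a column family by modifying the table descriptor.
      // Because of this known issue, ACL entries scoped to the dropped
      // family are not cleaned up automatically.
      TableDescriptor current = admin.getDescriptor(table);
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          .removeColumnFamily(family)
          .build();
      admin.modifyTable(updated);

      // Review the remaining permissions on the table so that stale,
      // family-scoped grants can be revoked manually if needed.
      List<UserPermission> perms =
          AccessControlClient.getUserPermissions(conn, table.getNameAsString());
      perms.forEach(p -> System.out.println(p));
    }
  }
}
```

Because the ACL cleanup hook is implemented only for postDeleteColumn(), removing a family through Admin.deleteColumnFamily() rather than modifyTable() may be preferable where possible.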
- Snappy compression with /tmp directory mounted with noexec option
-
Using HBase client applications, such as hbase hfile, on a cluster with Snappy compression could result in an UnsatisfiedLinkError.
- HBase shutdown can lead to inconsistencies in META
-
Cloudera Manager uses an incorrect shutdown command. This prevents graceful shutdown of the HBase service and forces Cloudera Manager to kill the processes instead, which can lead to inconsistencies in META.
- The custom JWT Principal claim is not usable for HBase
-
The hbase.security.oauth.jwt.token.principal.claim configuration property does not exist in the 7.1.9 SP1 version. The purpose of this property is to allow the use of a Subject or Principal claim different from the default sub claim.
- HBase procedures might hang because WriteAheadLog for ProcedureStore does not progress
-
HBase Master might hang and cause longer recovery when the WriteAheadLog is stuck while persisting to the ProcedureStore. HBase Master eventually aborts itself; however, the operations that rely on the ProcedureStore (for example, region movements and table creation) do not progress during this period.