Learn about the known issues in HBase, the impact or changes to the functionality, and their workarounds.
Known issues identified in Cloudera Runtime 7.3.1.500 SP3
There are no new known issues identified in this release.
Known issues identified in Cloudera Runtime 7.3.1.400 SP2
There are no new known issues identified in this release.
Known issues identified in Cloudera Runtime 7.3.1.300 SP1 CHF1
There are no new known issues identified in this release.
Known issues identified in Cloudera Runtime 7.3.1.200 SP1
There are no new known issues identified in this release.
Known Issues in Cloudera Runtime 7.3.1.100 CHF 1
There are no new known issues identified in this release.
Known Issues in Cloudera Runtime 7.3.1
CDPD-60862: Rolling restart fails during ZDU when DDL operations
are in progress
7.3.1 and its higher versions
During a Zero Downtime Upgrade (ZDU), the rolling restart of services that support
Data Definition Language (DDL) statements might fail if DDL operations are in progress
during the upgrade. Therefore, ensure that you do not run DDL statements during ZDU.
The following services support DDL statements:
Impala
Hive – using HiveQL
Spark – using SparkSQL
HBase
Phoenix
Kafka
Data Manipulation Language (DML) statements are not impacted and can be used during
ZDU. Following the successful upgrade, you can resume running DDL statements.
Cloudera recommends modifying applications so that they do not use DDL statements
for the duration of the upgrade. If the upgrade is already in progress and
you have experienced a service failure, you can remove the in-flight DDL operations and resume the
upgrade from the point of failure.
OpDB Data Hub cluster fails to initialize if you are reusing a
cloud storage location that was used by an older OpDB Data Hub cluster
7.3.1 and its higher versions
Stop HBase using Cloudera Manager before deleting an OpDB
Data Hub cluster.
IntegrationTestReplication fails if replication
does not finish before the verify phase begins
7.3.1 and its higher versions
During IntegrationTestReplication, if the verify
phase starts before the replication phase finishes, the test fails
because the target cluster does not yet contain all of the data. If the HBase
services in the target cluster do not have enough memory, long garbage-collection
pauses might occur.
Use the -t flag to set the timeout value
before starting verification.
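The workaround might look like the following invocation; the test class ships with the HBase integration-test (hbase-it) artifacts, and the timeout value and any cluster arguments shown here are placeholders, not prescribed values:

```shell
# Sketch: run the replication integration test with an explicit timeout
# via the -t flag mentioned above, so verification does not start before
# replication has had time to finish. The value 900 is an example only.
hbase org.apache.hadoop.hbase.test.IntegrationTestReplication -t 900
```

Consult the test's usage output (run it with no arguments) for the exact set of options supported by your HBase version.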
HDFS encryption with HBase
7.3.1 and its higher versions
Cloudera has tested the performance impact of using HDFS encryption with HBase. The
overall overhead of HDFS encryption on HBase performance is in the range of 3 to 4%
for both read and update workloads. Scan performance has not been thoroughly tested.
N/A
Snappy compression with /tmp directory mounted with noexec
option
7.3.1 and its higher versions
Using HBase client applications, such as hbase hfile, on a
cluster with Snappy compression could result in an
UnsatisfiedLinkError.
Add
-Dorg.xerial.snappy.tempdir=/var/hbase/snappy-tempdir to
Client Java Configuration Options in Cloudera Manager, where the value
points to a directory on a filesystem mounted with the exec option allowed.
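A possible way to prepare the directory and test the option from the command line is sketched below; the directory path matches the one above, but the permissions and the HBASE_OPTS approach are illustrative assumptions, not Cloudera-mandated steps:

```shell
# Create a temp directory on a filesystem that is NOT mounted with noexec,
# so snappy-java can extract and load its native library there.
sudo mkdir -p /var/hbase/snappy-tempdir
sudo chmod 1777 /var/hbase/snappy-tempdir

# For a quick ad-hoc check, the same system property can be passed to a
# client invocation through HBASE_OPTS before applying it cluster-wide
# in Cloudera Manager's "Client Java Configuration Options":
HBASE_OPTS="-Dorg.xerial.snappy.tempdir=/var/hbase/snappy-tempdir" hbase hfile
```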
AccessController postOperation problems in asynchronous
operations
7.3.1 and its higher versions
When security and Access Control are enabled, the following problems occur:
If a Delete Table fails for a reason other than missing
permissions, the access rights are removed but the table may still exist and may
be used again.
If hbaseAdmin.modifyTable() is used to delete column families,
the rights are not removed from the Access Control List (ACL) table. The
postOperation is implemented only for
postDeleteColumn().
If Create Table fails, full rights for that table persist for
the user who attempted to create it. If another user later succeeds in creating
the table, the user who made the failed attempt still has the full rights.
HBase shutdown can lead to inconsistencies in META
7.3.1 and its higher versions
Cloudera Manager uses an incorrect shutdown command. This prevents graceful shutdown
of the HBase service and forces Cloudera Manager to kill the processes instead. It can
lead to inconsistencies in Meta.
Run the following command instead of shutting down the
HBase service using Cloudera
Manager.
hbase master stop --shutDownCluster
The
command output must end with the phrase Closing master protocol: MasterService.
You can verify the command execution by checking the master logs; the log must
contain Cluster shutdown requested of master=xxx and the closing of
regions. Upon successful execution, the RegionServers start shutting down.
If you find any inconsistencies, please contact
Cloudera Support.
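The shutdown and verification steps above can be sketched as follows; the log file path is an example and depends on your deployment:

```shell
# Stop the whole cluster gracefully (run on the active HBase Master host)
# instead of stopping the HBase service from Cloudera Manager.
hbase master stop --shutDownCluster

# Verify in the master log that the shutdown was requested cleanly
# (adjust the log path to your installation).
grep "Cluster shutdown requested" /var/log/hbase/*master*.log
```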
Bulk load is not supported when the source is the local
HDFS
7.3.1 and its higher versions
The bulk load feature (the completebulkload
command) is not supported when the source is the local HDFS and the target is
an object store, such as S3/ABFS.
Use distcp to move the HFiles from HDFS to S3 and then run
bulk load from S3 to S3.
Apache Issue: N/A
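The workaround might look like the following two commands; the HDFS path, bucket name, and table name are placeholders:

```shell
# Step 1: copy the HFiles from the local HDFS to S3 with distcp.
hadoop distcp hdfs:///user/hbase/hfiles s3a://my-bucket/hfiles

# Step 2: run the bulk load with the S3 location as the source,
# so both source and target are on the object store.
hbase completebulkload s3a://my-bucket/hfiles my_table
```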
Storing Medium Objects (MOBs) in HBase is currently not
supported
7.3.1 and its higher versions
Storing MOBs in HBase relies on bulk loading files, and this is
not currently supported when HBase is configured to use cloud storage (S3).
N/A
Apache Issue: N/A
CDPD-77399: HBase fails to register the servlet metrics and
throws ClassNotFoundException: org.apache.hadoop.metrics.MetricsServlet
7.3.1, 7.3.1.100 CHF 1, 7.3.1.200 SP1
The MetricsServlet class is a Hadoop 2-based metrics servlet
that is unavailable in Hadoop 3 deployments.
Ignore this WARN log message during HBase Master and
RegionServer startup.
HBase throws a NullPointerException on MemstoreFlusher when
flush is triggered by too many WALs after a WAL rolling
7.3.1 and its higher versions
When rolling the current WAL, if HBase reaches
hbase.regionserver.maxlogs, a memstore flush is triggered, and
HBase uses SequenceIdAccounting.findLower to retrieve the regions and stores with the
lowest sequence IDs in the WAL. The problem occurs when there is a flush entry in the WAL:
SequenceIdAccounting.findLower does not filter the entries, and HBase returns a
METAFAMILY column family, which later cannot be resolved by HRegion.getSpecificStores,
resulting in a NullPointerException.
Increase the maximum number of WAL files, or
perform periodic manual flushes so that the flush is not triggered by the
Too many WALs condition.
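The two workarounds above might be applied as follows; the maxlogs value and table name are illustrative placeholders:

```shell
# Option 1: raise the WAL cap by setting hbase.regionserver.maxlogs
# (in Cloudera Manager, via the RegionServer safety valve for hbase-site.xml):
#   <property>
#     <name>hbase.regionserver.maxlogs</name>
#     <value>64</value>
#   </property>

# Option 2: periodically flush busy tables from the HBase shell so that
# flushes are not triggered by the "Too many WALs" condition.
echo "flush 'my_table'" | hbase shell -n
```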