Apache HBase Known Issues
Except for version-specific sections, these issues affect each release of CDH.
- Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2
- HBase suffers data loss during system recovery when a custom WAL directory is configured
- Potential privilege escalation for user of HBase “Thrift 1” API Server over HTTP CVE-2018-8025
- Known Issues In CDH 5.12.0
- Known Issues In CDH 5.7.0
- Known Issues in CDH 5.6.0
- Known Issues In CDH 5.5.1
- Known Issues In CDH 5.5.0
- Known Issues In CDH 5.4 and Higher
- Known Issues In CDH 5.3 and Higher
- Silent Data Loss in Apache HBase Replication
- HBase in CDH 5 Dependent on Protobuf 2.5
- Some HBase Features Not Supported in CDH 5
- Medium Object Blob (MOB) Data Loss of a Snapshot After MOB Compaction
- Medium Object Blob (MOB) Data Loss After MOB Compaction
- UnknownScannerException Messages After Upgrade
- HBase moves to Protoc 2.5.0
- HBase may not tolerate HDFS root directory changes
- AccessController postOperation problems in asynchronous operations
- Native library not included in tarballs
Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2
CDH 5 and HDP 2 shipped the HBase MOB feature before it was included in an upstream Apache HBase release. When the feature was finalized in the upstream community, a different serialization format was adopted for MOB-related configuration settings. CDH 6, HDP 3, and some versions of CDP use this newer serialization format.
When a CDH 5 or HDP 2 cluster is upgraded to CDH 6, HDP 3, or CDP, this difference in serialization formats causes HBase to misinterpret MOB-related configuration settings. In the most severe case, the HBase cluster treats the MOB feature as disabled. Clients attempting to read values that were stored in the MOB system then get back HBase internals from the MOB system rather than the underlying data. Additionally, snapshot data migrated from CDH 5 and HDP 2 clusters into CDH 6, HDP 3, or CDP clusters may cause MOB data to be unavailable in the same way.
If clients perform read-modify-write workloads without recognizing that the value is not what they expect, the cell value will not be fixed even if the MOB feature is turned back on. For example, the cell-level grant command in the hbase shell reads cells, updates the cell-level ACLs on each, and then writes the updated cells back to HBase; issuing such a grant while HBase believes the MOB feature is off for a column family with MOB data corrupts any granted cells.
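A minimal hbase shell sketch of the cell-level grant pattern described above; the table, column family, user, and filter shown are placeholders rather than values from any affected cluster:
# hbase shell: cell-level grant over cells in column family 'mob_cf' that match the filter
grant 'mob_table', { 'some_user' => 'RW' }, { COLUMNS => [ 'mob_cf' ], FILTER => "(PrefixFilter ('row1'))" }
Because each matched cell is read and rewritten with new ACL metadata, running such a grant while HBase treats the MOB feature as disabled writes the internal MOB reference, rather than the original value, back into the cell. On an affected upgraded cluster, the misinterpretation is visible in the following ways: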
- MOB configs in table descriptions and HBase web UIs show binary strings instead of sensible values.
- Reading MOB cells results in unexpected values that look like an int followed by a MOB filename.
- If the IS_MOB configuration is fixed but MOB_THRESHOLD is not, then regions with such configuration settings will fail to open with a NumberFormatException.
Products affected: CDH, HDP
Releases affected:
- CDH 5.x
- HDP 2.x
Users affected: Those who use the MOB feature in the affected releases and are planning an upgrade to CDH 6, HDP 3, or CDP Private Cloud.
Severity (Low/Medium/High): High
Impact: Potential data corruption risk for upgrading customers.
Immediate action required: As part of upgrading a cluster that makes use of the MOB feature, you must complete the following steps (a minimal hbase shell sketch follows the list):
- Prior to shutting down the cluster for upgrade, disable any tables which use the MOB feature (look for table descriptions that have column families with IS_MOB => true.)
- Follow normal upgrade steps.
- Describe any tables that use the MOB feature and confirm that the MOB configuration settings are no longer displayed correctly (this verifies that the table is affected).
- Alter any tables and set the needed MOB configuration settings anew.
- Describe the tables to confirm the settings show up correctly.
- Enable the tables.
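A minimal hbase shell sketch of the sequence above, assuming a single table named 'mob_table' with a MOB-enabled column family 'mob_cf' and a 100 KB MOB threshold; substitute your own table names and MOB settings:
disable 'mob_table'                # before shutting the cluster down for the upgrade
# ... perform the normal upgrade steps, then on the upgraded cluster:
describe 'mob_table'               # MOB settings display incorrectly (binary strings)
alter 'mob_table', { NAME => 'mob_cf', IS_MOB => true, MOB_THRESHOLD => 102400 }
describe 'mob_table'               # confirm IS_MOB and MOB_THRESHOLD now display correctly
enable 'mob_table'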
- Snapshots
Snapshots created on MOB-enabled tables prior to upgrade, or exported from a CDH 5 or HDP 2 cluster, will also result in an incorrect MOB_THRESHOLD value in the upgraded system. To use such a snapshot on the upgraded cluster (see the command sketch after these steps):
- Export a snapshot to a remote filesystem location.
- Contact Cloudera Support to obtain a tool and instructions to correct the snapshot, prior to importing the snapshot into the CDH 6, HDP 3 or CDP HBase filesystem root directory.
- Use the clone_snapshot or the restore_snapshot command in HBase on a snapshot originally created in CDH 5 or HDP 2.
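A hedged command sketch of the snapshot path, assuming a snapshot named 'mob_snap' and a destination table 'mob_table_restored'; the remote filesystem URL is a placeholder, and the correction tool itself is only available from Cloudera Support:
# export the snapshot to a remote filesystem location
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mob_snap -copy-to hdfs://remote-cluster:8020/hbase
# ... correct the exported snapshot using the tool and instructions from Cloudera Support ...
# then, from the hbase shell on the upgraded cluster:
clone_snapshot 'mob_snap', 'mob_table_restored'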
- Repair
If you have updated a cluster with the MOB feature and data was corrupted prior to correcting the MOB configuration, you can perform a repair using the HFiles still stored in the MOB area of HDFS (a consolidated sketch follows the steps):
- Stop all client access.
- Disable table.
- Move the table's mobdir to a temporary directory somewhere else in HDFS:
hdfs dfs -mv /hbase/mobdir/data/default/<tablename>/<mob region>/ /hbase/.tmp/reload-<table>/
- Enable table.
- Bulk load the temporary mobdir from step 3:
hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /hbase/.tmp/reload-<table>/<mob region> <table_name>
- Verify data.
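A consolidated sketch of the repair steps above, assuming a table named 'mob_table'; <mob region> stands for the MOB region directory name, and all paths should be adjusted to match your cluster layout:
# from the hbase shell, after stopping all client access:
disable 'mob_table'
# move the table's MOB data to a temporary HDFS location (step 3)
hdfs dfs -mv /hbase/mobdir/data/default/mob_table/<mob region>/ /hbase/.tmp/reload-mob_table/
# from the hbase shell:
enable 'mob_table'
# bulk load the moved HFiles back into the table (step 5)
hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles /hbase/.tmp/reload-mob_table/<mob region> mob_table
# verify the data, for example with a bounded scan from the hbase shell:
scan 'mob_table', { LIMIT => 10 }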
- Things to note:
- The bulkload should be quick as it is moving the file from the tmp location into the table data location.
- This process only repairs cells whose updates did not change the timestamp; if the timestamp changed, you need to delete the cell prior to the bulk load.
- As mentioned, bulk loading moves files, so it is suggested that you make additional copies rather than only moving the data to the temporary directory. This requires more space on HDFS during the process.
- Restored data will not be stored in the mobdir again until compaction occurs. You can run a normal compaction or wait until it happens on its own.
- ACLs updates made between upgrade and running the alter for IS_MOB/MOB_THRESHOLD will be lost.
Knowledge article: For the latest update on this issue see the corresponding Knowledge article: TSB 2021-465: Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2
HBase suffers data loss during system recovery when a custom WAL directory is configured
HBASE-20723 covers a critical data loss bug. It is present when an HBase deployment is configured to use a non-default location for storing its write-ahead-log. If hbase.wal.dir is set to a different location than hbase.rootdir, then the recovery process will mistakenly believe there are no edits to replay in the event of process failure of a region server.
Products affected: HBase
Releases affected:
- CDH 5.11.x-5.14.x
- CDH 5.15.0, 5.15.1
- CDH 6.0.0
Users affected: Anyone setting the configuration value hbase.wal.dir to a setting other than the default. For Cloudera Manager users, this would require setting a safety valve for the hbase-site.xml file.
Users with a non-default setting can determine whether they are affected by looking for INFO log messages indicating that edits have been skipped. The following is an example of such a message:
2018-06-12 22:08:40,455 INFO [RS_LOG_REPLAY_OPS-wn2-duohba:16020-0-Writer-1] wal.WALSplitter: This region's directory doesn't exist: hdfs://mycluster/walontest/data/default/tb1/b7fd7db5694eb71190955292b3ff7648. It is very likely that it was already split so it's safe to discard those edits.
Note that the above message is normally harmless, but in this specific edge case the recovery code is looking at the incorrect location to determine region status.
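One way to check for the message on a RegionServer host, assuming logs under /var/log/hbase (the default Cloudera Manager location; adjust the path and filename pattern for your deployment):
grep "It is very likely that it was already split" /var/log/hbase/*REGIONSERVER*.log.out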
Severity (Low/Medium/High): High
Impact: Data loss is unrecoverable once write-ahead-logs have been deleted as part of routine system processes. There is one exception: if the cluster uses data center replication to ship edits to another cluster and that cluster has not experienced similar data loss.
Immediate action required: Upgrade to a CDH version with the fix.
Addressed in release/refresh/patch: CDH 5.15.2 and higher, CDH 5.16.1 and higher; CDH 6.0.1 and higher
Knowledge article: For the latest update on this issue see the corresponding Knowledge article - TSB 2019-320: HBase suffers data loss during system recovery when a custom WAL directory is configured
Potential privilege escalation for user of HBase “Thrift 1” API Server over HTTP CVE-2018-8025
CVE-2018-8025 describes an issue in Apache HBase that affects the optional "Thrift 1" API server when running over HTTP. There is a race-condition that could lead to authenticated sessions being incorrectly applied to users, e.g. one authenticated user would be considered a different user or an unauthenticated user would be treated as an authenticated user.
Products affected: HBase Thrift Server
Releases affected:
- CDH 5.4.x - 5.12.x
- CDH 5.13.0, 5.13.1, 5.13.2, 5.13.3
- CDH 5.14.0, 5.14.2, 5.14.3
- CDH 5.15.0
- CDH 5.14.4
- CDH 5.15.1
Users affected: Users with the HBase Thrift 1 service role installed and configured to work in “thrift over HTTP” mode. For example, those using Hue with HBase impersonation enabled.
Severity: High
Impact: Potential privilege escalation.
CVE: CVE-2018-8025
Immediate action required: Upgrade to a CDH version with the fix, or, disable the HBase Thrift-over-HTTP service. Disabling the HBase Thrift-over-HTTP service will render Hue impersonation inoperable and all HBase access via Hue will be performed using the “hue” user instead of the authenticated user.
Knowledge article: For the latest update on this issue see the corresponding Knowledge article - TSB: 2018-315: Potential privilege escalation for user of HBase “Thrift 1” API Server over HTTP
Known Issues In CDH 5.12.0
IOException from Timeouts
CDH 5.12.0 includes the fix for HBASE-16604, which addresses an issue where the internal scanner, when retrying after an IOException caused by a timeout, could potentially miss data. Java clients were properly updated to account for the new behavior, but Thrift clients will now see exceptions where previously they would have silently received incomplete data.
Workaround: Create a new scanner and retry the operation when encountering this issue.
Known Issues In CDH 5.7.0
Unsupported Features of Apache HBase 1.2
- Although Apache HBase 1.2 allows replication of hbase:meta, this feature is not supported by Cloudera and should not be used on CDH clusters until further notice.
- The FIFO compaction policy has not been thoroughly tested and is not supported in CDH 5.7.0.
- Although Apache HBase 1.2 adds a new permissive mode to allow mixed secure and insecure clients, this feature is not supported by Cloudera and should not be used on CDH clusters until further notice.
The ReplicationCleaner process can abort if its connection to ZooKeeper is inconsistent.
Bug: HBASE-15234
WARN org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner: Aborting ReplicationLogCleaner because Failed to get list of replicators
Unprocessed WALs will accumulate.
Fixed in Versions: CDH 5.3.10, 5.4.11, 5.5.4, 5.6.1, 5.7.1, 5.8.0, 5.9.0 and above.
Workaround: Restart the HMaster occasionally. The ReplicationCleaner will restart if necessary and process the unprocessed WALs.
IntegrationTestReplication fails if replication does not finish before the verify phase begins.
Bug: None.
During IntegrationTestReplication, if the verify phase starts before the replication phase finishes, the test fails because the target cluster does not yet contain all of the data. If the HBase services in the target cluster do not have enough memory, long garbage-collection pauses might occur.
Workaround: Use the -t flag to set the timeout value before starting verification.
Known Issues in CDH 5.6.0
The ReplicationCleaner process can abort if its connection to ZooKeeper is inconsistent.
Bug: HBASE-15234
WARN org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner: Aborting ReplicationLogCleaner because Failed to get list of replicators
Unprocessed WALs will accumulate.
Fixed in Versions: CDH 5.3.10, 5.4.11, 5.5.4, 5.6.1, 5.7.1, 5.8.0, 5.9.0 and above.
Workaround: Restart the HMaster occasionally. The ReplicationCleaner will restart if necessary and process the unprocessed WALs.
ExportSnapshot or DistCp operations may fail on the Amazon s3a:// protocol.
Bug: None.
ExportSnapshot or DistCp operations may fail on AWS when using certain JDK 8 versions, due to an incompatibility between the AWS Java SDK 1.9.x and the joda-time date-parsing module.
Workaround: Use joda-time 2.8.1 or higher, which is included in AWS Java SDK 1.10.1 or higher.
Reverse scans do not work when Bloom blocks or leaf-level index blocks are present.
Bug: HBASE-14283
Because the seekBefore() method calculates the size of the previous data block by assuming that data blocks are contiguous, and HFile v2 and higher store Bloom blocks and leaf-level index blocks inline with the data, reverse scans do not work when Bloom blocks or leaf-level index blocks are present in files using HFile v2 or higher.
Fixed in Versions: CDH 5.3.9, 5.4.9, 5.5.2, 5.6.0, 5.7.0, 5.8.0, 5.9.0 and above.
Workaround: None.
Known Issues In CDH 5.5.1
The ReplicationCleaner process can abort if its connection to ZooKeeper is inconsistent.
Bug: HBASE-15234
WARN org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner: Aborting ReplicationLogCleaner because Failed to get list of replicators
Unprocessed WALs will accumulate.
Fixed in Versions: CDH 5.3.10, 5.4.11, 5.5.4, 5.6.1, 5.7.1, 5.8.0, 5.9.0 and above.
Workaround: Restart the HMaster occasionally. The ReplicationCleaner will restart if necessary and process the unprocessed WALs.
Extra steps must be taken when upgrading from CDH 4.x to CDH 5.5.1.
The fix for TSB 2015-98 disables legacy object serialization. This will cause direct upgrades on HBase clusters from CDH 4.x to CDH 5.5.1 to fail if one of the workarounds below is not used.
Bug: HBASE-14799
Cloudera Bug: CDH-34565
Fixed in Versions: CDH 5.3.9, 5.4.9, 5.5.1, 5.6.0, 5.7.0, 5.8.0, 5.9.0 and above.
Workaround: Use one of the following approaches:
- Upgrade to a CDH 5 version prior to CDH 5.5.1, and then upgrade from that version to CDH 5.5.1, or
- Set the hbase.allow.legacy.object.serialization property to true in the Advanced Configuration Snippet for hbase-site.xml if using Cloudera Manager, or directly in hbase-site.xml on an unmanaged cluster. Upgrade your cluster to CDH 5.5.1. Remove the hbase.allow.legacy.object.serialization property or set it to false after the migration is complete.
Known Issues In CDH 5.5.0
An operating-system level tuning issue in RHEL7 causes significant latency regressions
There are two distinct causes for the regressions, depending on the workload:
- For a cached workload, the regression may be up to 11%, as compared to RHEL6. The cause relates to differences in the CPU's C-state (power saving state) behavior. With the same workload, the CPU is around 40% busier in RHEL7, and the CPU spends more time transitioning between C-states in RHEL7. Transitions out of deeper C-states add latency. When CPUs are configured to never enter a C-state lower than 1, RHEL7 is slightly faster than RHEL6 on the cached workload. The root cause is still under investigation and may be hardware-dependent.
- For an IO-bound workload, the regression may be up to 8%, even with common C-state settings. A 6% difference in average disk service time has been observed, which in turn seems to be caused by a 10% higher average read size at the drive on RHEL7. The read sizes issued by HBase are the same in both cases, so the root cause seems to be a change in the EXT4 filesystem or the Linux block IO layer. The root cause is still under investigation.
Bug: None
Severity: Medium
Workaround: Avoid using RHEL 7 if you have a latency-critical workload. For a cached workload, consider tuning the C-state (power-saving) behavior of your CPUs.
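For reference, a hedged sketch of how C-state behavior can be inspected and capped on RHEL 7; the cpupower utility and the kernel parameters shown are standard Linux mechanisms, not Cloudera-specific settings, and should be validated for your hardware:
# inspect current C-state usage (requires the kernel-tools / cpupower package)
cpupower idle-info
# one common way to keep CPUs at C1 or shallower: add these kernel boot parameters and reboot
#   intel_idle.max_cstate=1 processor.max_cstate=1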
A RegionServer under extreme duress due to back-to-back garbage collection combined with heavy load on HDFS can lock up while attempting to append to the WAL.
2015-11-14 05:54:48,659 WARN org.apache.hadoop.hbase.util.Sleeper: We slept 42911ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.#
2015-11-14 05:54:48,659 WARN org.apache.hadoop.hbase.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 41110ms
2015-11-14 04:58:09,952 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 2734 ms, current pipeline: [DatanodeInfoWithStorage[10.17.198.17:20002,DS-56e2cf88-f267-43a8-b964-b29858#
2015-11-14 04:58:09,952 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 2963 ms, current pipeline: [DatanodeInfoWithStorage[10.17.198.17:20002,DS-56e2cf88-f267-43a8-b964-b29858#
Bug: HBASE-14374
Workaround: Restart the RegionServer. To avoid the problem, adjust garbage-collection settings, give the RegionServer more RAM, and reduce the load on HDFS.
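A hedged example of the kind of tuning meant by adjusting garbage-collection settings and RegionServer memory, applied through hbase-env.sh (or the equivalent Cloudera Manager Java options field); the heap size and flags are illustrative only and must be sized for your hardware:
# example RegionServer JVM options for CDH 5 era JDKs (CMS collector)
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms8g -Xmx8g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"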
Known Issues In CDH 5.4 and Higher
The ReplicationCleaner process can abort if its connection to ZooKeeper is inconsistent.
Bug: HBASE-15234
WARN org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner: Aborting ReplicationLogCleaner because Failed to get list of replicators
Unprocessed WALs will accumulate.
Fixed in Versions: CDH 5.3.10, 5.4.11, 5.5.4, 5.6.1, 5.7.1, 5.8.0, 5.9.0 and above.
Workaround: Restart the HMaster occasionally. The ReplicationCleaner will restart if necessary and process the unprocessed WALs.
Increments and CheckAnd* operations are much slower in CDH 5.4 and higher (since HBase 1.0.0) than in CDH 5.3 and earlier.
This is due to the unification of mvcc and sequenceid done in HBASE-8763.
Bug: HBASE-14460
Workaround: None
Known Issues In CDH 5.3 and Higher
Export to Azure Blob Storage (the wasb:// or wasbs:// protocol) is not supported.
Bug: HADOOP-12717
CDH 5.3 and higher supports Azure Blob Storage for some applications. However, a null pointer exception occurs when you specify a wasb:// or wasbs:// location in the --copy-to option of the ExportSnapshot command or as the output directory (the second positional argument) of the Export command.
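For example, a command of the following form (the snapshot name, container, and storage account are placeholders) fails with a NullPointerException:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot my_snapshot -copy-to wasb://my-container@my-account.blob.core.windows.net/hbase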
Workaround: None.
Silent Data Loss in Apache HBase Replication
In deployments with slow or unreliable network links, HBase’s cross-datacenter replication code may believe it has reached the end of a write-ahead-log prior to successfully parsing the entire file. The remaining content is ignored and never sent to destination clusters. Eventually, the write-ahead-logs are cleaned up as a normal part of HBase operations, preventing later manual recovery.
The issue is fixed by HBASE-15984, which detects when a parsing error has occurred prior to reaching the end of a write-ahead-log file and subsequently retries.
Products affected: HBase and Search
Releases affected: All CDH 5 releases prior to CDH 5.8.0
Users affected: Clusters where HBase replication is enabled.
Severity (Low/Medium/High): High
Impact: The destination cluster will fail to receive all updates, causing inconsistencies between clusters replicating data.
Immediate action required: Customers relying on HBase data center replication should upgrade to CDH 5.8.0 or higher.
Fixed in Versions: CDH 5.8.0 and higher.
HBase in CDH 5 Dependent on Protobuf 2.5
Applications that pull a newer protobuf version (such as protobuf 3.0) onto the HBase client classpath can fail with errors such as:
NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.ByteStringer
Workaround: In CDH 5.8 and higher, use the Apache Maven Shade Plugin to rename protobuf 3.0 packages in the byte code. The Java code looks the same and uses the original package name. However, the byte code contains a different name, so when the HBase client classes load protobuf 2.5, there are no conflicting classes.
Some HBase Features Not Supported in CDH 5
- Visibility labels
- Transparent server-side encryption
- Stripe compaction
- Distributed log replay
Medium Object Blob (MOB) Data Loss of a Snapshot After MOB Compaction
When taking a snapshot of a table with Medium Object Blobs (MOBs) enabled, a race condition can cause data to be lost after a MOB compaction operation completes.
Normally, a table snapshot flushes each region in the table and builds a snapshot manifest with metadata pointing to all the contents of the snapshot. These snapshot manifests are later used to determine which data files can be safely removed after a compaction process. When taking a snapshot, each region's flush can happen in parallel since each region is independent of all the other regions.
When the MOB feature is enabled, there is a separate special MOB region that stores the MOB items. Prior to applying the patch for HBASE-16841, the MOB region was treated as if it were also independent of the other regions. This was not the case: when regions of a MOB-enabled table were flushed, new MOB data may have been written into the special MOB region. Because new MOB data files may have been written out after the MOB region's contents were added to the manifest, some of the MOB data may not have been recorded in the snapshot manifest.
Snapshots on MOB-enabled tables were not susceptible to data loss until a MOB compaction was run. After a MOB compaction, the MOB data files that were not captured by the snapshot manifest might no longer have references, making them eligible for deletion and therefore susceptible to data loss.
This issue is fixed by HBASE-16841.
Releases affected:
CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.7, 5.3.8, 5.3.9, 5.3.10
CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
CDH 5.6.0, 5.6.1
CDH 5.7.0, 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.5
CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3
CDH 5.9.0, 5.9.1
CDH 5.10.0
Users affected: Only HBase users who enable MOB and take snapshots. Snapshots on regular tables are not affected.
Severity (Low/Medium/High): High
Impact: MOB data may not be properly restored from an HBase snapshot.
Immediate action required: Upgrade CDH to version 5.7.6, 5.8.4, 5.9.2, 5.10.1 or 5.11.0 and higher
Addressed in release/refresh/patch: Fixed in CDH 5.7.6, 5.8.4, 5.9.2, 5.10.1 or 5.11.0 and higher
Medium Object Blob (MOB) Data Loss After MOB Compaction
If you enable Medium Object Blobs (MOBs) on a table, data loss can occur after a MOB compaction.
When there are no outstanding scanners for HBase regions, by way of optimization, HBase drops cell sequence IDs during normal region compaction for MOB-enabled tables. If a file with no sequence IDs is compacted with an older file that has overlapping cells, the wrong cells may be returned on subsequent compactions. The result is incorrect MOB file references.
The problem manifests as an inability to Scan or Get values from these overlapping rows, and the following WARN-level messages appear in RegionServer logs:
WARN HStore Fail to read the cell, the mob file <file name> doesn't exist java.io.FileNotFoundException: File does not exist:
This issue is fixed by HBASE-13922.
Releases affected:
CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.7, 5.3.8, 5.3.9, 5.3.10
CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5
CDH 5.6.0, 5.6.1
Users affected: HBase users who enable MOB
Severity (Low/Medium/High): High
Impact: MOB data cannot be retrieved from HBase tables.
Immediate action required: Upgrade CDH to version 5.5.6 or 5.7.0 and higher
UnknownScannerException Messages After Upgrade
org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: 10092964, already closed?
In this upgrade scenario, these messages are caused by restarting the RegionServer during the upgrade. Restart the HBase client to stop seeing the exceptions. The log message has been improved in CDH 5.8.0 and higher.
HBase moves to Protoc 2.5.0
This change may cause JAR conflicts with applications that have older versions of protobuf in their Java classpath.
Bug: None
Workaround: Update applications to use Protoc 2.5.0.
HBase may not tolerate HDFS root directory changes
While HBase is running, do not stop the HDFS instance running under it and restart it again with a different root directory for HBase.
Bug: None
Cloudera Bug: CDH-5697
Workaround: None
AccessController postOperation problems in asynchronous operations
When security and Access Control are enabled, the following problems occur:
- If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
- If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn().
- If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has the full rights.
Bug: HBASE-6992
Cloudera Bug: CDH-8566
Workaround: None
Native library not included in tarballs
The native library that enables RegionServer page pinning on Linux is not included in tarballs. This could impair performance if you install HBase from tarballs.
Bug: None
Cloudera Bug: CDH-7304c
Workaround: Use parcels.