Apache HBase Known Issues
— MapReduce over HBase Snapshot bypasses HBase-level security
MapReduce over HBase snapshots bypasses HBase-level security completely, because the files are read directly from HDFS. The user running the scan or job must have read permission on the data and snapshot files.
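For context, below is a minimal sketch of a job that reads a snapshot through TableMapReduceUtil.initTableSnapshotMapperJob. The snapshot name, restore directory, and mapper are hypothetical placeholders. Because the input format reads the snapshot's HFiles straight from HDFS, the submitting user needs HDFS-level read access to those files; HBase grants are never consulted:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class SnapshotScanJob {
      // Identity mapper; a real job would process each row here.
      static class SnapshotMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context ctx)
            throws IOException, InterruptedException {
          ctx.write(row, value);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "scan-over-snapshot");
        job.setJarByClass(SnapshotScanJob.class);
        // "mySnapshot" and the restore directory are placeholders. The HFiles
        // are read directly from HDFS, so HBase-level ACLs are bypassed.
        TableMapReduceUtil.initTableSnapshotMapperJob(
            "mySnapshot",                  // snapshot to read
            new Scan(),                    // scan describing what to read
            SnapshotMapper.class,          // mapper class
            ImmutableBytesWritable.class,  // mapper output key
            Result.class,                  // mapper output value
            job,
            true,                          // ship dependency JARs with the job
            new Path("/tmp/snapshot-restore")); // scratch dir the user must own
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }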
Bug: HBASE-8369
Severity: Medium
Workaround: MapReduce users must be trusted to process/view all data in HBase.
— HBase snapshots now saved to the /<hbase>/.hbase-snapshot directory
HBase snapshots are now saved to the /<hbase>/.hbase-snapshot directory instead of the /.snapshot directory. This change resolves a naming conflict introduced by the HDFS snapshot feature in Hadoop 2.2/CDH 5 HDFS, which reserves the .snapshot path name.
Bug: HBASE-8352
Severity: High
Workaround: This should be handled in the upgrade process.
— HBase moves to protobuf 2.5.0
This change may cause JAR conflicts with applications that have older versions of protobuf in their Java classpath.
Bug: None
Severity: Medium
Workaround: Update applications to use protobuf 2.5.0. Work on a longer-term solution is in progress.
— Write performance may be a little slower in CDH 5 than in CDH 4
— Must explicitly add permissions for owner users before upgrading from CDH 4.1.x
In CDH 4.1.x, an HBase table could have an owner. The owner user had full administrative permissions on the table (RWXCA). These permissions were implicit (that is, they were not stored explicitly in the HBase acl table), but the code checked them when determining if a user could perform an operation.
The owner construct was removed as of CDH 4.2.0, and the code now relies exclusively on entries in the acl table. Since table owners do not have an entry in this table, their permissions are removed on upgrade from CDH 4.1.x to CDH 4.2.0 or later.
Bug: None
Severity: Medium
Anticipated Resolution: None; use workaround
Workaround: Before upgrading, explicitly grant each table owner full permissions, for example with a script along these lines (tables, protocol, and LOG come from the surrounding script context):

    PERMISSIONS = 'RWXCA'
    tables.each do |t|
      table_name = t.getNameAsString
      owner = t.getOwnerString
      LOG.warn("Granting " + owner + " with " + PERMISSIONS + " for table " + table_name)
      # Grant the owner full rights on the whole table (no family/qualifier scope).
      user_permission = UserPermission.new(owner.to_java_bytes, table_name.to_java_bytes,
          nil, nil, PERMISSIONS.to_java_bytes)
      protocol.grant(user_permission)
    end
— Change in default splitting policy from ConstantSizeRegionSplitPolicy to IncreasingToUpperBoundRegionSplitPolicy may create too many splits
This affects you only if you are upgrading from CDH 4.1 or earlier.
The split size is the number of regions of the same table on the current region server, squared, times the region flush size, or the maximum region split size, whichever is smaller. For example, if the flush size is 128MB, the first flush triggers a split, producing two regions that will each split when they reach 2 * 2 * 128MB = 512MB. When one of those splits, there are three regions, and the split size becomes 3 * 3 * 128MB = 1152MB, and so on, until the configured maximum file size is reached; from then on, the maximum is used.
This new default policy could create many splits if you have many tables in your cluster.
The default memstore flush size has also changed, from 64MB to 128MB, and the eventual region split size, hbase.hregion.max.filesize, is now 10GB (it was 1GB).
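To make the arithmetic concrete, the following sketch reproduces the sizing rule (the helper name effectiveSplitSize is hypothetical; the constants are the defaults described above):

    // Sketch of the IncreasingToUpperBoundRegionSplitPolicy sizing rule:
    // a region splits at min(regionCount^2 * flushSize, maxFileSize), where
    // regionCount is the number of regions of the same table on that server.
    public class SplitSizeDemo {
      static long effectiveSplitSize(long regionCount, long flushSize, long maxFileSize) {
        return Math.min(regionCount * regionCount * flushSize, maxFileSize);
      }

      public static void main(String[] args) {
        long mb = 1024L * 1024;
        long flushSize = 128 * mb;         // hbase.hregion.memstore.flush.size default
        long maxFileSize = 10 * 1024 * mb; // hbase.hregion.max.filesize default (10GB)
        for (long regions = 1; regions <= 10; regions++) {
          // Prints 128MB, 512MB, 1152MB, ... capped at 10240MB.
          System.out.printf("%d region(s): split at %dMB%n",
              regions, effectiveSplitSize(regions, flushSize, maxFileSize) / mb);
        }
      }
    }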
Bug: None
Severity: Medium
Anticipated Resolution: None; use workaround
Workaround: If you find you are getting too many splits, either revert to the old split policy or increase hbase.hregion.memstore.flush.size (see the sketch below).
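As an illustration of the first option, the old policy can be pinned per table through the HTableDescriptor (a sketch; 'mytable' is a placeholder, and the descriptor still has to be applied with HBaseAdmin.modifyTable or at table-creation time). Cluster-wide, the same effect comes from setting hbase.regionserver.region.split.policy in hbase-site.xml:

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy;

    public class RestoreOldSplitPolicy {
      public static void main(String[] args) {
        // Pin a single (hypothetical) table back to the pre-CDH 5 policy.
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("mytable"));
        htd.setValue(HTableDescriptor.SPLIT_POLICY,
            ConstantSizeRegionSplitPolicy.class.getName());
      }
    }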
— In a non-secure cluster, MapReduce over HBase does not properly handle splits in the BulkLoad case
You may see errors (see the sketch after this list) because of:
- missing permissions on the directory that contains the files to bulk load
- missing ACL rights for the table/families
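For context, here is a minimal sketch of the bulk-load step where these errors surface ('/tmp/bulkload-output' and 'mytable' are placeholders). The RegionServer must be able to read and move the HFiles, so ownership and permissions on that directory matter, and with ACLs enabled the loading user also needs rights on the table and families:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Directory of HFiles produced by HFileOutputFormat; permissions on
        // this path are a common source of the errors listed above.
        Path hfileDir = new Path("/tmp/bulkload-output"); // placeholder
        HTable table = new HTable(conf, "mytable");       // placeholder table
        try {
          new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, table);
        } finally {
          table.close();
        }
      }
    }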
Bug: None
Severity: Medium
— Pluggable compaction and scan policies via coprocessors (HBASE-6427) not supported
Cloudera does not provide support for user-provided custom coprocessors.
Bug: HBASE-6427
Severity: Low
Workaround: None
— Custom constraints coprocessors (HBASE-4605) not supported
The constraints coprocessor feature provides a framework for constraints and requires you to add your own custom code. Cloudera does not support user-provided custom code, and hence does not support this feature.
Bug: HBASE-4605
Severity: Low
Workaround: None
— Pluggable split key policy (HBASE-5304) not supported
Cloudera supports the two split policies that are supplied and tested: ConstantSizeRegionSplitPolicy and KeyPrefixRegionSplitPolicy. The code also provides a mechanism for custom policies, specified by adding a class name to the HTableDescriptor. Custom code added via this mechanism must be provided by the user. Cloudera does not support user-provided custom code, and hence does not support this feature.
Bug: HBASE-5304
Severity: Low
Workaround: None
— HBase may not tolerate HDFS root directory changes
While HBase is running, do not stop the underlying HDFS instance and then restart it with a different root directory for HBase.
Bug: None
Severity: Medium
Workaround: None
— AccessController postOperation problems in asynchronous operations
When security and Access Control are enabled, the following problems occur:
- If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
- If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn().
- If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has the full rights.
Bug: HBASE-6992
Severity: Medium
Workaround: None
— Native library not included in tarballs
The native library that enables Region Server page pinning on Linux is not included in tarballs. This could impair performance if you install HBase from tarballs.
Bug: None
Severity: Low
Workaround: None