public class HRegion extends java.lang.Object implements PropagatingConfigurationObserver
A Region is defined by its table and its key extent.
Locking at the Region level serves only one purpose: preventing the region from being closed (and consequently split) while other operations are ongoing. Each row-level operation obtains both a row lock and a region read lock for the duration of the operation. While a scanner is being constructed, getScanner holds a read lock. If the scanner is successfully constructed, it holds a read lock until it is closed. A close takes out a write lock and consequently will block until ongoing operations complete, and will block new operations from starting while the close is in progress.
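The region-level locking described above is the classic read-write lock pattern: operations share, close excludes. The following is a minimal self-contained sketch of that model using `java.util.concurrent`; the class and method names (`RegionLockSketch`, `startOperation`, etc.) are illustrative analogues of `startRegionOperation`/`closeRegionOperation`/`close`, not HBase API.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the locking model: row operations and scanners take the region
// lock in read (shared) mode so they can run concurrently; close() takes it
// in write (exclusive) mode, so it waits for in-flight operations to drain
// and blocks new ones from starting.
public class RegionLockSketch {
    private final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();
    private volatile boolean closed = false;

    // Analogue of startRegionOperation(): many callers may hold this at once.
    public void startOperation() {
        regionLock.readLock().lock();
        if (closed) {
            regionLock.readLock().unlock();
            throw new IllegalStateException("region is closed");
        }
    }

    // Analogue of closeRegionOperation().
    public void endOperation() {
        regionLock.readLock().unlock();
    }

    // Analogue of close(): exclusive, so it cannot proceed while any
    // operation still holds the read lock.
    public void close() {
        regionLock.writeLock().lock();
        try {
            closed = true;
        } finally {
            regionLock.writeLock().unlock();
        }
    }

    public boolean isClosed() {
        return closed;
    }
}
```

Once `close()` has run, any later `startOperation()` fails fast instead of operating on a closed region.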
Modifier and Type | Class and Description |
---|---|
static interface |
HRegion.BulkLoadListener
Listener class to enable callers of
bulkLoadHFile() to perform any necessary
pre/post processing of a given bulkload call
|
static interface |
HRegion.FlushResult |
static class |
HRegion.FlushResultImpl
Objects of this class are created when flushing to describe the different states that
method can end up in.
|
static class |
HRegion.RowLockImpl
Class used to represent a lock on a row.
|
Modifier and Type | Field and Description |
---|---|
protected Configuration |
conf |
static long |
DEEP_OVERHEAD |
static int |
DEFAULT_CACHE_FLUSH_INTERVAL
Default interval for the memstore flush
|
static long |
DEFAULT_FLUSH_PER_CHANGES |
static int |
DEFAULT_HBASE_REGIONSERVER_MINIBATCH_SIZE |
static int |
DEFAULT_MAX_CELL_SIZE |
static long |
FIXED_OVERHEAD |
static java.lang.String |
HBASE_MAX_CELL_SIZE_KEY |
static java.lang.String |
HBASE_REGIONSERVER_MINIBATCH_SIZE |
protected long |
lastReplayedCompactionSeqId |
protected long |
lastReplayedOpenRegionSeqId
The sequence id of the last replayed open region event from the primary region.
|
static java.lang.String |
LOAD_CFS_ON_DEMAND_CONFIG_KEY |
static long |
MAX_FLUSH_PER_CHANGES
MAX_FLUSH_PER_CHANGES is large enough because each KeyValue carries 20+ bytes of
overhead.
|
static java.lang.String |
MEMSTORE_FLUSH_PER_CHANGES
Conf key to force a flush if there are already enough changes for one region in memstore
|
static java.lang.String |
MEMSTORE_PERIODIC_FLUSH_INTERVAL
Conf key for the periodic flush interval
|
protected java.util.Map<byte[],HStore> |
stores |
static int |
SYSTEM_CACHE_FLUSH_INTERVAL
Default interval for System tables memstore flush
|
Constructor and Description |
---|
HRegion(HRegionFileSystem fs,
WAL wal,
Configuration confParam,
TableDescriptor htd,
RegionServerServices rsServices)
HRegion constructor.
|
HRegion(Path tableDir,
WAL wal,
FileSystem fs,
Configuration confParam,
RegionInfo regionInfo,
TableDescriptor htd,
RegionServerServices rsServices)
Deprecated.
Use other constructors.
|
Modifier and Type | Method and Description |
---|---|
void |
addRegionToSnapshot(SnapshotDescription desc,
ForeignExceptionSnare exnSnare)
Complete taking the snapshot on the region.
|
Result |
append(Append append) |
Result |
append(Append mutation,
long nonceGroup,
long nonce) |
boolean |
areWritesEnabled() |
OperationStatus[] |
batchMutate(Mutation[] mutations) |
OperationStatus[] |
batchMutate(Mutation[] mutations,
boolean atomic,
long nonceGroup,
long nonce) |
OperationStatus[] |
batchMutate(Mutation[] mutations,
long nonceGroup,
long nonce) |
OperationStatus[] |
batchReplay(MutationReplay[] mutations,
long replaySeqId) |
void |
blockUpdates() |
java.util.Map<byte[],java.util.List<Path>> |
bulkLoadHFiles(java.util.Collection<<any>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener)
Attempts to atomically load a group of hfiles.
|
java.util.Map<byte[],java.util.List<Path>> |
bulkLoadHFiles(java.util.Collection<<any>> familyPaths,
boolean assignSeqId,
HRegion.BulkLoadListener bulkLoadListener,
boolean copyFile)
Attempts to atomically load a group of hfiles.
|
boolean |
checkAndMutate(byte[] row,
byte[] family,
byte[] qualifier,
CompareOperator op,
ByteArrayComparable comparator,
TimeRange timeRange,
Mutation mutation) |
boolean |
checkAndRowMutate(byte[] row,
byte[] family,
byte[] qualifier,
CompareOperator op,
ByteArrayComparable comparator,
TimeRange timeRange,
RowMutations rm) |
void |
checkFamilies(java.util.Collection<byte[]> families)
Check the collection of families for validity.
|
protected void |
checkReadOnly() |
protected void |
checkReadsEnabled() |
byte[] |
checkSplit()
Return the splitpoint.
|
void |
checkTimestamps(java.util.Map<byte[],java.util.List<Cell>> familyMap,
long now)
Check the collection of families for valid timestamps
|
java.util.Map<byte[],java.util.List<HStoreFile>> |
close()
Close down this HRegion.
|
java.util.Map<byte[],java.util.List<HStoreFile>> |
close(boolean abort)
Close down this HRegion.
|
void |
closeRegionOperation() |
void |
closeRegionOperation(Operation operation) |
void |
compact(boolean majorCompaction)
Synchronously compact all stores in the region.
|
boolean |
compact(CompactionContext compaction,
HStore store,
ThroughputController throughputController)
Called by compaction thread and after region is opened to compact the
HStores if necessary.
|
boolean |
compact(CompactionContext compaction,
HStore store,
ThroughputController throughputController,
User user) |
void |
compactStores()
This is a helper function that compacts all the stores synchronously.
|
static HDFSBlocksDistribution |
computeHDFSBlocksDistribution(Configuration conf,
TableDescriptor tableDescriptor,
RegionInfo regionInfo)
This is a helper function to compute HDFS block distribution on demand
|
static HDFSBlocksDistribution |
computeHDFSBlocksDistribution(Configuration conf,
TableDescriptor tableDescriptor,
RegionInfo regionInfo,
Path tablePath)
This is a helper function to compute HDFS block distribution on demand
|
static HRegion |
createHRegion(RegionInfo info,
Path rootDir,
Configuration conf,
TableDescriptor hTableDescriptor,
WAL wal) |
static HRegion |
createHRegion(RegionInfo info,
Path rootDir,
Configuration conf,
TableDescriptor hTableDescriptor,
WAL wal,
boolean initialize)
Convenience method creating new HRegions.
|
void |
decrementCompactionsQueuedCount() |
void |
delete(Delete delete) |
void |
deregisterChildren(ConfigurationManager manager)
Needs to be called to deregister the children from the manager.
|
protected void |
doRegionCompactionPrep()
Do preparation for pending compaction.
|
boolean |
equals(java.lang.Object o) |
com.google.protobuf.Message |
execService(com.google.protobuf.RpcController controller,
CoprocessorServiceCall call)
Executes a single protocol buffer coprocessor endpoint
Service method using
the registered protocol handlers. |
HRegion.FlushResult |
flush(boolean force)
Flush the cache.
|
HRegion.FlushResultImpl |
flushcache(boolean forceFlushAllStores,
boolean writeFlushRequestWalMarker,
FlushLifeCycleTracker tracker)
Flush the cache.
|
Result |
get(Get get) |
java.util.List<Cell> |
get(Get get,
boolean withCoprocessor) |
java.util.List<Cell> |
get(Get get,
boolean withCoprocessor,
long nonceGroup,
long nonce) |
long |
getBlockedRequestsCount() |
CellComparator |
getCellComparator() |
long |
getCheckAndMutateChecksFailed() |
long |
getCheckAndMutateChecksPassed() |
CompactionState |
getCompactionState() |
int |
getCompactPriority() |
RegionCoprocessorHost |
getCoprocessorHost() |
long |
getDataInMemoryWithoutWAL() |
long |
getEarliestFlushTimeForAllStores() |
protected Durability |
getEffectiveDurability(Durability d)
Returns effective durability from the passed durability and
the table descriptor.
|
FileSystem |
getFilesystem() |
long |
getFilteredReadRequestsCount() |
HDFSBlocksDistribution |
getHDFSBlocksDistribution() |
ClientProtos.RegionLoadStats |
getLoadStatistics() |
java.util.concurrent.ConcurrentHashMap<HashedBytes,org.apache.hadoop.hbase.regionserver.HRegion.RowLockContext> |
getLockedRows() |
long |
getMaxFlushedSeqId() |
java.util.Map<byte[],java.lang.Long> |
getMaxStoreSeqId() |
long |
getMemStoreDataSize() |
long |
getMemStoreFlushSize() |
long |
getMemStoreHeapSize() |
long |
getMemStoreOffHeapSize() |
MetricsRegion |
getMetrics() |
MultiVersionConcurrencyControl |
getMVCC() |
protected long |
getNextSequenceId(WAL wal)
Method to safely get the next sequence number.
|
long |
getNumMutationsWithoutWAL() |
long |
getOldestHfileTs(boolean majorCompactionOnly) |
long |
getOldestSeqIdOfStore(byte[] familyName) |
long |
getOpenSeqNum() |
int |
getReadLockCount() |
long |
getReadPoint() |
long |
getReadPoint(IsolationLevel isolationLevel) |
long |
getReadRequestsCount() |
static Path |
getRegionDir(Path rootdir,
RegionInfo info)
Deprecated.
For tests only; to be removed.
|
static Path |
getRegionDir(Path tabledir,
java.lang.String name)
Deprecated.
For tests only; to be removed.
|
HRegionFileSystem |
getRegionFileSystem() |
RegionInfo |
getRegionInfo() |
RegionServicesForStores |
getRegionServicesForStores() |
java.util.NavigableMap<byte[],java.lang.Integer> |
getReplicationScope() |
RowLock |
getRowLock(byte[] row)
Get an exclusive ( write lock ) lock on a given row.
|
RowLock |
getRowLock(byte[] row,
boolean readLock) |
protected RowLock |
getRowLockInternal(byte[] row,
boolean readLock,
RowLock prevRowLock) |
org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
getScanner(Scan scan) |
org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
getScanner(Scan scan,
java.util.List<KeyValueScanner> additionalScanners) |
long |
getSmallestReadPoint() |
RegionSplitPolicy |
getSplitPolicy() |
HStore |
getStore(byte[] column) |
java.util.List<java.lang.String> |
getStoreFileList(byte[][] columns) |
protected java.util.concurrent.ThreadPoolExecutor |
getStoreFileOpenAndCloseThreadPool(java.lang.String threadNamePrefix) |
protected java.util.concurrent.ThreadPoolExecutor |
getStoreOpenAndCloseThreadPool(java.lang.String threadNamePrefix) |
java.util.List<HStore> |
getStores() |
TableDescriptor |
getTableDescriptor() |
WAL |
getWAL() |
long |
getWriteRequestsCount() |
int |
hashCode() |
boolean |
hasReferences() |
long |
heapSize() |
Result |
increment(Increment increment) |
Result |
increment(Increment mutation,
long nonceGroup,
long nonce) |
void |
incrementCompactionsQueuedCount() |
void |
incrementFlushesQueuedCount() |
long |
initialize()
Deprecated.
use HRegion.createHRegion() or HRegion.openHRegion()
|
protected HStore |
instantiateHStore(ColumnFamilyDescriptor family) |
protected RegionScanner |
instantiateRegionScanner(Scan scan,
java.util.List<KeyValueScanner> additionalScanners) |
protected org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl |
instantiateRegionScanner(Scan scan,
java.util.List<KeyValueScanner> additionalScanners,
long nonceGroup,
long nonce) |
protected HRegion.FlushResultImpl |
internalFlushcache(WAL wal,
long myseqid,
java.util.Collection<HStore> storesToFlush,
MonitoredTask status,
boolean writeFlushWalMarker,
FlushLifeCycleTracker tracker)
Flush the memstore.
|
protected HRegion.FlushResultImpl |
internalFlushCacheAndCommit(WAL wal,
MonitoredTask status,
org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult prepareResult,
java.util.Collection<HStore> storesToFlush) |
protected org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult |
internalPrepareFlushCache(WAL wal,
long myseqid,
java.util.Collection<HStore> storesToFlush,
MonitoredTask status,
boolean writeFlushWalMarker,
FlushLifeCycleTracker tracker) |
boolean |
isAvailable() |
boolean |
isClosed() |
boolean |
isClosing() |
boolean |
isLoadingCfsOnDemandDefault() |
boolean |
isMergeable() |
boolean |
isReadOnly() |
boolean |
isSplittable() |
void |
mutateRow(RowMutations rm) |
void |
mutateRowsWithLocks(java.util.Collection<Mutation> mutations,
java.util.Collection<byte[]> rowsToLock,
long nonceGroup,
long nonce)
Perform atomic (all or none) mutations within the region.
|
void |
onConfigurationChange(Configuration conf)
This method would be called by the
ConfigurationManager
object when the Configuration object is reloaded from disk. |
protected HRegion |
openHRegion(CancelableProgressable reporter)
Open HRegion.
|
static HRegion |
openHRegion(Configuration conf,
FileSystem fs,
Path rootDir,
Path tableDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openHRegion(Configuration conf,
FileSystem fs,
Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal)
Open a Region.
|
static HRegion |
openHRegion(Configuration conf,
FileSystem fs,
Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openHRegion(HRegion other,
CancelableProgressable reporter)
Useful when reopening a closed region (normally for unit tests)
|
static HRegion |
openHRegion(Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
Configuration conf)
Open a Region.
|
static HRegion |
openHRegion(Path rootDir,
RegionInfo info,
TableDescriptor htd,
WAL wal,
Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static Region |
openHRegion(Region other,
CancelableProgressable reporter) |
static HRegion |
openHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
Configuration conf)
Open a Region.
|
static HRegion |
openHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter)
Open a Region.
|
static HRegion |
openReadOnlyFileSystemHRegion(Configuration conf,
FileSystem fs,
Path tableDir,
RegionInfo info,
TableDescriptor htd)
Open a Region on a read-only file-system (like hdfs snapshots)
|
void |
prepareDelete(Delete delete)
Prepare a delete for a row mutation processor
|
void |
prepareDeleteTimestamps(Mutation mutation,
java.util.Map<byte[],java.util.List<Cell>> familyMap,
byte[] byteNow)
Set up correct timestamps in the KVs in Delete object.
|
void |
processRowsWithLocks(RowProcessor<?,?> processor) |
void |
processRowsWithLocks(RowProcessor<?,?> processor,
long nonceGroup,
long nonce) |
void |
processRowsWithLocks(RowProcessor<?,?> processor,
long timeout,
long nonceGroup,
long nonce) |
void |
put(Put put) |
boolean |
refreshStoreFiles() |
protected boolean |
refreshStoreFiles(boolean force) |
void |
registerChildren(ConfigurationManager manager)
Needs to be called to register the children to the manager.
|
boolean |
registerService(com.google.protobuf.Service instance)
Registers a new protocol buffer
Service subclass as a coprocessor endpoint to
be available for handling Region#execService(com.google.protobuf.RpcController,
org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceCall) calls. |
protected long |
replayRecoveredEditsIfAny(Path regiondir,
java.util.Map<byte[],java.lang.Long> maxSeqIdInStores,
CancelableProgressable reporter,
MonitoredTask status)
Read the edits put under this region by wal splitting process.
|
void |
reportCompactionRequestEnd(boolean isMajor,
int numFiles,
long filesSizeCompacted) |
void |
reportCompactionRequestFailure() |
void |
reportCompactionRequestStart(boolean isMajor) |
void |
requestCompaction(byte[] family,
java.lang.String why,
int priority,
boolean major,
CompactionLifeCycleTracker tracker) |
void |
requestCompaction(java.lang.String why,
int priority,
boolean major,
CompactionLifeCycleTracker tracker) |
void |
requestFlush(FlushLifeCycleTracker tracker) |
protected void |
restoreEdit(HStore s,
Cell cell,
MemStoreSizing memstoreAccounting)
Used by tests
|
static boolean |
rowIsInRange(RegionInfo info,
byte[] row)
Determines if the specified row is within the row range specified by the
specified RegionInfo
|
static boolean |
rowIsInRange(RegionInfo info,
byte[] row,
int offset,
short length) |
void |
setClosing(boolean closing)
Exposed for some very specific unit tests.
|
void |
setCoprocessorHost(RegionCoprocessorHost coprocessorHost) |
void |
setReadsEnabled(boolean readsEnabled) |
void |
setTimeoutForWriteLock(long timeoutForWriteLock)
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask) would block forever if a unit test tried to provoke the deadlock; setting a timeout makes it throw instead. |
void |
startRegionOperation() |
void |
startRegionOperation(Operation op) |
java.lang.String |
toString() |
void |
unblockUpdates() |
void |
waitForFlushes()
Wait for all current flushes of the region to complete
|
boolean |
waitForFlushes(long timeout) |
void |
waitForFlushesAndCompactions()
Wait for all current flushes and compactions of the region to complete
|
static void |
warmupHRegion(RegionInfo info,
TableDescriptor htd,
WAL wal,
Configuration conf,
RegionServerServices rsServices,
CancelableProgressable reporter) |
public static final java.lang.String LOAD_CFS_ON_DEMAND_CONFIG_KEY
public static final java.lang.String HBASE_MAX_CELL_SIZE_KEY
public static final int DEFAULT_MAX_CELL_SIZE
public static final java.lang.String HBASE_REGIONSERVER_MINIBATCH_SIZE
public static final int DEFAULT_HBASE_REGIONSERVER_MINIBATCH_SIZE
protected volatile long lastReplayedOpenRegionSeqId
protected volatile long lastReplayedCompactionSeqId
protected final java.util.Map<byte[],HStore> stores
protected final Configuration conf
public static final java.lang.String MEMSTORE_PERIODIC_FLUSH_INTERVAL
public static final int DEFAULT_CACHE_FLUSH_INTERVAL
public static final int SYSTEM_CACHE_FLUSH_INTERVAL
public static final java.lang.String MEMSTORE_FLUSH_PER_CHANGES
public static final long DEFAULT_FLUSH_PER_CHANGES
public static final long MAX_FLUSH_PER_CHANGES
public static final long FIXED_OVERHEAD
public static final long DEEP_OVERHEAD
@Deprecated public HRegion(Path tableDir, WAL wal, FileSystem fs, Configuration confParam, RegionInfo regionInfo, TableDescriptor htd, RegionServerServices rsServices)
Deprecated. Use other constructors; instantiate HRegions with the createHRegion(RegionInfo, Path, Configuration, TableDescriptor, WAL, boolean) or openHRegion(RegionInfo, TableDescriptor, WAL, Configuration) method.
tableDir - qualified path of the directory where the region should be located, usually the table directory.
wal - The WAL is the outbound log for any updates to the HRegion. The wal file is a logfile from the previous execution that is custom-computed for this HRegion. The HRegionServer computes and sorts the appropriate wal info for this HRegion. If there is a previous wal file (implying that the HRegion has been written to before), then read it from the supplied path.
fs - is the filesystem.
confParam - is global configuration settings.
regionInfo - RegionInfo that describes the region
htd - the table descriptor
rsServices - reference to RegionServerServices or null
public HRegion(HRegionFileSystem fs, WAL wal, Configuration confParam, TableDescriptor htd, RegionServerServices rsServices)
HRegion constructor. Instances should normally be created with the createHRegion(RegionInfo, Path, Configuration, TableDescriptor, WAL, boolean) or openHRegion(RegionInfo, TableDescriptor, WAL, Configuration) method.
fs - is the filesystem.
wal - The WAL is the outbound log for any updates to the HRegion. The wal file is a logfile from the previous execution that is custom-computed for this HRegion. The HRegionServer computes and sorts the appropriate wal info for this HRegion. If there is a previous wal file (implying that the HRegion has been written to before), then read it from the supplied path.
confParam - is global configuration settings.
htd - the table descriptor
rsServices - reference to RegionServerServices or null
public long getSmallestReadPoint()
@Deprecated public long initialize() throws java.io.IOException
Deprecated. Use HRegion.createHRegion() or HRegion.openHRegion()
java.io.IOException - e
public boolean hasReferences()
public void blockUpdates()
public void unblockUpdates()
public HDFSBlocksDistribution getHDFSBlocksDistribution()
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(Configuration conf, TableDescriptor tableDescriptor, RegionInfo regionInfo) throws java.io.IOException
conf - configuration
tableDescriptor - TableDescriptor of the table
regionInfo - encoded name of the region
java.io.IOException
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(Configuration conf, TableDescriptor tableDescriptor, RegionInfo regionInfo, Path tablePath) throws java.io.IOException
conf - configuration
tableDescriptor - TableDescriptor of the table
regionInfo - encoded name of the region
tablePath - the table directory
java.io.IOException
public RegionInfo getRegionInfo()
public long getReadRequestsCount()
public long getFilteredReadRequestsCount()
public long getWriteRequestsCount()
public long getMemStoreDataSize()
public long getMemStoreHeapSize()
public long getMemStoreOffHeapSize()
public RegionServicesForStores getRegionServicesForStores()
public long getNumMutationsWithoutWAL()
public long getDataInMemoryWithoutWAL()
public long getBlockedRequestsCount()
public long getCheckAndMutateChecksPassed()
public long getCheckAndMutateChecksFailed()
public MetricsRegion getMetrics()
public boolean isClosed()
public boolean isClosing()
public boolean isReadOnly()
public boolean isAvailable()
public boolean isSplittable()
public boolean isMergeable()
public boolean areWritesEnabled()
public MultiVersionConcurrencyControl getMVCC()
public long getMaxFlushedSeqId()
public long getReadPoint(IsolationLevel isolationLevel)
null for default
public boolean isLoadingCfsOnDemandDefault()
public java.util.Map<byte[],java.util.List<HStoreFile>> close() throws java.io.IOException
This method could take some time to execute, so don't call it from a time-sensitive thread.
java.io.IOException - e
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was not properly persisted. The region is put in closing mode, and the caller MUST abort after this.
public java.util.Map<byte[],java.util.List<HStoreFile>> close(boolean abort) throws java.io.IOException
abort - true if server is aborting (only during testing)
java.io.IOException - e
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was not properly persisted. The region is put in closing mode, and the caller MUST abort after this.
public void setClosing(boolean closing)
public void setTimeoutForWriteLock(long timeoutForWriteLock)
doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
will block forever if someone tries to provoke the deadlock via a unit test.
Instead of blocking, doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
will throw an exception if you set the timeout.
timeoutForWriteLock - the time to wait, in seconds, for the write lock in doClose(boolean, org.apache.hadoop.hbase.monitoring.MonitoredTask)
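The fail-fast close behavior enabled by setTimeoutForWriteLock can be sketched with a timed write-lock acquisition. This is a hypothetical JDK-only model, not the HBase implementation: if an operation still holds the region open (read lock held), a timed close gives up instead of blocking forever.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a close() that times out rather than deadlocking. Names are
// illustrative; the real doClose() takes the region write lock internally.
public class TimedCloseSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long timeoutMillis = Long.MAX_VALUE; // "block forever" by default

    public void setTimeoutForWriteLock(long millis) { timeoutMillis = millis; }

    // Simulate an in-flight operation keeping the region open.
    public void holdOpen()    { lock.readLock().lock(); }
    public void releaseOpen() { lock.readLock().unlock(); }

    // Returns false (analogue of doClose() throwing) when the write lock
    // cannot be acquired before the timeout expires.
    public boolean tryClose() {
        try {
            if (!lock.writeLock().tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
                return false;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        lock.writeLock().unlock();
        return true;
    }
}
```

Note that a ReentrantReadWriteLock never upgrades read to write, so even a single outstanding reader is enough to make the timed close give up.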
public void waitForFlushesAndCompactions()
public void waitForFlushes()
public boolean waitForFlushes(long timeout)
protected java.util.concurrent.ThreadPoolExecutor getStoreOpenAndCloseThreadPool(java.lang.String threadNamePrefix)
protected java.util.concurrent.ThreadPoolExecutor getStoreFileOpenAndCloseThreadPool(java.lang.String threadNamePrefix)
public TableDescriptor getTableDescriptor()
public WAL getWAL()
public RegionSplitPolicy getSplitPolicy()
public FileSystem getFilesystem()
FileSystem
being used by this region
public HRegionFileSystem getRegionFileSystem()
HRegionFileSystem used by this region
public long getEarliestFlushTimeForAllStores()
public long getOldestHfileTs(boolean majorCompactionOnly) throws java.io.IOException
java.io.IOException
protected void doRegionCompactionPrep() throws java.io.IOException
java.io.IOException
public void compact(boolean majorCompaction) throws java.io.IOException
This operation could block for a long time, so don't call it from a time-sensitive thread.
Note that no locks are taken to prevent possible conflicts between compaction and splitting activities. The regionserver does not normally compact and split in parallel. However by calling this method you may introduce unexpected and unhandled concurrency. Don't do this unless you know what you are doing.
majorCompaction - True to force a major compaction regardless of thresholds
java.io.IOException
public void compactStores() throws java.io.IOException
It is used by utilities and testing
java.io.IOException
public boolean compact(CompactionContext compaction, HStore store, ThroughputController throughputController) throws java.io.IOException
This operation could block for a long time, so don't call it from a time-sensitive thread. Note that no locking is necessary at this level because compaction only conflicts with a region split, and that cannot happen because the region server does them sequentially and not in parallel.
compaction - Compaction details, obtained by requestCompaction()
throughputController -
java.io.IOException
public boolean compact(CompactionContext compaction, HStore store, ThroughputController throughputController, User user) throws java.io.IOException
java.io.IOException
public HRegion.FlushResult flush(boolean force) throws java.io.IOException
When this method is called the cache will be flushed unless: the cache is empty; the region is closed; a flush is already in progress; or writes are disabled.
This method may block for some time, so it should not be called from a time-sensitive thread.
force - whether we want to force a flush of all stores
java.io.IOException - general io exceptions
DroppedSnapshotException - Thrown when replay of wal is required because a snapshot was not properly persisted.
public HRegion.FlushResultImpl flushcache(boolean forceFlushAllStores, boolean writeFlushRequestWalMarker, FlushLifeCycleTracker tracker) throws java.io.IOException
This method may block for some time, so it should not be called from a time-sensitive thread.
forceFlushAllStores - whether we want to flush all stores
writeFlushRequestWalMarker - whether to write the flush request marker to WAL
tracker - used to track the life cycle of this flush
java.io.IOException - general io exceptions
DroppedSnapshotException - Thrown when replay of wal is required because a Snapshot was not properly persisted. The region is put in closing mode, and the caller MUST abort after this.
protected HRegion.FlushResultImpl internalFlushcache(WAL wal, long myseqid, java.util.Collection<HStore> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker, FlushLifeCycleTracker tracker) throws java.io.IOException
This method may block for some time. Every time you call it, we up the regions sequence id even if we don't flush; i.e. the returned region id will be at least one larger than the last edit applied to this region. The returned id does not refer to an actual edit. The returned id can be used for say installing a bulk loaded file just ahead of the last hfile that was the result of this flush, etc.
wal - Null if we're NOT to go via wal.
myseqid - The seqid to use if wal is null when writing out the flush file.
storesToFlush - The list of stores to flush.
java.io.IOException - general io exceptions
DroppedSnapshotException - Thrown when replay of WAL is required.
protected org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult internalPrepareFlushCache(WAL wal, long myseqid, java.util.Collection<HStore> storesToFlush, MonitoredTask status, boolean writeFlushWalMarker, FlushLifeCycleTracker tracker) throws java.io.IOException
java.io.IOException
protected HRegion.FlushResultImpl internalFlushCacheAndCommit(WAL wal, MonitoredTask status, org.apache.hadoop.hbase.regionserver.HRegion.PrepareFlushResult prepareResult, java.util.Collection<HStore> storesToFlush) throws java.io.IOException
java.io.IOException
protected long getNextSequenceId(WAL wal) throws java.io.IOException
java.io.IOException
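As the internalFlushcache notes above say, every flush call advances the region's sequence id even when nothing is written, so the returned id is strictly larger than the last applied edit and never refers to a real edit. getNextSequenceId obtains the next id safely under concurrency. A minimal sketch of that invariant, with hypothetical names and an AtomicLong standing in for the WAL's sequence accounting:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a monotonically increasing region sequence id.
public class SequenceIdSketch {
    private final AtomicLong seqId = new AtomicLong(0);

    // Analogue of getNextSequenceId(): thread-safe, strictly increasing.
    public long next() { return seqId.incrementAndGet(); }

    // Even a no-op flush consumes an id, so the id returned by a flush is
    // guaranteed to be larger than any edit applied before it, and it does
    // not correspond to an actual edit.
    public long flushMarker() { return next(); }

    public long current() { return seqId.get(); }
}
```

This is why, per the documentation, the returned id can safely be used to slot a bulk-loaded file just ahead of the hfile produced by the flush.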
public org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl getScanner(Scan scan) throws java.io.IOException
java.io.IOException
public org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl getScanner(Scan scan, java.util.List<KeyValueScanner> additionalScanners) throws java.io.IOException
java.io.IOException
protected RegionScanner instantiateRegionScanner(Scan scan, java.util.List<KeyValueScanner> additionalScanners) throws java.io.IOException
java.io.IOException
protected org.apache.hadoop.hbase.regionserver.HRegion.RegionScannerImpl instantiateRegionScanner(Scan scan, java.util.List<KeyValueScanner> additionalScanners, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public void prepareDelete(Delete delete) throws java.io.IOException
delete - The passed delete is modified by this method. WARNING!
java.io.IOException
public void delete(Delete delete) throws java.io.IOException
java.io.IOException
public void prepareDeleteTimestamps(Mutation mutation, java.util.Map<byte[],java.util.List<Cell>> familyMap, byte[] byteNow) throws java.io.IOException
Caller should have the row and region locks.
mutation -
familyMap -
byteNow -
java.io.IOException
public void put(Put put) throws java.io.IOException
java.io.IOException
public OperationStatus[] batchMutate(Mutation[] mutations, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public OperationStatus[] batchMutate(Mutation[] mutations, boolean atomic, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public OperationStatus[] batchMutate(Mutation[] mutations) throws java.io.IOException
java.io.IOException
public OperationStatus[] batchReplay(MutationReplay[] mutations, long replaySeqId) throws java.io.IOException
java.io.IOException
protected Durability getEffectiveDurability(Durability d)
public boolean checkAndMutate(byte[] row, byte[] family, byte[] qualifier, CompareOperator op, ByteArrayComparable comparator, TimeRange timeRange, Mutation mutation) throws java.io.IOException
java.io.IOException
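checkAndMutate applies a mutation only when the current cell value satisfies the comparison, atomically under the row lock. The semantics can be sketched with an atomic compare-and-replace on a plain map; this is an illustration of the equals-comparison case only, not HBase code (the real method supports arbitrary CompareOperator, comparator, and time range).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of check-and-mutate: write the new value only if the current value
// matches the expected one, atomically per row.
public class CheckAndMutateSketch {
    private final Map<String, String> cells = new ConcurrentHashMap<>();

    public void put(String row, String value) { cells.put(row, value); }

    // Returns true iff the expected value matched and the mutation applied,
    // mirroring checkAndMutate's boolean result.
    public boolean checkAndMutate(String row, String expected, String next) {
        return cells.replace(row, expected, next);
    }

    public String get(String row) { return cells.get(row); }
}
```

A failed check leaves the row untouched, which is what lets callers retry or report conflict without cleanup.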
public boolean checkAndRowMutate(byte[] row, byte[] family, byte[] qualifier, CompareOperator op, ByteArrayComparable comparator, TimeRange timeRange, RowMutations rm) throws java.io.IOException
java.io.IOException
public void addRegionToSnapshot(SnapshotDescription desc, ForeignExceptionSnare exnSnare) throws java.io.IOException
ForeignExceptionSnare arg. (In the future other cancellable HRegion methods could eventually add a ForeignExceptionSnare, or we could do something fancier.)
desc - snapshot description object
exnSnare - ForeignExceptionSnare that captures external exceptions in case we need to bail out. This is allowed to be null and will just be ignored in that case.
java.io.IOException - if there is an external or internal error causing the snapshot to fail
protected void checkReadOnly() throws java.io.IOException
java.io.IOException - Throws exception if region is in read-only mode.
protected void checkReadsEnabled() throws java.io.IOException
java.io.IOException
public void setReadsEnabled(boolean readsEnabled)
public void checkFamilies(java.util.Collection<byte[]> families) throws NoSuchColumnFamilyException
families -
NoSuchColumnFamilyException
public void checkTimestamps(java.util.Map<byte[],java.util.List<Cell>> familyMap, long now) throws FailedSanityCheckException
familyMap -
now - current timestamp
FailedSanityCheckException
protected long replayRecoveredEditsIfAny(Path regiondir, java.util.Map<byte[],java.lang.Long> maxSeqIdInStores, CancelableProgressable reporter, MonitoredTask status) throws java.io.IOException
We can ignore any wal message that has a sequence ID that's equal to or lower than minSeqId. (Because we know such messages are already reflected in the HFiles.)
While this is running we are putting pressure on memory, yet we are outside our usual accounting because we are not yet an onlined region (this stuff is being run as part of Region initialization). This means that if we are up against global memory limits, we will not be flagged to flush because we are not online. We cannot be flushed by the usual mechanisms anyway; we are not yet online, so our relative sequenceids are not yet aligned with WAL sequenceids -- not until we come up online, post processing of split edits.
But to help relieve memory pressure, we at least manage our own heap size by flushing when we are in excess of per-region limits. When flushing, though, we have to be careful to avoid using the regionserver/wal sequenceid. It runs on a different timeline from what is going on in this region context, so if we crashed while replaying these edits, but in the midst had a flush that used the regionserver wal with a sequenceid in excess of what is going on in this region and its split editlogs, then we could miss edits the next time we go to recover. So we have to flush inline, using seqids that make sense in this single region context only -- until we are online.
maxSeqIdInStores
- Any edit found in the split editlogs needs to be in excess of
the maxSeqId for the store to be applied, else it is skipped.
Returns minSeqId if nothing was added from the editlogs.
java.io.IOException
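The filtering rule described above can be sketched directly: an edit from the split editlogs is replayed only when its sequence id exceeds the maximum sequence id already persisted for the target store, because anything at or below that id is known to be reflected in the HFiles. This is a simplified model, not HBase code; the store key is a plain `String` here rather than a column family `byte[]`.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the recovered-edits filter: replay an edit only if its
// sequence id is greater than the maxSeqId recorded for its store.
public class ReplayFilter {
    private final Map<String, Long> maxSeqIdInStores = new HashMap<>();

    public ReplayFilter(Map<String, Long> maxSeqIdInStores) {
        this.maxSeqIdInStores.putAll(maxSeqIdInStores);
    }

    // true when the edit must be replayed; false when it is already in an HFile.
    public boolean shouldReplay(String store, long editSeqId) {
        long maxSeqId = maxSeqIdInStores.getOrDefault(store, Long.MIN_VALUE);
        return editSeqId > maxSeqId;
    }
}
```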
public boolean refreshStoreFiles() throws java.io.IOException
java.io.IOException
protected boolean refreshStoreFiles(boolean force) throws java.io.IOException
java.io.IOException
protected void restoreEdit(HStore s, Cell cell, MemStoreSizing memstoreAccounting)
s
- Store to add edit to.
cell
- Cell to add.
protected HStore instantiateHStore(ColumnFamilyDescriptor family) throws java.io.IOException
java.io.IOException
public HStore getStore(byte[] column)
public java.util.List<HStore> getStores()
public java.util.List<java.lang.String> getStoreFileList(byte[][] columns) throws java.lang.IllegalArgumentException
java.lang.IllegalArgumentException
public RowLock getRowLock(byte[] row) throws java.io.IOException
row
- Which row to lock.
java.io.IOException
public RowLock getRowLock(byte[] row, boolean readLock) throws java.io.IOException
java.io.IOException
protected RowLock getRowLockInternal(byte[] row, boolean readLock, RowLock prevRowLock) throws java.io.IOException
java.io.IOException
public int getReadLockCount()
public java.util.concurrent.ConcurrentHashMap<HashedBytes,org.apache.hadoop.hbase.regionserver.HRegion.RowLockContext> getLockedRows()
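The row-lock methods above follow the locking model described in the class comment: every row operation takes the region read lock plus a per-row lock, while close takes the region write lock and therefore waits for in-flight operations. The following is a simplified, self-contained model of that pattern using standard `java.util.concurrent` primitives, not HBase's actual `RowLockContext` implementation; class and method names are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified model of HRegion's locking scheme: region read lock + row lock
// per operation; region write lock on close. Not HBase code.
public class RegionLockModel {
    private final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();
    private final ConcurrentHashMap<String, ReentrantLock> rowLocks = new ConcurrentHashMap<>();
    private volatile boolean closed = false;

    public void rowOperation(String row, Runnable op) {
        regionLock.readLock().lock();      // prevents close/split while op runs
        try {
            if (closed) {
                throw new IllegalStateException("region closed");
            }
            ReentrantLock rowLock = rowLocks.computeIfAbsent(row, r -> new ReentrantLock());
            rowLock.lock();                // serializes operations on one row
            try {
                op.run();
            } finally {
                rowLock.unlock();
            }
        } finally {
            regionLock.readLock().unlock();
        }
    }

    public void close() {
        regionLock.writeLock().lock();     // blocks until all read locks drain
        try {
            closed = true;
        } finally {
            regionLock.writeLock().unlock();
        }
    }
}
```

Because the write lock cannot be acquired while any read lock is held, a close naturally blocks for ongoing operations and rejects new ones, which is exactly the behavior the class comment describes.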
public java.util.Map<byte[],java.util.List<Path>> bulkLoadHFiles(java.util.Collection<Pair<byte[],java.lang.String>> familyPaths, boolean assignSeqId, HRegion.BulkLoadListener bulkLoadListener) throws java.io.IOException
familyPaths
- List of Pair<byte[] column family, String hfilePath>
bulkLoadListener
- Internal hooks enabling massaging/preparation of a
file about to be bulk loaded
assignSeqId
java.io.IOException
- if failed unrecoverably.
public java.util.Map<byte[],java.util.List<Path>> bulkLoadHFiles(java.util.Collection<Pair<byte[],java.lang.String>> familyPaths, boolean assignSeqId, HRegion.BulkLoadListener bulkLoadListener, boolean copyFile) throws java.io.IOException
familyPaths
- List of Pair<byte[] column family, String hfilePath>
assignSeqId
bulkLoadListener
- Internal hooks enabling massaging/preparation of a
file about to be bulk loaded
copyFile
- always copy hfiles if true
java.io.IOException
- if failed unrecoverably.
public boolean equals(java.lang.Object o)
equals
in class java.lang.Object
public int hashCode()
hashCode
in class java.lang.Object
public java.lang.String toString()
toString
in class java.lang.Object
public static HRegion createHRegion(RegionInfo info, Path rootDir, Configuration conf, TableDescriptor hTableDescriptor, WAL wal, boolean initialize) throws java.io.IOException
info
- Info for region to create.
rootDir
- Root directory for HBase instance
wal
- shared WAL
initialize
- true to initialize the region
java.io.IOException
public static HRegion createHRegion(RegionInfo info, Path rootDir, Configuration conf, TableDescriptor hTableDescriptor, WAL wal) throws java.io.IOException
java.io.IOException
public static HRegion openHRegion(RegionInfo info, TableDescriptor htd, WAL wal, Configuration conf) throws java.io.IOException
info
- Info for region to be opened.
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
java.io.IOException
public static HRegion openHRegion(RegionInfo info, TableDescriptor htd, WAL wal, Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws java.io.IOException
info
- Info for region to be opened
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
conf
- The Configuration object to use.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
java.io.IOException
public static HRegion openHRegion(Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, Configuration conf) throws java.io.IOException
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
conf
- The Configuration object to use.
java.io.IOException
public static HRegion openHRegion(Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws java.io.IOException
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
conf
- The Configuration object to use.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
java.io.IOException
public static HRegion openHRegion(Configuration conf, FileSystem fs, Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal) throws java.io.IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
java.io.IOException
public static HRegion openHRegion(Configuration conf, FileSystem fs, Path rootDir, RegionInfo info, TableDescriptor htd, WAL wal, RegionServerServices rsServices, CancelableProgressable reporter) throws java.io.IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
java.io.IOException
public static HRegion openHRegion(Configuration conf, FileSystem fs, Path rootDir, Path tableDir, RegionInfo info, TableDescriptor htd, WAL wal, RegionServerServices rsServices, CancelableProgressable reporter) throws java.io.IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
rootDir
- Root directory for HBase instance
info
- Info for region to be opened.
htd
- the table descriptor
wal
- WAL for region to use. This method will call
WAL#setSequenceNumber(long) passing the result of the call to
HRegion#getMinSequenceId() to ensure the wal id is properly kept
up. HRegionStore does this every time it opens a new region.
rsServices
- An interface we can request flushes against.
reporter
- An interface we can report progress against.
java.io.IOException
public java.util.NavigableMap<byte[],java.lang.Integer> getReplicationScope()
public static HRegion openHRegion(HRegion other, CancelableProgressable reporter) throws java.io.IOException
other
- original object
reporter
- An interface we can report progress against.
java.io.IOException
public static Region openHRegion(Region other, CancelableProgressable reporter) throws java.io.IOException
java.io.IOException
protected HRegion openHRegion(CancelableProgressable reporter) throws java.io.IOException
this
java.io.IOException
public static HRegion openReadOnlyFileSystemHRegion(Configuration conf, FileSystem fs, Path tableDir, RegionInfo info, TableDescriptor htd) throws java.io.IOException
conf
- The Configuration object to use.
fs
- Filesystem to use
info
- Info for region to be opened.
htd
- the table descriptor
java.io.IOException
public static void warmupHRegion(RegionInfo info, TableDescriptor htd, WAL wal, Configuration conf, RegionServerServices rsServices, CancelableProgressable reporter) throws java.io.IOException
java.io.IOException
@Deprecated public static Path getRegionDir(Path tabledir, java.lang.String name)
tabledir
- qualified path for table
name
- ENCODED region name
@Deprecated public static Path getRegionDir(Path rootdir, RegionInfo info)
rootdir
- qualified path of HBase root directory
info
- RegionInfo for the region
public static boolean rowIsInRange(RegionInfo info, byte[] row)
info
- RegionInfo that specifies the row range
row
- row to be checked
public static boolean rowIsInRange(RegionInfo info, byte[] row, int offset, short length)
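The membership test behind rowIsInRange can be illustrated without any HBase types. A region's key extent is a half-open interval under unsigned lexicographic byte order: a row belongs when `startKey <= row < endKey`, with an empty `endKey` meaning no upper bound (the last region of a table). This is a hedged sketch under those assumptions, using plain byte arrays rather than `RegionInfo`.

```java
import java.util.Arrays;

// Sketch of the half-open range test a region performs on a row key.
// Plain byte arrays stand in for RegionInfo's start/end keys.
public class RowRange {
    public static boolean rowIsInRange(byte[] startKey, byte[] endKey, byte[] row) {
        // Unsigned byte-wise comparison, matching HBase's lexicographic order.
        boolean afterStart = Arrays.compareUnsigned(startKey, row) <= 0;
        // An empty end key means the region extends to the end of the table.
        boolean beforeEnd = endKey.length == 0 || Arrays.compareUnsigned(row, endKey) < 0;
        return afterStart && beforeEnd;
    }
}
```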
public Result get(Get get) throws java.io.IOException
java.io.IOException
public java.util.List<Cell> get(Get get, boolean withCoprocessor) throws java.io.IOException
java.io.IOException
public java.util.List<Cell> get(Get get, boolean withCoprocessor, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public void mutateRow(RowMutations rm) throws java.io.IOException
java.io.IOException
public void mutateRowsWithLocks(java.util.Collection<Mutation> mutations, java.util.Collection<byte[]> rowsToLock, long nonceGroup, long nonce) throws java.io.IOException
mutations
- The list of mutations to perform. mutations can contain operations for multiple rows.
Caller has to ensure that all rows are contained in this region.
rowsToLock
- Rows to lock. If multiple rows are locked, care should be taken that
rowsToLock is sorted in order to avoid deadlocks.
nonceGroup
- Optional nonce group of the operation (client Id)
nonce
- Optional nonce of the operation (unique random id to ensure "more idempotence")
java.io.IOException
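The deadlock-avoidance note for rowsToLock rests on a standard argument: if every caller acquires its row locks in the same global order, two multi-row mutations can never each hold a lock the other is waiting for. A minimal sketch of preparing rows that way, using unsigned byte order as above; the helper name `sortRows` is illustrative, not HBase API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

// Sketch of deadlock avoidance for multi-row mutations: acquire row locks
// in one global (sorted) order. sortRows is a hypothetical helper.
public class SortedRowLocking {
    public static List<byte[]> sortRows(List<byte[]> rowsToLock) {
        // TreeSet with unsigned lexicographic order both dedupes and sorts,
        // so locks are always taken in the same order by every caller.
        TreeSet<byte[]> sorted = new TreeSet<>(Arrays::compareUnsigned);
        sorted.addAll(rowsToLock);
        return List.copyOf(sorted);
    }
}
```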
public ClientProtos.RegionLoadStats getLoadStatistics()
public void processRowsWithLocks(RowProcessor<?,?> processor) throws java.io.IOException
java.io.IOException
public void processRowsWithLocks(RowProcessor<?,?> processor, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public void processRowsWithLocks(RowProcessor<?,?> processor, long timeout, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public Result append(Append append) throws java.io.IOException
java.io.IOException
public Result append(Append mutation, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public Result increment(Increment increment) throws java.io.IOException
java.io.IOException
public Result increment(Increment mutation, long nonceGroup, long nonce) throws java.io.IOException
java.io.IOException
public long heapSize()
public boolean registerService(com.google.protobuf.Service instance)
Registers a Service subclass as a coprocessor endpoint to
be available for handling Region#execService(com.google.protobuf.RpcController,
org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceCall) calls.
Only a single instance may be registered per region for a given Service
subclass (the instances are keyed on com.google.protobuf.Descriptors.ServiceDescriptor#getFullName()).
After the first registration, subsequent calls with the same service name will fail with
a return value of false.
instance
- the Service subclass instance to expose as a coprocessor endpoint
Returns true if the registration was successful, false otherwise.
public com.google.protobuf.Message execService(com.google.protobuf.RpcController controller, CoprocessorServiceCall call) throws java.io.IOException
Executes a single protocol buffer coprocessor endpoint Service method using
the registered protocol handlers. Service implementations must be registered via the
registerService(com.google.protobuf.Service) method before they are available.
controller
- an RpcController implementation to pass to the invoked service
call
- a CoprocessorServiceCall instance identifying the service, method,
and parameters for the method invocation
Returns a Message instance containing the method's result.
java.io.IOException
- if no registered service handler is found or an error
occurs during the invocation
See also: registerService(com.google.protobuf.Service)
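The registration contract described for registerService and execService boils down to a registry keyed by the service's full name, where a second registration under the same name returns false rather than replacing the first handler, and invoking an unregistered service is an error. A self-contained model of that contract, with `String` keys and `Object` handlers standing in for protobuf's `ServiceDescriptor#getFullName()` and `Service` instances:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal model of the coprocessor endpoint registry contract. Strings and
// Objects stand in for protobuf service names and Service instances.
public class ServiceRegistry {
    private final ConcurrentHashMap<String, Object> handlers = new ConcurrentHashMap<>();

    // Returns false on a duplicate name instead of replacing the handler.
    public boolean registerService(String fullName, Object instance) {
        return handlers.putIfAbsent(fullName, instance) == null;
    }

    // Looking up an unregistered service is an error, mirroring execService.
    public Object lookup(String fullName) {
        Object handler = handlers.get(fullName);
        if (handler == null) {
            throw new IllegalArgumentException("no registered service handler: " + fullName);
        }
        return handler;
    }
}
```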
public byte[] checkSplit()
public int getCompactPriority()
public RegionCoprocessorHost getCoprocessorHost()
public void setCoprocessorHost(RegionCoprocessorHost coprocessorHost)
coprocessorHost
- the new coprocessor host
public void startRegionOperation() throws java.io.IOException
java.io.IOException
public void startRegionOperation(Operation op) throws java.io.IOException
java.io.IOException
public void closeRegionOperation() throws java.io.IOException
java.io.IOException
public void closeRegionOperation(Operation operation) throws java.io.IOException
java.io.IOException
public long getOpenSeqNum()
public java.util.Map<byte[],java.lang.Long> getMaxStoreSeqId()
public long getOldestSeqIdOfStore(byte[] familyName)
public CompactionState getCompactionState()
public void reportCompactionRequestStart(boolean isMajor)
public void reportCompactionRequestEnd(boolean isMajor, int numFiles, long filesSizeCompacted)
public void reportCompactionRequestFailure()
public void incrementCompactionsQueuedCount()
public void decrementCompactionsQueuedCount()
public void incrementFlushesQueuedCount()
public long getReadPoint()
public void onConfigurationChange(Configuration conf)
Called by the ConfigurationManager object when the Configuration
object is reloaded from disk.
onConfigurationChange
in interface ConfigurationObserver
public void registerChildren(ConfigurationManager manager)
registerChildren
in interface PropagatingConfigurationObserver
manager
- the manager to register to
public void deregisterChildren(ConfigurationManager manager)
deregisterChildren
in interface PropagatingConfigurationObserver
manager
- the manager to deregister from
public CellComparator getCellComparator()
public long getMemStoreFlushSize()
public void requestCompaction(java.lang.String why, int priority, boolean major, CompactionLifeCycleTracker tracker) throws java.io.IOException
java.io.IOException
public void requestCompaction(byte[] family, java.lang.String why, int priority, boolean major, CompactionLifeCycleTracker tracker) throws java.io.IOException
java.io.IOException
public void requestFlush(FlushLifeCycleTracker tracker) throws java.io.IOException
java.io.IOException