public class WriteSinkCoprocessor
extends java.lang.Object
This coprocessor 'swallows' all writes. It allows testing a pure write workload that still goes through all the communication layers. Reads still work, but since nothing is ever written they always return an empty result. The WAL is also skipped. As a consequence, the region will never be split automatically; it is up to the user to split and move it.
For a table created like this:
  create 'usertable', {NAME => 'f1', VERSIONS => 1}
you can add the coprocessor with this command:
  alter 'usertable', 'coprocessor' => '|org.apache.hadoop.hbase.tool.WriteSinkCoprocessor|'
Then:
  put 'usertable', 'f1', 'f1', 'f1'
  scan 'usertable'
will return:
  0 row(s) in 0.0050 seconds
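The behavior above (puts acknowledged, scans empty) can be illustrated with a small sketch. This is not the HBase API; it is a self-contained model in plain Java, with a hypothetical `Region` class standing in for an HBase region, showing what "swallowing" writes looks like from the client's point of view.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of the write-sink behavior; Region is a hypothetical
// stand-in for an HBase region, not a real HBase class.
public class WriteSinkDemo {
    static class Region {
        private final Map<String, String> store = new TreeMap<>();
        private final boolean sinkWrites;

        Region(boolean sinkWrites) { this.sinkWrites = sinkWrites; }

        void put(String row, String value) {
            // With the write sink in place the mutation is acknowledged
            // to the caller but never applied (and no WAL entry is made).
            if (sinkWrites) return;
            store.put(row, value);
        }

        List<String> scan() {
            // Reads go through normally, but the store is always empty.
            return List.copyOf(store.keySet());
        }
    }

    public static void main(String[] args) {
        Region region = new Region(true);
        region.put("f1", "f1");            // acknowledged, but swallowed
        System.out.println(region.scan()); // prints [] -- "0 row(s)", as in the shell example
    }
}
```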
TODO: It needs tests.

Constructor Summary:
  WriteSinkCoprocessor()

Method Summary:
  java.util.Optional<RegionObserver>  getRegionObserver()
  void  preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c, MiniBatchOperationInProgress<Mutation> miniBatchOp)
  void  preOpen(ObserverContext<RegionCoprocessorEnvironment> e)
Method Detail:

public java.util.Optional<RegionObserver> getRegionObserver()

public void preOpen(ObserverContext<RegionCoprocessorEnvironment> e)
             throws java.io.IOException
  Throws:
    java.io.IOException

public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c,
                           MiniBatchOperationInProgress<Mutation> miniBatchOp)
             throws java.io.IOException
  Throws:
    java.io.IOException
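The way `preBatchMutate` swallows writes can be sketched in plain Java. This is a simplified model, not the real HBase classes: the hypothetical `MiniBatch` stands in for `MiniBatchOperationInProgress`, and the hook marks every queued mutation as already handled, so the downstream apply step has nothing left to do.

```java
import java.util.Arrays;

// Toy model of the swallow mechanism; MiniBatch and Status are
// hypothetical stand-ins, not HBase API types.
public class PreBatchMutateSketch {
    enum Status { NOT_RUN, SUCCESS }

    static class MiniBatch {
        final String[] mutations;
        final Status[] statuses;

        MiniBatch(String... mutations) {
            this.mutations = mutations;
            this.statuses = new Status[mutations.length];
            Arrays.fill(statuses, Status.NOT_RUN);
        }
    }

    // Analogous role to preBatchMutate: claim every operation as done
    // before the region ever applies it.
    static void preBatchMutate(MiniBatch batch) {
        Arrays.fill(batch.statuses, Status.SUCCESS);
    }

    // The "region" only applies operations still marked NOT_RUN.
    static int applyBatch(MiniBatch batch) {
        int applied = 0;
        for (Status s : batch.statuses) {
            if (s == Status.NOT_RUN) applied++;
        }
        return applied;
    }

    public static void main(String[] args) {
        MiniBatch batch = new MiniBatch("put r1", "put r2");
        preBatchMutate(batch);
        System.out.println(applyBatch(batch)); // prints 0 -- nothing reaches the store
    }
}
```

Without the `preBatchMutate` call, `applyBatch` would report every mutation as pending, which is the normal write path.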