public class MultiTableHFileOutputFormat extends HFileOutputFormat2
DATABLOCK_ENCODING_OVERRIDE_CONF_KEY, LOCALITY_SENSITIVE_CONF_KEY, STORAGE_POLICY_PROPERTY, STORAGE_POLICY_PROPERTY_CF_PREFIX, tableSeparator
Constructor and Description |
---|
MultiTableHFileOutputFormat() |
Modifier and Type | Method and Description |
---|---|
static void | configureIncrementalLoad(Job job, java.util.List<org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.TableInfo> multiTableDescriptors) Analogous to HFileOutputFormat2.configureIncrementalLoad(Job, TableDescriptor, RegionLocator), this method configures the requisite number of reducers to write HFiles for multiple tables simultaneously. |
static byte[] | createCompositeKey(byte[] tableName, byte[] suffix) Creates a composite key to use as a mapper output key when using MultiTableHFileOutputFormat.configureIncrementalLoad to set up a bulk ingest job. |
static byte[] | createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix) Alternate API which accepts an ImmutableBytesWritable for the suffix. |
static byte[] | createCompositeKey(java.lang.String tableName, ImmutableBytesWritable suffix) Alternate API which accepts a String for the tableName and an ImmutableBytesWritable for the suffix. |
protected static byte[] | getSuffix(byte[] keyBytes) |
protected static byte[] | getTableName(byte[] keyBytes) |
combineTableNameSuffix, configureIncrementalLoad, configureIncrementalLoad, configureIncrementalLoadMap, getRecordWriter, getTableNameSuffixedWithFamily
public static byte[] createCompositeKey(byte[] tableName, byte[] suffix)

Creates a composite key to use as a mapper output key when using MultiTableHFileOutputFormat.configureIncrementalLoad to set up a bulk ingest job.

Parameters:
tableName - Name of the Table - Eg: TableName.getNameAsString()
suffix - Usually represents a rowkey when creating a mapper key, or a column family

public static byte[] createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix)

Alternate API which accepts an ImmutableBytesWritable for the suffix.

See Also:
createCompositeKey(byte[], byte[])
public static byte[] createCompositeKey(java.lang.String tableName, ImmutableBytesWritable suffix)

Alternate API which accepts a String for the tableName and an ImmutableBytesWritable for the suffix.

See Also:
createCompositeKey(byte[], byte[])
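A composite key is simply the table name and the suffix joined by the separator inherited from HFileOutputFormat2 (the tableSeparator field listed above). The following is a minimal plain-Java sketch of that layout, assuming the separator is the single byte `;`; in a real job, call the createCompositeKey helpers rather than building keys by hand:

```java
import java.nio.charset.StandardCharsets;

public class CompositeKeySketch {
    // Assumption: HFileOutputFormat2.tableSeparator is the single byte ';'.
    static final byte[] SEPARATOR = ";".getBytes(StandardCharsets.UTF_8);

    // Mirrors createCompositeKey(byte[], byte[]): tableName + separator + suffix.
    static byte[] createCompositeKey(byte[] tableName, byte[] suffix) {
        byte[] key = new byte[tableName.length + SEPARATOR.length + suffix.length];
        System.arraycopy(tableName, 0, key, 0, tableName.length);
        System.arraycopy(SEPARATOR, 0, key, tableName.length, SEPARATOR.length);
        System.arraycopy(suffix, 0, key, tableName.length + SEPARATOR.length, suffix.length);
        return key;
    }

    public static void main(String[] args) {
        // "orders" is an illustrative table name; "row-0001" an illustrative rowkey.
        byte[] key = createCompositeKey(
            "orders".getBytes(StandardCharsets.UTF_8),
            "row-0001".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(key, StandardCharsets.UTF_8)); // orders;row-0001
    }
}
```

A mapper feeding this output format would emit such a composite key (table name plus rowkey) so the partitioner can route rows for different tables to the right reducers.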
public static void configureIncrementalLoad(Job job, java.util.List<org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.TableInfo> multiTableDescriptors) throws java.io.IOException

Analogous to HFileOutputFormat2.configureIncrementalLoad(Job, TableDescriptor, RegionLocator), this method configures the requisite number of reducers to write HFiles for multiple tables simultaneously.

Parameters:
job - See org.apache.hadoop.mapreduce.Job
multiTableDescriptors - Table descriptor and region locator pairs

Throws:
java.io.IOException
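A sketch of wiring up such a job, assuming HBase 2.x client APIs (ConnectionFactory, Admin.getDescriptor, Connection.getRegionLocator) and illustrative table names; it requires a running cluster and is not a definitive recipe:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.mapreduce.MultiTableHFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class MultiTableBulkLoadSetup {
    public static Job createJob(String... tableNames) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "multi-table-bulk-load");

        // One TableInfo (table descriptor + region locator pair) per target table.
        List<HFileOutputFormat2.TableInfo> tableInfos = new ArrayList<>();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            for (String name : tableNames) {
                TableName tableName = TableName.valueOf(name);
                tableInfos.add(new HFileOutputFormat2.TableInfo(
                    admin.getDescriptor(tableName),
                    connection.getRegionLocator(tableName)));
            }
            // Configures the reducer count, partitioning, and output format so
            // one job writes HFiles for all the listed tables.
            MultiTableHFileOutputFormat.configureIncrementalLoad(job, tableInfos);
        }
        return job;
    }
}
```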
protected static byte[] getTableName(byte[] keyBytes)
protected static byte[] getSuffix(byte[] keyBytes)
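These two protected helpers decompose a composite key back into its parts on the reducer side. A minimal plain-Java sketch of the round trip, again assuming the single-byte `;` separator (the real getTableName/getSuffix implementations should be preferred):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CompositeKeySplitSketch {
    // Assumption: HFileOutputFormat2.tableSeparator is the single byte ';'.
    static final byte SEPARATOR = ';';

    // Index of the first separator byte in the composite key.
    static int separatorIndex(byte[] keyBytes) {
        for (int i = 0; i < keyBytes.length; i++) {
            if (keyBytes[i] == SEPARATOR) return i;
        }
        throw new IllegalArgumentException("No separator in composite key");
    }

    // Mirrors getTableName(byte[]): the bytes before the separator.
    static byte[] getTableName(byte[] keyBytes) {
        return Arrays.copyOfRange(keyBytes, 0, separatorIndex(keyBytes));
    }

    // Mirrors getSuffix(byte[]): the bytes after the separator.
    static byte[] getSuffix(byte[] keyBytes) {
        return Arrays.copyOfRange(keyBytes, separatorIndex(keyBytes) + 1, keyBytes.length);
    }

    public static void main(String[] args) {
        byte[] key = "orders;row-0001".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(getTableName(key), StandardCharsets.UTF_8)); // orders
        System.out.println(new String(getSuffix(key), StandardCharsets.UTF_8));    // row-0001
    }
}
```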