org.apache.hadoop.hive.ql.io
Class HiveInputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable>
java.lang.Object
org.apache.hadoop.hive.ql.io.HiveInputFormat<K,V>
- All Implemented Interfaces:
- org.apache.hadoop.mapred.InputFormat<K,V>, org.apache.hadoop.mapred.JobConfigurable
- Direct Known Subclasses:
- BucketizedHiveInputFormat, CombineHiveInputFormat, HiveIndexedInputFormat
public class HiveInputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable>
- extends Object
- implements org.apache.hadoop.mapred.InputFormat<K,V>, org.apache.hadoop.mapred.JobConfigurable
HiveInputFormat is a parameterized InputFormat that looks at the path name, determines the correct InputFormat for that path from mapredPlan.pathToPartitionInfo(), and delegates to it. It can therefore be used to read files with different input formats in the same map-reduce job.
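As a sketch of typical use (the driver class name and the input path are illustrative, and a Hadoop/Hive classpath is assumed), a job can be pointed at HiveInputFormat so that each input path is read with the InputFormat recorded for its partition:

```java
// Sketch only: assumes hadoop-core and hive-exec on the classpath.
// The class name and input path are hypothetical.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.HiveInputFormat;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class HiveInputFormatExample {
    public static void main(String[] args) {
        JobConf job = new JobConf(HiveInputFormatExample.class);
        // HiveInputFormat consults the map-reduce plan's
        // pathToPartitionInfo to pick the real InputFormat per path,
        // so one job can read text, RCFile, etc. side by side.
        job.setInputFormat(HiveInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
    }
}
```

This is job configuration only; submitting the job requires a full Hadoop runtime and a Hive query plan in the JobConf.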
Method Summary

void configure(org.apache.hadoop.mapred.JobConf job)

static org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.WritableComparable,org.apache.hadoop.io.Writable> getInputFormatFromCache(Class inputFormatClass, org.apache.hadoop.mapred.JobConf job)

org.apache.hadoop.mapred.RecordReader getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)

org.apache.hadoop.mapred.InputSplit[] getSplits(org.apache.hadoop.mapred.JobConf job, int numSplits)

static void pushFilters(org.apache.hadoop.mapred.JobConf jobConf, TableScanOperator tableScan)
Field Detail

CLASS_NAME
public static final String CLASS_NAME

LOG
public static final org.apache.commons.logging.Log LOG

Constructor Detail

HiveInputFormat
public HiveInputFormat()
Method Detail

configure
public void configure(org.apache.hadoop.mapred.JobConf job)
- Specified by: configure in interface org.apache.hadoop.mapred.JobConfigurable
getInputFormatFromCache
public static org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.WritableComparable,org.apache.hadoop.io.Writable> getInputFormatFromCache(Class inputFormatClass,
org.apache.hadoop.mapred.JobConf job)
throws IOException
- Throws: IOException
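A hedged sketch of calling this lookup (TextInputFormat is just an illustrative choice of input-format class; a Hadoop/Hive classpath is assumed):

```java
// Sketch only: assumes hadoop-core and hive-exec on the classpath.
import java.io.IOException;
import org.apache.hadoop.hive.ql.io.HiveInputFormat;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class CacheLookupExample {
    public static void main(String[] args) throws IOException {
        JobConf job = new JobConf();
        // Looks up a configured InputFormat instance for the given class;
        // instances are cached, so repeated lookups avoid re-instantiation.
        InputFormat<?, ?> fmt =
            HiveInputFormat.getInputFormatFromCache(TextInputFormat.class, job);
        System.out.println(fmt.getClass().getName());
    }
}
```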
getRecordReader
public org.apache.hadoop.mapred.RecordReader getRecordReader(org.apache.hadoop.mapred.InputSplit split,
org.apache.hadoop.mapred.JobConf job,
org.apache.hadoop.mapred.Reporter reporter)
throws IOException
- Specified by: getRecordReader in interface org.apache.hadoop.mapred.InputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable>
- Throws: IOException
getSplits
public org.apache.hadoop.mapred.InputSplit[] getSplits(org.apache.hadoop.mapred.JobConf job,
int numSplits)
throws IOException
- Specified by: getSplits in interface org.apache.hadoop.mapred.InputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable>
- Throws: IOException
pushFilters
public static void pushFilters(org.apache.hadoop.mapred.JobConf jobConf,
TableScanOperator tableScan)
Copyright © 2014 The Apache Software Foundation. All rights reserved.