org.apache.hadoop.hbase.mapreduce
Class HFileOutputFormat
java.lang.Object
org.apache.hadoop.mapreduce.OutputFormat<K,V>
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,KeyValue>
org.apache.hadoop.hbase.mapreduce.HFileOutputFormat
Deprecated. Use HFileOutputFormat2 instead.
@Deprecated
@InterfaceAudience.Public
@InterfaceStability.Stable
public class HFileOutputFormat
extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,KeyValue>
Writes HFiles. Passed KeyValues must arrive in order. Writes the current time as the sequence id for the file. Sets the major compacted attribute on created HFiles. Calling write(null,null) will forcibly roll all HFiles being written.
Using this class as part of a MapReduce job is best done using configureIncrementalLoad(Job, HTable).
See Also:
KeyValueSortReducer
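To illustrate the expected map output types, here is a minimal mapper sketch; the class name TextToKeyValueMapper, the input layout (rowkey&lt;TAB&gt;value text lines), and the column family "f" / qualifier "q" are illustrative assumptions, not part of this API. Per-row ordering of the emitted cells is left to KeyValueSortReducer, which configureIncrementalLoad(Job, HTable) wires in.

import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: parses "rowkey<TAB>value" lines and emits one KeyValue
// per line, keyed by row. KeyValueSortReducer sorts the cells for each row
// before they reach HFileOutputFormat.
public class TextToKeyValueMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {

  private static final byte[] FAMILY = Bytes.toBytes("f");    // placeholder column family
  private static final byte[] QUALIFIER = Bytes.toBytes("q"); // placeholder qualifier

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    String[] fields = line.toString().split("\t", 2);
    byte[] row = Bytes.toBytes(fields[0]);
    KeyValue kv = new KeyValue(row, FAMILY, QUALIFIER, Bytes.toBytes(fields[1]));
    context.write(new ImmutableBytesWritable(row), kv);
  }
}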
Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.Counter
Fields inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
BASE_OUTPUT_NAME, COMPRESS, COMPRESS_CODEC, COMPRESS_TYPE, OUTDIR, PART
Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
DATABLOCK_ENCODING_OVERRIDE_CONF_KEY
public static final String DATABLOCK_ENCODING_OVERRIDE_CONF_KEY
Deprecated.
See Also:
Constant Field Values
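As a hedged sketch of how this key might be used, the snippet below forces a single data block encoding on the HFiles the job writes, overriding whatever the target column families declare; the wrapper class name and the choice of FAST_DIFF are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;

// Illustrative helper: build a job Configuration that overrides the data block
// encoding used for all HFiles written by HFileOutputFormat.
public class EncodingOverrideExample {
  public static Configuration buildConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set(HFileOutputFormat.DATABLOCK_ENCODING_OVERRIDE_CONF_KEY,
             DataBlockEncoding.FAST_DIFF.name());  // FAST_DIFF chosen only as an example
    return conf;
  }
}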
HFileOutputFormat
public HFileOutputFormat()
Deprecated.
getRecordWriter
public org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,KeyValue> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
throws IOException,
InterruptedException
Deprecated.
Specified by:
getRecordWriter in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,KeyValue>
Throws:
IOException
InterruptedException
configureIncrementalLoad
public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
HTable table)
throws IOException
Deprecated.
Configure a MapReduce Job to perform an incremental load into the given table. This method:
- Inspects the table to configure a total order partitioner
- Uploads the partitions file to the cluster and adds it to the DistributedCache
- Sets the number of reduce tasks to match the current number of regions
- Sets the output key/value class to match HFileOutputFormat's requirements
- Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or PutSortReducer)
The user should be sure to set the map output value class to either KeyValue or Put before running this function.
Throws:
IOException
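A hedged end-to-end driver sketch for the workflow described above; the table name "my_table", the argument-derived input and output paths, and the TextToKeyValueMapper from the earlier sketch are assumptions for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hfile-bulk-load");
    job.setJarByClass(BulkLoadDriver.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setMapperClass(TextToKeyValueMapper.class); // hypothetical mapper from the earlier sketch
    // The map output value class must be KeyValue (or Put) before calling
    // configureIncrementalLoad, so it can pick the matching sort reducer.
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));     // input path is an assumption
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // generated HFiles land here

    // Wires in HFileOutputFormat, a total order partitioner keyed on the
    // table's region boundaries, one reduce task per region, and KeyValueSortReducer.
    HTable table = new HTable(conf, "my_table");              // "my_table" is a placeholder
    try {
      HFileOutputFormat.configureIncrementalLoad(job, table);
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    } finally {
      table.close();
    }
  }
}

After the job completes, the HFiles under the output path are typically handed to the completebulkload tool (LoadIncrementalHFiles) to move them into the table's regions.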
Copyright © 2007-2016 The Apache Software Foundation. All Rights Reserved.