org.apache.hadoop.hbase.mapreduce
Class HFileOutputFormat2

java.lang.Object
  extended by org.apache.hadoop.mapreduce.OutputFormat<K,V>
      extended by org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,Cell>
          extended by org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2

@InterfaceAudience.Public
@InterfaceStability.Evolving
public class HFileOutputFormat2
extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,Cell>

Writes HFiles. Passed Cells must arrive in order. Writes current time as the sequence id for the file. Sets the major compacted attribute on created HFiles. Calling write(null,null) will forcibly roll all HFiles being written.

Using this class as part of a MapReduce job is best done using configureIncrementalLoad(Job, HTable).
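A typical driver might look like the sketch below. This is not part of the API documentation, only an illustration of wiring configureIncrementalLoad into a job; MyCellMapper, the input/output paths, and the table name argument are hypothetical placeholders. The single-argument HTable constructor shown here matches the client API of this release line but is deprecated in later versions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hfile-bulk-load");
    job.setJarByClass(BulkLoadDriver.class);

    // MyCellMapper is a hypothetical mapper that emits
    // ImmutableBytesWritable row keys and Put values.
    job.setMapperClass(MyCellMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Configures the total order partitioner, reducer, and output
    // format so generated HFiles align with the table's regions.
    HTable table = new HTable(conf, args[2]);
    HFileOutputFormat2.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

After the job completes, the HFiles under the output path can be handed to LoadIncrementalHFiles for the actual bulk load.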


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.Counter
 
Field Summary
static String DATABLOCK_ENCODING_OVERRIDE_CONF_KEY
           
static String LOCALITY_SENSITIVE_CONF_KEY
          Keep locality while generating HFiles for bulkload.
 
Fields inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
BASE_OUTPUT_NAME, COMPRESS, COMPRESS_CODEC, COMPRESS_TYPE, OUTDIR, PART
 
Constructor Summary
HFileOutputFormat2()
           
 
Method Summary
static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, HTable table)
          Configure a MapReduce Job to perform an incremental load into the given table.
static void configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job, HTable table)
           
 org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
           
 
Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

DATABLOCK_ENCODING_OVERRIDE_CONF_KEY

public static final String DATABLOCK_ENCODING_OVERRIDE_CONF_KEY
See Also:
Constant Field Values

LOCALITY_SENSITIVE_CONF_KEY

public static final String LOCALITY_SENSITIVE_CONF_KEY
Keep locality while generating HFiles for bulkload. See HBASE-12596

See Also:
Constant Field Values
Constructor Detail

HFileOutputFormat2

public HFileOutputFormat2()
Method Detail

getRecordWriter

public org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Cell> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
                                                                                      throws IOException,
                                                                                             InterruptedException
Specified by:
getRecordWriter in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<ImmutableBytesWritable,Cell>
Throws:
IOException
InterruptedException

configureIncrementalLoad

public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
                                            HTable table)
                                     throws IOException
Configure a MapReduce Job to perform an incremental load into the given table. The user should be sure to set the map output value class to either KeyValue or Put before running this function.

Throws:
IOException
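The map output value class requirement above means the mapper itself must emit KeyValue or Put values. The following hedged sketch (not from this API's documentation) shows one hypothetical mapper satisfying that contract; the input format, column family "cf", and qualifier "col" are illustrative assumptions.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: parses "rowkey,value" text lines and emits
// Put values, matching the map output value class requirement.
public class CsvToPutMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
  @Override
  protected void map(LongWritable key, Text line, Context context)
      throws IOException, InterruptedException {
    String[] fields = line.toString().split(",", 2);
    byte[] row = Bytes.toBytes(fields[0]);
    Put put = new Put(row);
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"),
        Bytes.toBytes(fields[1]));
    context.write(new ImmutableBytesWritable(row), put);
  }
}
```

Rows must sort correctly for the HFile writer, which is why configureIncrementalLoad installs a total order partitioner and a sorting reducer on the job.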

configureIncrementalLoadMap

public static void configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job,
                                               HTable table)
                                        throws IOException
Throws:
IOException


Copyright © 2007-2016 The Apache Software Foundation. All Rights Reserved.