@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class OrcSerDe extends Object implements Serializable, Cloneable, StructuredPojo
A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
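The following is a minimal, illustrative sketch (not part of the official reference) of configuring an OrcSerDe with the fluent with* methods documented below. It assumes the standard com.amazonaws.services.kinesisfirehose.model package of the AWS SDK for Java v1; the surrounding delivery-stream and output-format wiring is omitted.

```java
import com.amazonaws.services.kinesisfirehose.model.OrcCompression;
import com.amazonaws.services.kinesisfirehose.model.OrcSerDe;

public class OrcSerDeExample {
    public static void main(String[] args) {
        // Configure the ORC serializer; values shown are the documented defaults,
        // except for compression, which is switched from SNAPPY to ZLIB.
        OrcSerDe orcSerDe = new OrcSerDe()
                .withCompression(OrcCompression.ZLIB)
                .withStripeSizeBytes(64 * 1024 * 1024)      // 64 MiB stripes
                .withBlockSizeBytes(256 * 1024 * 1024)      // 256 MiB HDFS blocks
                .withEnablePadding(true)
                .withPaddingTolerance(0.05)
                .withBloomFilterFalsePositiveProbability(0.05);

        System.out.println(orcSerDe);   // toString() prints the structured POJO
    }
}
```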
Constructor and Description |
---|
OrcSerDe() |
Modifier and Type | Method and Description |
---|---|
OrcSerDe | clone() |
boolean | equals(Object obj) |
Integer | getBlockSizeBytes() - The Hadoop Distributed File System (HDFS) block size. |
List<String> | getBloomFilterColumns() - The column names for which you want Kinesis Data Firehose to create bloom filters. |
Double | getBloomFilterFalsePositiveProbability() - The Bloom filter false positive probability (FPP). |
String | getCompression() - The compression code to use over data blocks. |
Double | getDictionaryKeyThreshold() - Represents the fraction of the total number of non-null rows. |
Boolean | getEnablePadding() - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. |
String | getFormatVersion() - The version of the file to write. |
Double | getPaddingTolerance() - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. |
Integer | getRowIndexStride() - The number of rows between index entries. |
Integer | getStripeSizeBytes() - The number of bytes in each stripe. |
int | hashCode() |
Boolean | isEnablePadding() - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. |
void | marshall(ProtocolMarshaller protocolMarshaller) - Marshalls this structured data using the given ProtocolMarshaller. |
void | setBlockSizeBytes(Integer blockSizeBytes) - The Hadoop Distributed File System (HDFS) block size. |
void | setBloomFilterColumns(Collection<String> bloomFilterColumns) - The column names for which you want Kinesis Data Firehose to create bloom filters. |
void | setBloomFilterFalsePositiveProbability(Double bloomFilterFalsePositiveProbability) - The Bloom filter false positive probability (FPP). |
void | setCompression(String compression) - The compression code to use over data blocks. |
void | setDictionaryKeyThreshold(Double dictionaryKeyThreshold) - Represents the fraction of the total number of non-null rows. |
void | setEnablePadding(Boolean enablePadding) - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. |
void | setFormatVersion(String formatVersion) - The version of the file to write. |
void | setPaddingTolerance(Double paddingTolerance) - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. |
void | setRowIndexStride(Integer rowIndexStride) - The number of rows between index entries. |
void | setStripeSizeBytes(Integer stripeSizeBytes) - The number of bytes in each stripe. |
String | toString() - Returns a string representation of this object. |
OrcSerDe | withBlockSizeBytes(Integer blockSizeBytes) - The Hadoop Distributed File System (HDFS) block size. |
OrcSerDe | withBloomFilterColumns(Collection<String> bloomFilterColumns) - The column names for which you want Kinesis Data Firehose to create bloom filters. |
OrcSerDe | withBloomFilterColumns(String... bloomFilterColumns) - The column names for which you want Kinesis Data Firehose to create bloom filters. |
OrcSerDe | withBloomFilterFalsePositiveProbability(Double bloomFilterFalsePositiveProbability) - The Bloom filter false positive probability (FPP). |
OrcSerDe | withCompression(OrcCompression compression) - The compression code to use over data blocks. |
OrcSerDe | withCompression(String compression) - The compression code to use over data blocks. |
OrcSerDe | withDictionaryKeyThreshold(Double dictionaryKeyThreshold) - Represents the fraction of the total number of non-null rows. |
OrcSerDe | withEnablePadding(Boolean enablePadding) - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. |
OrcSerDe | withFormatVersion(OrcFormatVersion formatVersion) - The version of the file to write. |
OrcSerDe | withFormatVersion(String formatVersion) - The version of the file to write. |
OrcSerDe | withPaddingTolerance(Double paddingTolerance) - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. |
OrcSerDe | withRowIndexStride(Integer rowIndexStride) - The number of rows between index entries. |
OrcSerDe | withStripeSizeBytes(Integer stripeSizeBytes) - The number of bytes in each stripe. |
public void setStripeSizeBytes(Integer stripeSizeBytes)
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
Parameters:
stripeSizeBytes - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.

public Integer getStripeSizeBytes()
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.

public OrcSerDe withStripeSizeBytes(Integer stripeSizeBytes)
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
Parameters:
stripeSizeBytes - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
public void setBlockSizeBytes(Integer blockSizeBytes)
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
Parameters:
blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

public Integer getBlockSizeBytes()
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

public OrcSerDe withBlockSizeBytes(Integer blockSizeBytes)
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
Parameters:
blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
public void setRowIndexStride(Integer rowIndexStride)
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
Parameters:
rowIndexStride - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.

public Integer getRowIndexStride()
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.

public OrcSerDe withRowIndexStride(Integer rowIndexStride)
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
Parameters:
rowIndexStride - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
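A short, hedged sketch of the three size-related settings above (stripe size, HDFS block size, and row index stride), using the plain setters; the values shown are the documented defaults.

```java
// Fragment; assumes the OrcSerDe import shown in the earlier example.
OrcSerDe sizes = new OrcSerDe();
sizes.setStripeSizeBytes(64 * 1024 * 1024);    // 64 MiB stripes (minimum is 8 MiB)
sizes.setBlockSizeBytes(256 * 1024 * 1024);    // 256 MiB HDFS blocks, used for padding calculations (minimum is 64 MiB)
sizes.setRowIndexStride(10_000);               // one index entry every 10,000 rows (minimum is 1,000)
```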
public void setEnablePadding(Boolean enablePadding)
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Parameters:
enablePadding - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.

public Boolean getEnablePadding()
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Returns:
true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.

public OrcSerDe withEnablePadding(Boolean enablePadding)
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Parameters:
enablePadding - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.

public Boolean isEnablePadding()
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Returns:
true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
public void setPaddingTolerance(Double paddingTolerance)
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
Parameters:
paddingTolerance - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.

public Double getPaddingTolerance()
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.

public OrcSerDe withPaddingTolerance(Double paddingTolerance)
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
Parameters:
paddingTolerance - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
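A sketch of how the two padding settings above interact; as documented, the padding tolerance is ignored unless EnablePadding is true. The values are illustrative.

```java
// Fragment; assumes the OrcSerDe import shown in the earlier example.
OrcSerDe padded = new OrcSerDe()
        .withEnablePadding(true)        // default is false
        .withPaddingTolerance(0.05);    // 5% of a 64 MiB stripe, i.e. at most 3.2 MiB of padding per 256 MiB block
```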
public void setCompression(String compression)
The compression code to use over data blocks. The default is SNAPPY.
Parameters:
compression - The compression code to use over data blocks. The default is SNAPPY.
See Also:
OrcCompression

public String getCompression()
The compression code to use over data blocks. The default is SNAPPY.
Returns:
The compression code to use over data blocks. The default is SNAPPY.
See Also:
OrcCompression

public OrcSerDe withCompression(String compression)
The compression code to use over data blocks. The default is SNAPPY.
Parameters:
compression - The compression code to use over data blocks. The default is SNAPPY.
See Also:
OrcCompression

public OrcSerDe withCompression(OrcCompression compression)
The compression code to use over data blocks. The default is SNAPPY.
Parameters:
compression - The compression code to use over data blocks. The default is SNAPPY.
See Also:
OrcCompression
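A sketch showing the two equivalent ways to pick the compression codec: the type-safe OrcCompression enum overload or the raw string accepted by the String overloads.

```java
// Fragment; assumes the OrcCompression import shown in the earlier example.
OrcSerDe viaEnum   = new OrcSerDe().withCompression(OrcCompression.SNAPPY);
OrcSerDe viaString = new OrcSerDe().withCompression("ZLIB");
viaString.setCompression("NONE");   // the plain setter accepts only the String form
```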
public List<String> getBloomFilterColumns()
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
Returns:
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.

public void setBloomFilterColumns(Collection<String> bloomFilterColumns)
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
Parameters:
bloomFilterColumns - The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.

public OrcSerDe withBloomFilterColumns(String... bloomFilterColumns)
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
NOTE: This method appends the values to the existing list (if any). Use setBloomFilterColumns(java.util.Collection) or withBloomFilterColumns(java.util.Collection) if you want to override the existing values.
Parameters:
bloomFilterColumns - The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.

public OrcSerDe withBloomFilterColumns(Collection<String> bloomFilterColumns)
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
Parameters:
bloomFilterColumns - The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
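A sketch of the append-versus-replace behavior noted above for the bloom filter column methods; the column names are hypothetical.

```java
// Fragment; requires java.util.Arrays plus the OrcSerDe import shown earlier.
// The varargs overload appends, the Collection overloads replace.
OrcSerDe bloom = new OrcSerDe()
        .withBloomFilterColumns("customer_id")          // list is now [customer_id]
        .withBloomFilterColumns("order_id");            // appends: [customer_id, order_id]
bloom.setBloomFilterColumns(Arrays.asList("ticker"));   // replaces the list: [ticker]
```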
public void setBloomFilterFalsePositiveProbability(Double bloomFilterFalsePositiveProbability)
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
Parameters:
bloomFilterFalsePositiveProbability - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.

public Double getBloomFilterFalsePositiveProbability()
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.

public OrcSerDe withBloomFilterFalsePositiveProbability(Double bloomFilterFalsePositiveProbability)
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
Parameters:
bloomFilterFalsePositiveProbability - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
public void setDictionaryKeyThreshold(Double dictionaryKeyThreshold)
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
Parameters:
dictionaryKeyThreshold - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.

public Double getDictionaryKeyThreshold()
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.

public OrcSerDe withDictionaryKeyThreshold(Double dictionaryKeyThreshold)
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
Parameters:
dictionaryKeyThreshold - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
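A brief sketch of the dictionary-encoding threshold; per the description above, a value of 1 forces dictionary encoding, and the fractional value shown is purely illustrative.

```java
// Fragment; illustrative values only, assuming the OrcSerDe import shown earlier.
OrcSerDe dict = new OrcSerDe().withDictionaryKeyThreshold(1.0);   // always use dictionary encoding
dict.setDictionaryKeyThreshold(0.8);                              // illustrative fractional threshold
```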
public void setFormatVersion(String formatVersion)
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Parameters:
formatVersion - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
See Also:
OrcFormatVersion

public String getFormatVersion()
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Returns:
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
See Also:
OrcFormatVersion

public OrcSerDe withFormatVersion(String formatVersion)
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Parameters:
formatVersion - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
See Also:
OrcFormatVersion

public OrcSerDe withFormatVersion(OrcFormatVersion formatVersion)
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Parameters:
formatVersion - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
See Also:
OrcFormatVersion
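A sketch of selecting the ORC file version with either overload, using the enum constants listed above.

```java
// Fragment; assumes an OrcFormatVersion import from the same model package.
OrcSerDe v12 = new OrcSerDe().withFormatVersion(OrcFormatVersion.V0_12);   // the default
OrcSerDe v11 = new OrcSerDe().withFormatVersion("V0_11");
```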
public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()

public void marshall(ProtocolMarshaller protocolMarshaller)
Marshalls this structured data using the given ProtocolMarshaller.
Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.