All Classes
Class | Description
AADPrefixVerifier |
|
AesCipher |
|
AesCtrDecryptor |
|
AesCtrEncryptor |
|
AesGcmDecryptor |
|
AesGcmEncryptor |
|
AesMode |
|
BadConfigurationException |
Thrown when the input/output formats are misconfigured
|
BenchmarkCounter |
Encapsulate counter operations, compatible with Hadoop1/2, mapred/mapreduce API
|
BenchmarkCounter.NullCounter |
|
BlockMetaData |
Block metadata stored in the footer and passed in an InputSplit
|
BloomFilterImpl |
|
BloomFilterReader |
|
CleanUtil |
A helper class that uses reflection to clean up DirectBuffer instances.
|
CodecConfig |
|
CodecFactory |
|
CodecFactory.BytesCompressor |
Deprecated.
|
CodecFactory.BytesDecompressor |
Deprecated.
|
ColumnChunkMetaData |
Column meta data for a block stored in the file footer and passed in the InputSplit
|
ColumnChunkPageWriteStore |
|
ColumnChunkProperties |
|
ColumnDecryptionProperties |
This class is only required for setting explicit column decryption keys: to override the
key retriever, or to provide keys when key metadata and/or a key retriever
are not available.
|
ColumnDecryptionProperties.Builder |
|
ColumnEncryptionProperties |
|
ColumnEncryptionProperties.Builder |
|
ColumnIndexValidator |
|
ColumnIndexValidator.Contract |
|
ColumnIndexValidator.ContractViolation |
|
ColumnMasker |
|
ColumnMasker.MaskMode |
|
ColumnPruner |
|
CompressionConverter |
|
CompressionConverter.TransParquetFileReader |
|
ConcatenatingKeyValueMetadataMergeStrategy |
Strategy to concatenate if there are multiple values for a given key in metadata.
|
ConfigurationUtil |
|
Container<T> |
A simple container of objects that you can get and set.
|
ContextUtil |
Utility methods to allow applications to deal with inconsistencies between
MapReduce Context Objects API between hadoop-0.20 and later versions.
|
CounterLoader |
Factory interface for CounterLoaders. Loads a counter by groupName and counterName;
if the configuration flag named counterFlag is set to false, the counter is not loaded.
|
DecryptionKeyRetriever |
Interface for classes retrieving encryption keys using the key metadata.
|
DecryptionPropertiesFactory |
DecryptionPropertiesFactory interface enables transparent activation of Parquet decryption.
|
DelegatingReadSupport<T> |
Helps composing read supports
|
DelegatingWriteSupport<T> |
Helps composing write supports
|
DeprecatedParquetInputFormat<V> |
|
DeprecatedParquetOutputFormat<V> |
|
DictionaryFilter |
Applies filters based on the contents of column dictionaries.
|
EncodingList |
|
EncryptionPropertiesFactory |
EncryptionPropertiesFactory interface enables transparent activation of Parquet encryption.
|
ExampleInputFormat |
Example input format to read Parquet files.
This input format uses a rather inefficient data model, but works independently of higher-level abstractions.
|
ExampleOutputFormat |
An example output format; the schema must be provided up front.
|
ExampleParquetWriter |
An example file writer class.
|
ExampleParquetWriter.Builder |
|
FileDecryptionProperties |
|
FileDecryptionProperties.Builder |
|
FileEncryptionProperties |
|
FileEncryptionProperties.Builder |
|
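As a heavily hedged sketch, FileEncryptionProperties are built from a footer key and attached to a file writer; the key bytes below are a hypothetical placeholder (real keys should come from a KMS or a DecryptionKeyRetriever integration), and the builder calls are assumptions to verify against the parquet crypto package on your classpath:

```java
import org.apache.parquet.crypto.FileEncryptionProperties;

public class EncryptionSetup {
  public static FileEncryptionProperties footerOnly() {
    // Hypothetical 128-bit AES footer key; never hard-code real keys.
    byte[] footerKey = new byte[16];
    // With no per-column properties, all columns are encrypted
    // with the footer key by default.
    return FileEncryptionProperties.builder(footerKey).build();
  }
}
```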
FileKeyMaterialStore |
|
FileKeyUnwrapper |
|
FileKeyWrapper |
|
FileMetaData |
File-level metadata (schema, codec, ...)
|
Footer |
Represents the footer for a given file.
|
GlobalMetaData |
Merged metadata when reading from multiple files.
|
GroupReadSupport |
|
GroupWriteSupport |
|
HadoopCodecs |
|
HadoopFSKeyMaterialStore |
|
HadoopInputFile |
|
HadoopOutputFile |
|
HadoopPositionOutputStream |
|
HadoopReadOptions |
|
HadoopReadOptions.Builder |
|
HadoopStreams |
Convenience methods to get Parquet abstractions for Hadoop data streams.
|
HiddenFileFilter |
A PathFilter that filters out hidden files.
|
ICounter |
Interface for counters in mapred/mapreduce package of hadoop
|
IndexReference |
Reference to an index (OffsetIndex and ColumnIndex) for a row-group containing the offset and length values so the
reader can read the referenced data.
|
InitContext |
Context passed to ReadSupport when initializing for read
|
InternalColumnDecryptionSetup |
|
InternalColumnEncryptionSetup |
|
InternalFileDecryptor |
|
InternalFileEncryptor |
|
KeyAccessDeniedException |
|
KeyMaterial |
KeyMaterial class represents the "key material", keeping the information that allows readers to recover an encryption key (see
description of the KeyMetadata class).
|
KeyMetadata |
Parquet encryption specification defines "key metadata" as an arbitrary byte array, generated by file writers for each encryption key
and passed to the low-level API for storage in the file footer.
|
KeyToolkit |
|
KeyValueMetadataMergeStrategy |
Strategy to merge metadata.
|
KmsClient |
|
LocalWrapKmsClient |
Typically, KMS systems support in-server key wrapping.
|
MapRedCounterAdapter |
Adapt a mapred counter to ICounter
|
MapRedCounterLoader |
Concrete factory for counters in the mapred API;
returns a counter obtained through the mapred API when the corresponding flag is set, otherwise a NullCounter.
|
MapredParquetOutputCommitter |
Adapter for supporting ParquetOutputCommitter in mapred API
|
MapReduceCounterAdapter |
Adapt a mapreduce counter to ICounter
|
MapReduceCounterLoader |
Concrete factory for counters in the mapreduce API;
returns a counter obtained through the mapreduce API when the corresponding flag is set, otherwise a NullCounter.
|
MemoryManager |
Implements a memory manager that keeps a global context of how many Parquet
writers there are and manages the memory between them.
|
ModuleCipherFactory |
|
ModuleCipherFactory.ModuleType |
|
NonBlockedCompressorStream |
CompressorStream class that should be used instead of the default hadoop CompressorStream
object.
|
NonBlockedDecompressorStream |
DecompressorStream class that should be used instead of the default hadoop DecompressorStream
object.
|
ParquetCipher |
|
ParquetCryptoRuntimeException |
Thrown upon encryption or decryption operation problem
|
ParquetFileReader |
Internal implementation of the Parquet file reader as a block container
|
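A minimal sketch of inspecting a file's footer with ParquetFileReader, assuming parquet-hadoop and hadoop-common are on the classpath; the file path is a hypothetical placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class FooterExample {
  public static void main(String[] args) throws Exception {
    Path file = new Path("data.parquet"); // hypothetical input file
    try (ParquetFileReader reader = ParquetFileReader.open(
        HadoopInputFile.fromPath(file, new Configuration()))) {
      // The footer carries file-level and per-row-group (block) metadata.
      ParquetMetadata footer = reader.getFooter();
      footer.getBlocks().forEach(block ->
          System.out.println("row count: " + block.getRowCount()));
    }
  }
}
```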
ParquetFileWriter |
Internal implementation of the Parquet file writer as a block container
|
ParquetFileWriter.Mode |
|
ParquetInputFormat<T> |
The input format to read a Parquet file.
|
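A short sketch of wiring ParquetInputFormat into a MapReduce job, assuming the parquet-hadoop example module is available; GroupReadSupport is used here only as an illustration of a ReadSupport implementation:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.parquet.hadoop.ParquetInputFormat;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class InputSetup {
  public static void configure(Job job) {
    // The ReadSupport tells the input format how to materialize records;
    // GroupReadSupport produces generic Group objects.
    job.setInputFormatClass(ParquetInputFormat.class);
    ParquetInputFormat.setReadSupportClass(job, GroupReadSupport.class);
  }
}
```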
ParquetInputSplit |
Deprecated.
|
ParquetMetadata |
Metadata block stored in the footer of the file;
contains file-level (codec, schema, ...) and block-level (location, columns, record count, ...) metadata.
|
ParquetMetadataConverter |
|
ParquetMetadataConverter.MetadataFilter |
|
ParquetOutputCommitter |
|
ParquetOutputFormat<T> |
OutputFormat to write to a Parquet file.
It requires a WriteSupport to convert the actual records to the underlying format.
|
ParquetOutputFormat.JobSummaryLevel |
|
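A sketch of configuring a job to write Parquet through ParquetOutputFormat with a WriteSupport, assuming the parquet-hadoop example module; the schema, job name, and output path are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.parquet.hadoop.ParquetOutputFormat;
import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.schema.MessageTypeParser;

public class JobSetup {
  public static Job configure(Configuration conf) throws Exception {
    // GroupWriteSupport reads the target schema from the configuration.
    GroupWriteSupport.setSchema(MessageTypeParser.parseMessageType(
        "message example { required int32 id; }"), conf);
    Job job = Job.getInstance(conf, "parquet-write"); // hypothetical job name
    job.setOutputFormatClass(ParquetOutputFormat.class);
    ParquetOutputFormat.setWriteSupportClass(job, GroupWriteSupport.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/out")); // hypothetical path
    return job;
  }
}
```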
ParquetReader<T> |
Read records from a Parquet file.
|
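A minimal sketch of reading records with ParquetReader, assuming parquet-hadoop and its example module are on the classpath; the file path is a hypothetical placeholder:

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class ReadExample {
  public static void main(String[] args) throws Exception {
    Path file = new Path("data.parquet"); // hypothetical input file
    // GroupReadSupport materializes rows as generic Group objects.
    try (ParquetReader<Group> reader =
        ParquetReader.builder(new GroupReadSupport(), file).build()) {
      Group record;
      while ((record = reader.read()) != null) { // null signals end of file
        System.out.println(record);
      }
    }
  }
}
```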
ParquetReader.Builder<T> |
|
ParquetReadOptions |
|
ParquetReadOptions.Builder |
|
ParquetRecordReader<T> |
Reads the records from a block of a Parquet file
|
ParquetRecordWriter<T> |
Writes records to a Parquet file
|
ParquetWriter<T> |
Write records to a Parquet file.
|
ParquetWriter.Builder<T,SELF extends ParquetWriter.Builder<T,SELF>> |
An abstract builder class for ParquetWriter instances.
|
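A sketch of the builder pattern for writing, using ExampleParquetWriter (a concrete ParquetWriter.Builder) under the same classpath assumptions; the schema and output path are hypothetical:

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class WriteExample {
  public static void main(String[] args) throws Exception {
    MessageType schema = MessageTypeParser.parseMessageType(
        "message example { required int32 id; required binary name (UTF8); }");
    Path file = new Path("out.parquet"); // hypothetical output path
    try (ParquetWriter<Group> writer = ExampleParquetWriter.builder(file)
        .withType(schema)
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build()) {
      Group g = new SimpleGroupFactory(schema).newGroup()
          .append("id", 1)
          .append("name", "alice");
      writer.write(g);
    }
  }
}
```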
PrintFooter |
Utility to print footer information
|
PropertiesDrivenCryptoFactory |
|
ReadSupport<T> |
|
ReadSupport.ReadContext |
Information needed to read the file.
|
RowGroupFilter |
|
RowGroupFilter.FilterLevel |
|
SerializationUtil |
Serialization utils copied from:
https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/util/HadoopUtils.java
TODO: Refactor elephant-bird so that we can depend on utils like this without extra baggage.
|
SnappyCodec |
Snappy compression codec for Parquet.
|
SnappyCompressor |
This class is a wrapper around the snappy compressor.
|
SnappyDecompressor |
|
SnappyUtil |
Utilities for SnappyCompressor and SnappyDecompressor.
|
StatisticsFilter |
|
StrictKeyValueMetadataMergeStrategy |
Strategy to throw errors if there are multiple values for a given key in metadata.
|
TagVerificationException |
|
UnmaterializableRecordCounter |
Tracks number of records that cannot be materialized and throws ParquetDecodingException
if the rate of errors crosses a limit.
|
WriteSupport<T> |
|
WriteSupport.FinalizedWriteContext |
Information to be added in the file once all the records have been written
|
WriteSupport.WriteContext |
Information to be persisted in the file.
|
ZstandardCodec |
ZSTD compression codec for Parquet.
|
ZstdCompressorStream |
|
ZstdDecompressorStream |
|