Interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
-
- All Superinterfaces:
org.apache.camel.builder.EndpointConsumerBuilder, org.apache.camel.EndpointConsumerResolver, org.apache.camel.builder.EndpointProducerBuilder, org.apache.camel.EndpointProducerResolver, HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder, HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Enclosing interface:
HdfsEndpointBuilderFactory

public static interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
extends HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder, HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder

Advanced builder for endpoint for the HDFS component.
-
Method Summary

default HdfsEndpointBuilderFactory.HdfsEndpointBuilder
basic()

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
blockSize(long blockSize)
The size of the HDFS blocks.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
blockSize(String blockSize)
The size of the HDFS blocks.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
bufferSize(int bufferSize)
The buffer size used by HDFS.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
bufferSize(String bufferSize)
The buffer size used by HDFS.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
checkIdleInterval(int checkIdleInterval)
How often (in milliseconds) to run the idle checker background task.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
checkIdleInterval(String checkIdleInterval)
How often (in milliseconds) to run the idle checker background task.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
chunkSize(int chunkSize)
When reading a normal file, the file is split into chunks, producing a message per chunk.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
chunkSize(String chunkSize)
When reading a normal file, the file is split into chunks, producing a message per chunk.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
compressionCodec(String compressionCodec)
The compression codec to use.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
compressionCodec(org.apache.camel.component.hdfs.HdfsCompressionCodec compressionCodec)
The compression codec to use.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
compressionType(String compressionType)
The compression type to use (not in use by default).

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
compressionType(org.apache.hadoop.io.SequenceFile.CompressionType compressionType)
The compression type to use (not in use by default).

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
openedSuffix(String openedSuffix)
When a file is opened for reading/writing, the file is renamed with this suffix to avoid reading it during the writing phase.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
readSuffix(String readSuffix)
Once the file has been read, it is renamed with this suffix to avoid reading it again.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
replication(short replication)
The HDFS replication factor.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
replication(String replication)
The HDFS replication factor.

default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder
splitStrategy(String splitStrategy)
In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable.
-
Methods inherited from interface org.apache.camel.builder.EndpointConsumerBuilder
doSetMultiValueProperties, doSetMultiValueProperty, doSetProperty, expr, getRawUri, getUri
-
Methods inherited from interface org.apache.camel.builder.EndpointProducerBuilder
doSetMultiValueProperties, doSetMultiValueProperty, doSetProperty, expr, getRawUri, getUri
-
Methods inherited from interface org.apache.camel.builder.endpoint.dsl.HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
bridgeErrorHandler, bridgeErrorHandler, exceptionHandler, exceptionHandler, exchangePattern, exchangePattern, pollStrategy, pollStrategy
-
Methods inherited from interface org.apache.camel.builder.endpoint.dsl.HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
lazyStartProducer, lazyStartProducer
-
Method Detail
-
basic
default HdfsEndpointBuilderFactory.HdfsEndpointBuilder basic()
- Specified by:
basic in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
basic in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
-
blockSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder blockSize(long blockSize)
The size of the HDFS blocks. The option is a: <code>long</code> type. Default: 67108864 Group: advanced
- Specified by:
blockSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
blockSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
blockSize - the value to set
- Returns:
the dsl builder
-
blockSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder blockSize(String blockSize)
The size of the HDFS blocks. The option will be converted to a <code>long</code> type. Default: 67108864 Group: advanced
- Specified by:
blockSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
blockSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
blockSize - the value to set
- Returns:
the dsl builder
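For orientation, the default of 67108864 bytes is exactly 64 MiB. A one-line plain-Java sketch (the constant name DEFAULT_BLOCK_SIZE is illustrative, not taken from the component) makes the arithmetic explicit:

```java
class BlockSizeDefault {
    // 64 MiB expressed in bytes: 64 * 1024 * 1024 = 67108864.
    // The long literal avoids int overflow concerns for larger sizes.
    static final long DEFAULT_BLOCK_SIZE = 64L * 1024 * 1024;
}
```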
-
bufferSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder bufferSize(int bufferSize)
The buffer size used by HDFS. The option is a: <code>int</code> type. Default: 4096 Group: advanced
- Specified by:
bufferSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
bufferSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
bufferSize - the value to set
- Returns:
the dsl builder
-
bufferSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder bufferSize(String bufferSize)
The buffer size used by HDFS. The option will be converted to a <code>int</code> type. Default: 4096 Group: advanced
- Specified by:
bufferSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
bufferSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
bufferSize - the value to set
- Returns:
the dsl builder
-
checkIdleInterval
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder checkIdleInterval(int checkIdleInterval)
How often (in milliseconds) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE. The option is a: <code>int</code> type. Default: 500 Group: advanced
- Specified by:
checkIdleInterval in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
checkIdleInterval in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
checkIdleInterval - the value to set
- Returns:
the dsl builder
-
checkIdleInterval
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder checkIdleInterval(String checkIdleInterval)
How often (in milliseconds) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE. The option will be converted to a <code>int</code> type. Default: 500 Group: advanced
- Specified by:
checkIdleInterval in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
checkIdleInterval in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
checkIdleInterval - the value to set
- Returns:
the dsl builder
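Conceptually, the IDLE split strategy relies on a background task that fires every checkIdleInterval milliseconds and compares the time of the last write against the idle threshold. The following plain-Java sketch is illustrative only (IdleChecker and its methods are invented names, not Camel's internal code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

// Illustrative sketch of an idle checker; not the component's implementation.
class IdleChecker {
    // Pure decision: has the writer been idle for at least idleMillis?
    static boolean isIdle(long lastWriteMillis, long nowMillis, long idleMillis) {
        return nowMillis - lastWriteMillis >= idleMillis;
    }

    // The background task runs every checkIdleInterval millis (default 500)
    // and invokes rollFile when the idle condition is met.
    static ScheduledExecutorService start(long checkIdleInterval, long idleMillis,
                                          LongSupplier lastWriteMillis, Runnable rollFile) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(() -> {
            if (isIdle(lastWriteMillis.getAsLong(), System.currentTimeMillis(), idleMillis)) {
                rollFile.run();
            }
        }, checkIdleInterval, checkIdleInterval, TimeUnit.MILLISECONDS);
        return scheduler;
    }
}
```

A shorter checkIdleInterval detects idleness sooner at the cost of more frequent wakeups; the decision itself is just the timestamp comparison in isIdle.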
-
chunkSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder chunkSize(int chunkSize)
When reading a normal file, the file is split into chunks, producing a message per chunk. The option is a: <code>int</code> type. Default: 4096 Group: advanced
- Specified by:
chunkSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
chunkSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
chunkSize - the value to set
- Returns:
the dsl builder
-
chunkSize
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder chunkSize(String chunkSize)
When reading a normal file, the file is split into chunks, producing a message per chunk. The option will be converted to a <code>int</code> type. Default: 4096 Group: advanced
- Specified by:
chunkSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
chunkSize in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
chunkSize - the value to set
- Returns:
the dsl builder
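To make the chunking behaviour concrete, here is a minimal plain-Java sketch (illustrative only; Chunker is an invented name, not part of the component) of splitting file content into chunks of at most chunkSize bytes, where each chunk would become one message body:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of per-chunk message production; not the component's code.
class Chunker {
    // Split content into chunks of at most chunkSize bytes.
    // The last chunk may be shorter when the length is not a multiple of chunkSize.
    static List<byte[]> chunk(byte[] content, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < content.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, content.length);
            chunks.add(Arrays.copyOfRange(content, offset, end));
        }
        return chunks;
    }
}
```

With the default chunkSize of 4096, a 10 KiB file would therefore yield three messages: two of 4096 bytes and one of 2048 bytes.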
-
compressionCodec
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder compressionCodec(org.apache.camel.component.hdfs.HdfsCompressionCodec compressionCodec)
The compression codec to use. The option is a: <code>org.apache.camel.component.hdfs.HdfsCompressionCodec</code> type. Default: DEFAULT Group: advanced
- Specified by:
compressionCodec in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
compressionCodec in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
compressionCodec - the value to set
- Returns:
the dsl builder
-
compressionCodec
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder compressionCodec(String compressionCodec)
The compression codec to use. The option will be converted to a <code>org.apache.camel.component.hdfs.HdfsCompressionCodec</code> type. Default: DEFAULT Group: advanced
- Specified by:
compressionCodec in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
compressionCodec in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
compressionCodec - the value to set
- Returns:
the dsl builder
-
compressionType
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder compressionType(org.apache.hadoop.io.SequenceFile.CompressionType compressionType)
The compression type to use (not in use by default). The option is a: <code>org.apache.hadoop.io.SequenceFile.CompressionType</code> type. Default: NONE Group: advanced
- Specified by:
compressionType in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
compressionType in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
compressionType - the value to set
- Returns:
the dsl builder
-
compressionType
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder compressionType(String compressionType)
The compression type to use (not in use by default). The option will be converted to a <code>org.apache.hadoop.io.SequenceFile.CompressionType</code> type. Default: NONE Group: advanced
- Specified by:
compressionType in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
compressionType in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
compressionType - the value to set
- Returns:
the dsl builder
-
openedSuffix
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder openedSuffix(String openedSuffix)
When a file is opened for reading/writing, the file is renamed with this suffix to avoid reading it during the writing phase. The option is a: <code>java.lang.String</code> type. Default: opened Group: advanced
- Specified by:
openedSuffix in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
openedSuffix in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
openedSuffix - the value to set
- Returns:
the dsl builder
-
readSuffix
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder readSuffix(String readSuffix)
Once the file has been read, it is renamed with this suffix to avoid reading it again. The option is a: <code>java.lang.String</code> type. Default: read Group: advanced
- Specified by:
readSuffix in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
readSuffix in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
readSuffix - the value to set
- Returns:
the dsl builder
-
replication
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder replication(short replication)
The HDFS replication factor. The option is a: <code>short</code> type. Default: 3 Group: advanced
- Specified by:
replication in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
replication in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
replication - the value to set
- Returns:
the dsl builder
-
replication
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder replication(String replication)
The HDFS replication factor. The option will be converted to a <code>short</code> type. Default: 3 Group: advanced
- Specified by:
replication in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
replication in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
replication - the value to set
- Returns:
the dsl builder
-
splitStrategy
default HdfsEndpointBuilderFactory.AdvancedHdfsEndpointBuilder splitStrategy(String splitStrategy)
In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: if the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be:
- BYTES: a new file is created, and the old one is closed when the number of written bytes is more than value
- MESSAGES: a new file is created, and the old one is closed when the number of written messages is more than value
- IDLE: a new file is created, and the old one is closed when no writing happened in the last value milliseconds
The option is a: <code>java.lang.String</code> type. Group: advanced
- Specified by:
splitStrategy in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointConsumerBuilder
- Specified by:
splitStrategy in interface HdfsEndpointBuilderFactory.AdvancedHdfsEndpointProducerBuilder
- Parameters:
splitStrategy - the value to set
- Returns:
the dsl builder
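The ST:value,ST:value,... syntax can be sketched as a small parser in plain Java (illustrative only; SplitStrategyParser is an invented name, and the real parsing happens inside the hdfs component):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative parser for the value of the splitStrategy option,
// e.g. "BYTES:1048576,IDLE:60000" -> {BYTES=1048576, IDLE=60000}.
class SplitStrategyParser {
    static Map<String, Long> parse(String splitStrategy) {
        Map<String, Long> conditions = new LinkedHashMap<>();
        for (String part : splitStrategy.split(",")) {
            String[] kv = part.split(":");
            if (kv.length != 2) {
                throw new IllegalArgumentException("Expected ST:value, got: " + part);
            }
            // kv[0] would be one of BYTES, MESSAGES, IDLE; kv[1] is its threshold.
            conditions.put(kv[0].trim(), Long.parseLong(kv[1].trim()));
        }
        return conditions;
    }
}
```

For example, splitStrategy=BYTES:1048576,IDLE:60000 rolls to a new file after 1 MiB has been written or after 60 seconds without a write, whichever condition is met first.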
-