Materialized value of the consumer Source.
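A minimal sketch of using the materialized Control to stop a consumer stream, assuming a `consumerSettings: ConsumerSettings[String, String]` plus an implicit ActorSystem and ExecutionContext are already defined elsewhere:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.{Keep, Sink}

// Keep both the Control and the stream completion future
val (control, streamCompletion) =
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("topic1"))
    .toMat(Sink.foreach(record => println(record.value)))(Keep.both)
    .run()

// Later: stop polling and emitting, then wait for the stream to finish
control.shutdown().flatMap(_ => streamCompletion)
```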
Combine control and a stream completion signal materialized values into one, so that the stream can be stopped in a controlled way without losing commits.
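A sketch of the usual pattern, combining the Control and the completion future of Committer.sink via DrainingControl.apply; `consumerSettings`, `committerSettings` and the `process` function are assumptions defined elsewhere:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}

val control =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("topic1"))
    .mapAsync(1)(msg => process(msg.record).map(_ => msg.committableOffset))
    .toMat(Committer.sink(committerSettings))(DrainingControl.apply)
    .run()

// Stop the source, wait for in-flight commits to finish, then complete the stream
control.drainAndShutdown()
```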
An implementation of Control to be used as an empty value; all methods return a failed future.
Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before being emitted downstream.
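A sketch, assuming a hypothetical business(key, value) function that returns a Future; because the offset is committed before processing, a failure may skip records:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

Consumer
  .atMostOnceSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsync(1)(record => business(record.key, record.value))
  .runWith(Sink.ignore)
```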
The same as #plainPartitionedSource but with support for committing offsets with metadata.
The commitWithMetadataSource makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, what time the commit was made, the timestamp of the record, etc.
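A sketch that stores the record timestamp as commit metadata; the metadata function is just an illustrative choice, and `process`, `consumerSettings` and `committerSettings` are assumptions:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}

val control =
  Consumer
    .commitWithMetadataSource(
      consumerSettings,
      Subscriptions.topics("topic1"),
      record => record.timestamp().toString // metadata stored alongside the committed offset
    )
    .mapAsync(1)(msg => process(msg.record).map(_ => msg.committableOffset))
    .toMat(Committer.sink(committerSettings))(DrainingControl.apply)
    .run()
```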
The same as #plainExternalSource but with offset commit support.
The same as #plainPartitionedManualOffsetSource but with offset commit support.
The same as #plainPartitionedSource but with offset commit support.
The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated. If you commit the offset before processing the message you get "at-most once delivery" semantics, and for that there is an #atMostOnceSource. Compared to auto-commit, this gives exact control over when a message is considered consumed. If you need to store offsets in anything other than Kafka, #plainSource should be used instead of this API.
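A sketch of at-least-once processing where the offset is committed only after a hypothetical business(...) call has completed:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}

val control =
  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("topic1"))
    .mapAsync(10) { msg =>
      business(msg.record.key, msg.record.value) // process first ...
        .map(_ => msg.committableOffset)         // ... then hand the offset to the committer
    }
    .toMat(Committer.sink(committerSettings))(DrainingControl.apply)
    .run()
```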
Special source that can use an external KafkaAsyncConsumer. This is useful when you have a lot of manually assigned topic-partitions and want to keep only one Kafka consumer.
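A sketch that shares one consumer actor between two manually assigned partitions, assuming an ActorSystem named `system` and `consumerSettings: ConsumerSettings[String, String]` are in scope:

```scala
import akka.kafka.{KafkaConsumerActor, Subscriptions}
import akka.kafka.scaladsl.Consumer
import org.apache.kafka.common.TopicPartition

// One consumer actor shared by several sources
val consumerActor = system.actorOf(KafkaConsumerActor.props(consumerSettings))

val partition0 = Consumer.plainExternalSource[String, String](
  consumerActor,
  Subscriptions.assignment(new TopicPartition("topic1", 0))
)
val partition1 = Consumer.plainExternalSource[String, String](
  consumerActor,
  Subscriptions.assignment(new TopicPartition("topic1", 1))
)
```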
The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource but allows the use of an offset store outside of Kafka, while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function will be called to retrieve the offset, followed by a seek to the correct spot in the partition. The onRevoke function gives the consumer a chance to store any uncommitted offsets, and do any other cleanup that is required. It also allows the user access to the onPartitionsRevoked hook, which is useful for cleaning up any partition-specific resources being used by the consumer.
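A sketch with hypothetical loadOffsets and storeProcessedOffsets functions standing in for an external offset store:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

Consumer
  .plainPartitionedManualOffsetSource(
    consumerSettings,
    Subscriptions.topics("topic1"),
    getOffsetsOnAssign = assigned => loadOffsets(assigned), // Future[Map[TopicPartition, Long]]
    onRevoke = revoked => storeProcessedOffsets(revoked)    // persist progress before the partitions are reassigned
  )
  .flatMapMerge(breadth = 16, { case (_, partitionSource) => partitionSource })
  .runWith(Sink.foreach(record => println(record.value)))
```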
The plainPartitionedSource is a way to track automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source will emit tuples with the assigned topic-partition and a corresponding source of ConsumerRecords. When a topic-partition is revoked, the corresponding source completes.
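A sketch that runs one inner stream per assigned partition, assuming an implicit ActorSystem is in scope:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink

Consumer
  .plainPartitionedSource(consumerSettings, Subscriptions.topics("topic1"))
  .mapAsyncUnordered(parallelism = 8) { case (topicPartition, partitionSource) =>
    // The inner source completes when the partition is revoked
    partitionSource.runWith(Sink.foreach(record => println(s"$topicPartition: ${record.value}")))
  }
  .runWith(Sink.ignore)
```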
The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka. It can be used when the offset is stored externally or with auto-commit (note that auto-commit is disabled by default).
The consumer application doesn't need to use Kafka's built-in offset storage and can store offsets in a store of its own choosing. The primary use case for this is allowing the application to store both the offset and the results of the consumption in the same system, so that both the results and offsets are stored atomically. This is not always possible, but when it is, it makes the consumption fully atomic and gives "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
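A sketch of that pattern, where hypothetical loadExternalOffset and saveResultAndOffsetInOneTransaction functions stand in for an external, transactional store:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition

// Resume from the offset persisted in the external store
val startOffset: Long = loadExternalOffset()

Consumer
  .plainSource(
    consumerSettings,
    Subscriptions.assignmentWithOffset(new TopicPartition("topic1", 0) -> startOffset)
  )
  .mapAsync(1) { record =>
    // Result and next offset are written in the same transaction, so replays after a failure are harmless
    saveResultAndOffsetInOneTransaction(process(record), record.offset() + 1)
  }
  .runWith(Sink.ignore)
```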
API MAY CHANGE
This source emits ConsumerRecord together with the offset position as flow context, making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated. It is intended to be used with Akka's [flow with context](https://doc.akka.io/docs/akka/current/stream/operators/Flow/asFlowWithContext.html), Producer.flowWithContext and/or Committer.sinkWithOffsetContext. This variant makes it possible to add additional metadata (in the form of a string) when an offset is committed based on the record. This can be useful (for example) to store information about which node made the commit, what time the commit was made, the timestamp of the record, etc.
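A sketch of the metadata variant, assuming the same hypothetical business(...) function as above and using the record timestamp as commit metadata:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}

val control =
  Consumer
    .sourceWithOffsetContext(
      consumerSettings,
      Subscriptions.topics("topic1"),
      record => record.timestamp().toString // metadata attached to each committed offset
    )
    .mapAsync(1)(record => business(record.key, record.value))
    .toMat(Committer.sinkWithOffsetContext(committerSettings))(DrainingControl.apply)
    .run()
```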
API MAY CHANGE
This source emits ConsumerRecord together with the offset position as flow context, making it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired, as each message will likely be delivered one time but in failure cases could be duplicated. It is intended to be used with Akka's [flow with context](https://doc.akka.io/docs/akka/current/stream/operators/Flow/asFlowWithContext.html), Producer.flowWithContext and/or Committer.sinkWithOffsetContext.
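A sketch pairing sourceWithOffsetContext with Committer.sinkWithOffsetContext; business is a hypothetical processing function returning a Future:

```scala
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}

val control =
  Consumer
    .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("topic1"))
    .mapAsync(1)(record => business(record.key, record.value))
    .toMat(Committer.sinkWithOffsetContext(committerSettings))(DrainingControl.apply)
    .run()
```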
Akka Stream connector for subscribing to Kafka topics.