Attributes
- Companion: object
- Source: KafkaConsumeChunk.scala

Members list

Value members

Concrete methods
Consume from all assigned partitions concurrently, processing the records in Chunks.

Consume from all assigned partitions concurrently, processing the records in Chunks. For each Chunk, the provided processor is called; after it has finished, the offsets for all messages in the chunk are committed.

This method is intended to be used in cases that require at-least-once delivery, where messages have to be processed before offsets are committed. When relying on methods like partitionedStream, records, and similar, you have to correctly implement not only your processing logic but also the mechanism for committing offsets. This can be tricky to do in a correct and efficient way.

Working with Chunks of records has several benefits:
- As a user, you don't have to care about committing offsets correctly; you can focus on implementing your business logic.
- It's very straightforward to batch several messages from a Chunk together, e.g. for efficient writes to persistent storage.
- You can liberally use logic that involves concurrency, filtering, and re-ordering of messages without having to worry about incorrect offset commits.
The processor is a function that takes a Chunk[ConsumerRecord[K, V]] and returns an F[CommitNow]. CommitNow is isomorphic to Unit, but helps convey the intent that processing of a Chunk is done, offsets should be committed, and no important processing should be done afterwards.

The returned value has the type F[Nothing], because it's a never-ending process that doesn't terminate, and therefore doesn't return a result.
Attributes
- See also: CommitNow
- Note:
  - This method does not make any use of Kafka's auto-commit feature; it implements "manual" commits in a way that suits most of the common use cases.
  - You have to first use subscribe or assign the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
- Source: KafkaConsumeChunk.scala
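As a minimal sketch of consumeChunk, assuming fs2-kafka 3.x with cats-effect IO (the bootstrap server, group id, topic name, and the CommitNow import path are placeholders that may differ by version):

```scala
import cats.effect.{IO, IOApp}
import fs2.kafka._
import fs2.kafka.consumer.KafkaConsumeChunk.CommitNow

object ConsumeChunkExample extends IOApp.Simple {

  val settings: ConsumerSettings[IO, String, String] =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // placeholder broker
      .withGroupId("example-group")           // placeholder group id
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  val run: IO[Unit] =
    KafkaConsumer.resource(settings).use { consumer =>
      consumer.subscribeTo("example-topic") >> // placeholder topic
        consumer.consumeChunk { chunk =>
          // Process the whole chunk (e.g. batch-write to storage), then
          // return CommitNow so offsets for the chunk are committed.
          IO.println(s"processing ${chunk.size} records").as(CommitNow)
        }
    }
}
```

Note that the processor only returns CommitNow; all offset bookkeeping happens inside consumeChunk.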
Inherited methods

Alias for partitionedStream.

Consume from all assigned partitions, producing a stream of CommittableConsumerRecords.

Consume from all assigned partitions, producing a stream of CommittableConsumerRecords. Alias for stream.

Attributes
- Inherited from: KafkaConsume
- Source: KafkaConsume.scala
Inherited and Abstract methods

Stream where the elements themselves are Streams which continually request records for a single partition.

Stream where the elements themselves are Streams which continually request records for a single partition. These Streams will have to be processed in parallel, using parJoin or parJoinUnbounded. Note that when using parJoin(n) and n is smaller than the number of currently assigned partitions, there will be assigned partitions which won't be processed. For that reason, prefer parJoinUnbounded, where the actual limit will be the number of assigned partitions.

If you do not want to process all partitions in parallel, then you can use records instead, where records for all partitions are in a single Stream.
Attributes
- Note:
  - You have to first use subscribe or assign the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
- Inherited from: KafkaConsume
- Source: KafkaConsume.scala
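The parJoinUnbounded pattern described above can be sketched as follows, assuming fs2-kafka 3.x and an already-subscribed consumer:

```scala
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

def processAllPartitions(
    consumer: KafkaConsumer[IO, String, String]
): Stream[IO, Unit] =
  consumer.partitionedStream
    .map { partitionStream =>
      // Each inner Stream serves exactly one partition, so processing
      // here preserves per-partition ordering.
      partitionStream.evalMap { committable =>
        IO.println(committable.record.value) >> committable.offset.commit
      }
    }
    // Run all partition streams concurrently; the effective parallelism
    // is bounded by the number of assigned partitions.
    .parJoinUnbounded
```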
Stream where each element contains a Map with all newly assigned partitions.

Stream where each element contains a Map with all newly assigned partitions. Keys of this Map are TopicPartitions, and values are record streams for the particular TopicPartition. These streams will be closed only when a partition is revoked.

With the default assignor, all previous partitions are revoked at once, and a new set of partitions is assigned to a consumer on each rebalance. In this case, each returned Map contains the full partition assignment for the consumer, and all streams from the previous assignment are closed. It means that partitionsMapStream reflects the default assignment process in a streaming manner.

This may not be the case when a custom assignor is configured in the consumer. When using the CooperativeStickyAssignor, for instance, partitions may be revoked individually. In this case, each element in the stream (each Map) will contain only streams for newly assigned partitions. Previously returned streams for partitions that are retained will remain active. Only streams for revoked partitions will be closed.

This is the most generic Stream method. If you don't need such control, consider using the partitionedStream or stream methods. They are both based on partitionsMapStream.
Attributes
- Note:
  - You have to first use subscribe or assign to subscribe the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
- Inherited from: KafkaConsume
- Source: KafkaConsume.scala
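A sketch of consuming via partitionsMapStream, assuming fs2-kafka 3.x: each emitted Map yields one record stream per newly assigned TopicPartition, and we join them all concurrently.

```scala
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

def processAssignments(
    consumer: KafkaConsumer[IO, String, String]
): Stream[IO, Unit] =
  consumer.partitionsMapStream
    .flatMap { assigned =>
      // One inner Stream per newly assigned partition; with the default
      // assignor this Map is the full assignment after each rebalance.
      Stream.emits(assigned.toVector).map { case (topicPartition, records) =>
        records.evalMap { committable =>
          IO.println(s"$topicPartition: ${committable.record.value}") >>
            committable.offset.commit
        }
      }
    }
    .parJoinUnbounded
```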
Stops consuming new messages from Kafka.

Stops consuming new messages from Kafka. This method can be used to implement a graceful shutdown.

This method has a few effects:
- After this call no more data will be fetched from Kafka through the poll method.
- All currently running streams will continue to run until all in-flight messages have been processed. That is, streams will complete once all fetched messages have been processed.

If any of the record stream methods are called after stopConsuming, they will return empty streams.

Calling stopConsuming more than once has no effect.
Attributes
- Inherited from: KafkaConsume
- Source: KafkaConsume.scala
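A graceful shutdown along the lines described above could be sketched like this, assuming fs2-kafka 3.x and a Deferred used as a hypothetical external shutdown signal:

```scala
import cats.effect.{Deferred, IO}
import fs2.Stream
import fs2.kafka._

def consumeUntilShutdown(
    consumer: KafkaConsumer[IO, String, String],
    shutdownSignal: Deferred[IO, Unit]
): Stream[IO, Unit] =
  consumer.records
    .evalMap(c => IO.println(c.record.value) >> c.offset.commit)
    .concurrently {
      // Once the signal fires, no more data is fetched from Kafka; the
      // record stream above drains in-flight messages and then completes
      // on its own, rather than being interrupted mid-processing.
      Stream.eval(shutdownSignal.get >> consumer.stopConsuming)
    }
```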
Alias for partitionedStream.parJoinUnbounded.

Alias for partitionedStream.parJoinUnbounded.

Attributes
- See also: partitionedRecords for more information
- Note:
  - You have to first use subscribe or assign the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
- Inherited from: KafkaConsume
- Source: KafkaConsume.scala
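For comparison, the single-Stream API is the simplest entry point. A sketch assuming fs2-kafka 3.x, batching offset commits with commitBatchWithin (the batch size and interval below are arbitrary example values):

```scala
import scala.concurrent.duration._
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

def consume(consumer: KafkaConsumer[IO, String, String]): Stream[IO, Unit] =
  consumer.stream
    .evalTap(c => IO.println(c.record.value))
    .map(_.offset)
    // Commit after every 500 offsets or every 15 seconds, whichever
    // comes first; commitBatchWithin preserves offset ordering.
    .through(commitBatchWithin(500, 15.seconds))
```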