sealed abstract class KafkaConsumer[F[_], K, V] extends KafkaConsume[F, K, V] with KafkaAssignment[F] with KafkaOffsetsV2[F] with KafkaSubscription[F] with KafkaTopics[F] with KafkaCommit[F] with KafkaMetrics[F] with KafkaConsumerLifecycle[F]
KafkaConsumer represents a consumer of Kafka records, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance. The following top-level streams are provided:
- stream provides a single stream of records, where the order of records is guaranteed per topic-partition.
- partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.
- partitionsMapStream provides a stream where each element contains the current assignment. The current assignment is a Map, where keys are TopicPartitions and values are streams with records for a particular TopicPartition.
For the streams, records are wrapped in CommittableConsumerRecords which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided Pipes, like commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.
While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended, as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate records, between the two streams. If a first stream completes, possibly with an error, there's no guarantee the stream has processed all of the records it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream when the first one finishes, let the KafkaConsumer shut down and create a new one.
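As a minimal sketch of the single-top-level-stream usage described above (the broker address, group id, and topic name are illustrative, not part of the API):

```scala
import cats.effect.{IO, IOApp}
import fs2.kafka._
import scala.concurrent.duration._

object ConsumerExample extends IOApp.Simple {
  val settings: ConsumerSettings[IO, String, String] =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // illustrative broker address
      .withGroupId("example-group")
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  val run: IO[Unit] =
    KafkaConsumer
      .stream(settings)
      .subscribeTo("topic")  // subscribe before using any of the streams
      .records               // single top-level stream of records
      .map(_.offset)         // keep only the committable offset
      .through(commitBatchWithin(500, 15.seconds)) // commit offsets in batches
      .compile
      .drain
}
```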
- Source
- KafkaConsumer.scala
- By Inheritance
- KafkaConsumer
- KafkaConsumerLifecycle
- KafkaMetrics
- KafkaCommit
- KafkaTopics
- KafkaSubscription
- KafkaOffsetsV2
- KafkaOffsets
- KafkaAssignment
- KafkaConsume
- AnyRef
- Any
Abstract Value Members
-
abstract
def
assign(topic: String): F[Unit]
Manually assigns all partitions for the specified topic to the consumer.
- Definition Classes
- KafkaAssignment
-
abstract
def
assign(partitions: NonEmptySet[TopicPartition]): F[Unit]
Manually assigns the specified list of topic partitions to the consumer. This function does not allow for incremental assignment and will replace the previous assignment (if there is one).
Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign and group assignment with subscribe.
If auto-commit is enabled, an async commit (based on the old assignment) will be triggered before the new assignment replaces the old one.
To unassign all partitions, use KafkaConsumer#unsubscribe.
- Definition Classes
- KafkaAssignment
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#assign
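A minimal sketch of manual assignment, here using the topic-plus-partition-numbers overload for brevity (the topic name and partition numbers are illustrative; remember that assign and subscribe cannot be mixed):

```scala
import cats.data.NonEmptySet
import cats.effect.IO
import cats.syntax.all._
import fs2.kafka._

def consumeAssigned(settings: ConsumerSettings[IO, String, String]): IO[Unit] =
  KafkaConsumer.resource(settings).use { consumer =>
    // Replaces any previous assignment; no group management is involved.
    consumer.assign("topic", NonEmptySet.of(0, 1)) >>
      consumer.records.take(100).compile.drain // consume a bounded number of records
  }
```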
-
abstract
def
assignment: F[SortedSet[TopicPartition]]
Returns the set of partitions currently assigned to this consumer.
- Definition Classes
- KafkaAssignment
-
abstract
def
assignmentStream: Stream[F, SortedSet[TopicPartition]]
Stream where the elements are the set of TopicPartitions currently assigned to this consumer. The stream emits whenever a rebalance changes partition assignments.
- Definition Classes
- KafkaAssignment
-
abstract
def
awaitTermination: F[Unit]
Wait for consumer to shut down. Note that awaitTermination is guaranteed to complete after consumer shutdown, even when the consumer is cancelled with terminate.
This method will not initiate shutdown. To initiate shutdown and wait for it to complete, you can use terminate >> awaitTermination.
- Definition Classes
- KafkaConsumerLifecycle
-
abstract
def
beginningOffsets(partitions: Set[TopicPartition], timeout: FiniteDuration): F[Map[TopicPartition, Long]]
Returns the first offset for the specified partitions.
- Definition Classes
- KafkaTopics
-
abstract
def
beginningOffsets(partitions: Set[TopicPartition]): F[Map[TopicPartition, Long]]
Returns the first offset for the specified partitions.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- Definition Classes
- KafkaTopics
-
abstract
def
commitAsync(offsets: Map[TopicPartition, OffsetAndMetadata]): F[Unit]
Commit the specified offsets for the specified list of topics and partitions to Kafka.
The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1. If automatic group management with subscribe is used, then the committed offsets must belong to the currently auto-assigned partitions.
Offsets committed through multiple calls to this API are guaranteed to be sent in the same order as the invocations. Additionally, note that offsets committed through this API are guaranteed to complete before a subsequent call to commitSync (and variants) returns.
Note that the recommended way of committing offsets in fs2-kafka is to use commit on CommittableConsumerRecord, CommittableOffset or CommittableOffsetBatch. commitAsync and commitSync are usually needed only for custom scenarios.
- offsets
A map of offsets by partition with associated metadata.
- Definition Classes
- KafkaCommit
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#commitAsync
-
abstract
def
commitSync(offsets: Map[TopicPartition, OffsetAndMetadata]): F[Unit]
Commit the specified offsets for the specified list of topics and partitions.
This commits offsets to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1. If automatic group management with subscribe is used, then the committed offsets must belong to the currently auto-assigned partitions.
Despite its name, this method is not blocking, but it is based on the blocking org.apache.kafka.clients.consumer.KafkaConsumer#commitSync method.
Note that the recommended way of committing offsets in fs2-kafka is to use commit on CommittableConsumerRecord, CommittableOffset or CommittableOffsetBatch. commitAsync and commitSync are usually needed only for custom scenarios.
- offsets
A map of offsets by partition with associated metadata.
- Definition Classes
- KafkaCommit
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#commitSync
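For the custom scenarios mentioned above, a manual commit might be sketched as follows (the topic, partition, and offset values are illustrative; per the contract above, the committed offset is the next offset to consume):

```scala
import cats.effect.IO
import fs2.kafka.KafkaConsumer
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.common.TopicPartition

def commitProcessed(
  consumer: KafkaConsumer[IO, String, String],
  lastProcessedOffset: Long
): IO[Unit] =
  consumer.commitSync(
    Map(
      // commit lastProcessedMessageOffset + 1, i.e. the next offset to consume
      new TopicPartition("topic", 0) -> new OffsetAndMetadata(lastProcessedOffset + 1)
    )
  )
```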
-
abstract
def
committed(partitions: Set[TopicPartition], timeout: FiniteDuration): F[Map[TopicPartition, OffsetAndMetadata]]
Returns the last committed offsets for the given partitions.
- Definition Classes
- KafkaOffsetsV2
-
abstract
def
committed(partitions: Set[TopicPartition]): F[Map[TopicPartition, OffsetAndMetadata]]
Returns the last committed offsets for the given partitions.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- Definition Classes
- KafkaOffsetsV2
-
abstract
def
endOffsets(partitions: Set[TopicPartition], timeout: FiniteDuration): F[Map[TopicPartition, Long]]
Returns the last offset for the specified partitions.
- Definition Classes
- KafkaTopics
-
abstract
def
endOffsets(partitions: Set[TopicPartition]): F[Map[TopicPartition, Long]]
Returns the last offset for the specified partitions.
Timeout is determined by request.timeout.ms, which is set using ConsumerSettings#withRequestTimeout.
- Definition Classes
- KafkaTopics
-
abstract
def
metrics: F[Map[MetricName, Metric]]
Returns consumer metrics.
- Definition Classes
- KafkaMetrics
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#metrics
-
abstract
def
partitionedStream: Stream[F, Stream[F, CommittableConsumerRecord[F, K, V]]]
Stream where the elements themselves are Streams which continually request records for a single partition. These Streams will have to be processed in parallel, using parJoin or parJoinUnbounded. Note that when using parJoin(n) and n is smaller than the number of currently assigned partitions, there will be assigned partitions which won't be processed. For that reason, prefer parJoinUnbounded, where the actual limit will be the number of assigned partitions.
If you do not want to process all partitions in parallel, then you can use records instead, where records for all partitions are in a single Stream.
- Definition Classes
- KafkaConsume
- Note
You have to first use subscribe or assign the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
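A sketch of per-partition processing with partitionedStream (`process` is a hypothetical effectful handler, not part of the API):

```scala
import cats.effect.IO
import fs2.Stream
import fs2.kafka._

def processPartitions(
  consumer: KafkaConsumer[IO, String, String],
  process: CommittableConsumerRecord[IO, String, String] => IO[Unit]
): Stream[IO, Unit] =
  consumer.partitionedStream
    .map(_.evalMap(process)) // records within one partition are handled in order
    .parJoinUnbounded        // process all assigned partitions in parallel
```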
-
abstract
def
partitionsFor(topic: String, timeout: FiniteDuration): F[List[PartitionInfo]]
Returns the partitions for the specified topic.
- Definition Classes
- KafkaTopics
-
abstract
def
partitionsFor(topic: String): F[List[PartitionInfo]]
Returns the partitions for the specified topic.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- Definition Classes
- KafkaTopics
-
abstract
def
partitionsMapStream: Stream[F, Map[TopicPartition, Stream[F, CommittableConsumerRecord[F, K, V]]]]
Stream where each element contains a Map with all newly assigned partitions. Keys of this Map are TopicPartitions, and values are record streams for the particular TopicPartition. These streams will be closed only when a partition is revoked.
With the default assignor, all previous partitions are revoked at once, and a new set of partitions is assigned to a consumer on each rebalance. In this case, each returned Map contains the full partition assignment for the consumer, and all streams from the previous assignment are closed. This means that partitionsMapStream reflects the default assignment process in a streaming manner.
This may not be the case when a custom assignor is configured in the consumer. When using the CooperativeStickyAssignor, for instance, partitions may be revoked individually. In this case, each element in the stream (each Map) will contain only streams for newly assigned partitions. Previously returned streams for partitions that are retained will remain active. Only streams for revoked partitions will be closed.
This is the most generic Stream method. If you don't need this control, consider using the partitionedStream or stream methods, which are both based on partitionsMapStream.
- Definition Classes
- KafkaConsume
- Note
You have to first use subscribe or assign to subscribe the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
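A sketch of reacting to assignments with partitionsMapStream (`handle` is a hypothetical per-record handler, not part of the API):

```scala
import cats.effect.IO
import fs2.Stream
import fs2.kafka._
import org.apache.kafka.common.TopicPartition

def processAssignments(
  consumer: KafkaConsumer[IO, String, String],
  handle: (TopicPartition, CommittableConsumerRecord[IO, String, String]) => IO[Unit]
): Stream[IO, Unit] =
  consumer.partitionsMapStream.flatMap { assignment =>
    // Each element is the Map of newly assigned partitions.
    Stream
      .emits(assignment.toVector)
      .map { case (tp, partitionStream) =>
        partitionStream.evalMap(handle(tp, _)) // closed when tp is revoked
      }
  }.parJoinUnbounded
```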
-
abstract
def
position(partition: TopicPartition, timeout: FiniteDuration): F[Long]
Returns the offset of the next record that will be fetched.
- Definition Classes
- KafkaOffsets
-
abstract
def
position(partition: TopicPartition): F[Long]
Returns the offset of the next record that will be fetched.
Timeout is determined by default.api.timeout.ms, which is set using ConsumerSettings#withDefaultApiTimeout.
- Definition Classes
- KafkaOffsets
-
abstract
def
seek(partition: TopicPartition, offset: Long): F[Unit]
Overrides the fetch offsets that the consumer will use when reading the next record. If this API is invoked for the same partition more than once, the latest offset will be used. Note that you may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.
- Definition Classes
- KafkaOffsets
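For example, to replay a partition from a known offset before consuming (the topic name, partition number, and bound are illustrative; a sketch assuming manual assignment):

```scala
import cats.effect.IO
import cats.syntax.all._
import fs2.kafka.KafkaConsumer
import org.apache.kafka.common.TopicPartition

def replayFrom(consumer: KafkaConsumer[IO, String, String], offset: Long): IO[Unit] =
  consumer.assign("topic") >>                               // manual assignment, all partitions
    consumer.seek(new TopicPartition("topic", 0), offset) >> // next fetch starts at `offset`
    consumer.records.take(10).compile.drain
```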
-
abstract
def
seekToBeginning[G[_]](partitions: G[TopicPartition])(implicit arg0: Foldable[G]): F[Unit]
Seeks to the first offset for each of the specified partitions. If no partitions are provided, seeks to the first offset for all currently assigned partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- Definition Classes
- KafkaOffsets
-
abstract
def
seekToEnd[G[_]](partitions: G[TopicPartition])(implicit arg0: Foldable[G]): F[Unit]
Seeks to the last offset for each of the specified partitions. If no partitions are provided, seeks to the last offset for all currently assigned partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- Definition Classes
- KafkaOffsets
-
abstract
def
stopConsuming: F[Unit]
Stops consuming new messages from Kafka. This method could be used to implement a graceful shutdown.
This method has a few effects:
1. After this call, no more data will be fetched from Kafka through the poll method.
2. All currently running streams will continue to run until all in-flight messages have been processed. This means that streams will complete once all fetched messages have been processed.
If any of the records methods are called after stopConsuming, they will return empty streams.
Calling stopConsuming more than once has no further effect.
- Definition Classes
- KafkaConsume
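A graceful-shutdown sketch combining stopConsuming with a shutdown signal (the Deferred trigger and topic name are illustrative):

```scala
import cats.effect.{Deferred, IO}
import cats.syntax.all._
import fs2.kafka._

def runUntilShutdown(
  settings: ConsumerSettings[IO, String, String],
  shutdown: Deferred[IO, Unit]
): IO[Unit] =
  KafkaConsumer.resource(settings).use { consumer =>
    val consume =
      consumer.subscribeTo("topic") >>
        consumer.records
          .evalMap(record => IO.println(record.record.value))
          .compile
          .drain

    // stopConsuming lets in-flight records finish, after which
    // the consuming stream above completes on its own.
    val stopper = shutdown.get >> consumer.stopConsuming

    (consume, stopper).parTupled.void
  }
```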
-
abstract
def
stream: Stream[F, CommittableConsumerRecord[F, K, V]]
Alias for partitionedStream.parJoinUnbounded. See partitionedRecords for more information.
- Definition Classes
- KafkaConsume
- Note
You have to first use subscribe or assign the consumer before using this Stream. If you forgot to subscribe, a NotSubscribedException will be raised in the Stream.
-
abstract
def
subscribe(regex: Regex): F[Unit]
Subscribes the consumer to the topics matching the specified Regex. Note that you have to use one of the subscribe functions before you can use any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- regex
the regex to which matching topics should be subscribed
- Definition Classes
- KafkaSubscription
-
abstract
def
subscribe[G[_]](topics: G[String])(implicit arg0: Reducible[G]): F[Unit]
Subscribes the consumer to the specified topics. Note that you have to use one of the subscribe functions to subscribe to one or more topics before using any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- topics
the topics to which the consumer should subscribe
- Definition Classes
- KafkaSubscription
-
abstract
def
terminate: F[Unit]
Whenever terminate is invoked, an attempt will be made to stop the underlying consumer. The terminate operation will not wait for the consumer to shut down. If you also want to wait for the shutdown to complete, you can use terminate >> awaitTermination.
- Definition Classes
- KafkaConsumerLifecycle
-
abstract
def
unsubscribe: F[Unit]
Unsubscribes the consumer from all topics and partitions assigned by subscribe or assign.
- Definition Classes
- KafkaSubscription
Concrete Value Members
-
def
assign(topic: String, partitions: NonEmptySet[Int]): F[Unit]
Manually assigns the specified list of partitions for the specified topic to the consumer. This function does not allow for incremental assignment and will replace the previous assignment (if there is one).
Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign and group assignment with subscribe.
If auto-commit is enabled, an async commit (based on the old assignment) will be triggered before the new assignment replaces the old one.
To unassign all partitions, use KafkaConsumer#unsubscribe.
- Definition Classes
- KafkaAssignment
- See also
org.apache.kafka.clients.consumer.KafkaConsumer#assign
-
final
def
partitionedRecords: Stream[F, Stream[F, CommittableConsumerRecord[F, K, V]]]
Alias for partitionedStream.
- Definition Classes
- KafkaConsume
-
final
def
records: Stream[F, CommittableConsumerRecord[F, K, V]]
Consume from all assigned partitions, producing a stream of CommittableConsumerRecords. Alias for stream.
- Definition Classes
- KafkaConsume
-
def
seekToBeginning: F[Unit]
Seeks to the first offset for each currently assigned partition. This is equivalent to using seekToBeginning with an empty set of partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- Definition Classes
- KafkaOffsets
-
def
seekToEnd: F[Unit]
Seeks to the last offset for each currently assigned partition. This is equivalent to using seekToEnd with an empty set of partitions.
Note that this seek evaluates lazily, and only on the next call to poll or position.
- Definition Classes
- KafkaOffsets
-
def
subscribeTo(firstTopic: String, remainingTopics: String*): F[Unit]
Subscribes the consumer to the specified topics. Note that you have to use one of the subscribe functions to subscribe to one or more topics before using any of the provided Streams, or a NotSubscribedException will be raised in the Streams.
- Definition Classes
- KafkaSubscription