Wraps a Kafka Decoder, provided for convenience, since it can be implicitly fetched from the context.
Configuration for Kafka Consumer.
For the official documentation on the available configuration options, see Consumer Configs on kafka.apache.org.
The group.id setting: a unique string that identifies the consumer group this consumer belongs to.
The zookeeper.connect setting: a list of host/port pairs to use for establishing the initial connection to the Zookeeper cluster.
The consumer.id setting: a unique string that identifies the consumer (will be autogenerated if not set).
The socket.timeout.ms setting: the socket timeout for network requests.
The socket.receive.buffer.bytes setting: the size of the socket receive buffer for network requests.
The fetch.message.max.bytes setting: the maximum amount of data per partition the server will return.
The num.consumer.fetchers setting: the number of fetcher threads to spawn.
The auto.commit.enable setting: if true, the consumer's offset will be periodically committed in the background.
The auto.commit.interval.ms setting: the frequency of auto-commits.
The queued.max.message.chunks setting: the maximum number of message chunks that the consumer may buffer.
The rebalance.max.retries setting: the number of attempts to rebalance the consumer group when a new consumer joins.
The fetch.min.bytes setting: the minimum amount of data the server should return for a fetch request.
The fetch.wait.max.ms setting: the maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
The rebalance.backoff.ms setting: the amount of time to wait before attempting to rebalance the consumer group.
The refresh.leader.backoff.ms setting: the amount of time to wait before trying to determine a new leader for a partition that has lost one.
The auto.offset.reset setting: specifies what to do when there is no initial offset in Kafka or if the current offset does not exist anymore on the server (e.g. because that data has been deleted).
The consumer.timeout.ms setting: specifies the amount of time to wait before throwing an exception when there's nothing to consume.
The exclude.internal.topics setting: whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true, the only way to receive records from an internal topic is subscribing to it.
The partition.assignment.strategy setting: chooses how partitions will be assigned to consumer streams (range or roundrobin). Note that the roundrobin strategy results in a more even load distribution, but will not work when consuming from multiple topics.
The client.id setting: an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
The zookeeper.session.timeout.ms setting: the maximum amount of time to wait for a heartbeat before initiating a rebalance.
The zookeeper.connection.timeout.ms setting: the maximum amount of time the client will wait to establish a connection to ZooKeeper.
The zookeeper.sync.time.ms setting: the maximum lag allowed for ZooKeeper followers.
The offsets.storage setting: controls where offsets are stored (zookeeper or kafka).
The offsets.channel.backoff.ms setting: the backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.
The offsets.channel.socket.timeout.ms setting: the socket timeout when reading responses for offset fetch/commit requests.
The offsets.commit.max.retries setting: the maximum number of retries for committing the offset. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread.
The dual.commit.enabled setting: can be used to dual-commit offsets to ZooKeeper when using kafka as offsets.storage. This is required during migration from ZooKeeper-based offset storage to Kafka-based offset storage.
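As an illustration of how these settings map onto the Scala API, here is a minimal sketch of overriding a few of them on top of the defaults. It assumes KafkaConsumerConfig is a case class exposing a default value loaded from monix/kafka/default.conf, and the field names (groupId, zookeeperConnect) are assumptions derived from the setting names above.

{{{
import monix.kafka.KafkaConsumerConfig

// Sketch: start from the defaults and override only what matters.
// Field names are assumed to mirror the setting names above.
val consumerCfg = KafkaConsumerConfig.default.copy(
  groupId = "my-consumer-group",       // the group.id setting
  zookeeperConnect = "127.0.0.1:2181"  // the zookeeper.connect setting
)
}}}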
Exposes an Observable
that consumes a Kafka stream by
means of a Kafka Consumer client.
In order to get initialized, it needs a configuration. See KafkaConsumerConfig for what is needed, and see monix/kafka/default.conf (in the resource files), which exposes all the default values.
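A sketch of typical usage follows; the KafkaConsumerObservable.apply signature and Monix 3.x method names are assumed, and "my-topic" and the record handling are illustrative.

{{{
import monix.execution.Scheduler.Implicits.global
import monix.kafka.{KafkaConsumerConfig, KafkaConsumerObservable}

val cfg = KafkaConsumerConfig.default.copy(
  groupId = "my-consumer-group"
)

// Builds an Observable of records for the subscribed topic;
// the value decoder is fetched implicitly from the context.
val stream = KafkaConsumerObservable[String, String](cfg, List("my-topic"))

// Consume the first 10 records and print them.
stream
  .take(10)
  .foreachL(record => println(record))
  .runToFuture
}}}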
Wraps the Kafka Producer.
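For reference, sending a single record could look like the following sketch; it assumes the KafkaProducer.apply and send signatures documented by monix-kafka, where send returns a lazy Task.

{{{
import monix.execution.Scheduler.Implicits.global
import monix.kafka.{KafkaProducer, KafkaProducerConfig}

val cfg = KafkaProducerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092")
)

val producer = KafkaProducer[String, String](cfg, global)

// send is lazy: nothing happens until the Task is run.
val task = producer.send("my-topic", "my-message")
task.runToFuture
}}}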
The Kafka Producer config.
For the official documentation on the available configuration options, see Producer Configs on kafka.apache.org.
The bootstrap.servers setting: the list of servers to connect to.
The acks setting: the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.
The buffer.memory setting: the total bytes of memory the producer can use to buffer records waiting to be sent to the server.
The compression.type setting: specifies what compression algorithm to apply to all the data generated by the producer. The default is none (no compression applied).
The retries setting: a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.
The batch.size setting: the producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the default batch size in bytes.
The client.id setting: an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
The linger.ms setting: instructs the producer to buffer records for more efficient batching, up to the maximum batch size or for the maximum lingerTime. If zero, then no buffering will happen; if different from zero, records will be delayed in the absence of load.
The max.request.size setting: the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.
The receive.buffer.bytes setting: the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
The send.buffer.bytes setting: the size of the TCP send buffer (SO_SNDBUF) to use when sending data.
The timeout.ms setting: controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirements the producer has specified with the acks configuration.
The block.on.buffer.full setting: controls whether the producer stops accepting new records (blocks) or throws errors when the memory buffer is exhausted.
The metadata.fetch.timeout.ms setting: the first time data is sent to a topic, metadata about that topic must be fetched. This setting controls the maximum amount of time the producer will block waiting for that metadata fetch to succeed before throwing an exception back to the client.
The metadata.max.age.ms setting: the period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.
The reconnect.backoff.ms setting: the amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the client to the broker.
The retry.backoff.ms setting: the amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
The monix.producer.sink.parallelism setting: indicates how many requests the KafkaProducerSink can execute in parallel.
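Analogous to the consumer side, here is a sketch of overriding producer settings on top of the defaults; the field names (bootstrapServers, clientId) are assumptions derived from the setting names above.

{{{
import monix.kafka.KafkaProducerConfig

// Sketch: defaults come from monix/kafka/default.conf; field
// names are assumed to mirror the setting names above.
val producerCfg = KafkaProducerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092"),  // the bootstrap.servers setting
  clientId = "my-producer"                    // the client.id setting
)
}}}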
A monix.reactive.Consumer
that pushes incoming messages into
a KafkaProducer.
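A sketch of feeding a stream into the sink, assuming the KafkaProducerSink.apply signature from the monix-kafka documentation; the topic name and record contents are illustrative.

{{{
import monix.execution.Scheduler.Implicits.global
import monix.kafka.{KafkaProducerConfig, KafkaProducerSink}
import monix.reactive.Observable
import org.apache.kafka.clients.producer.ProducerRecord

val cfg = KafkaProducerConfig.default.copy(
  bootstrapServers = List("127.0.0.1:9092")
)

// Push a stream of records into Kafka; batches are sent with
// the parallelism given by monix.producer.sink.parallelism.
Observable.fromIterable(1 to 100)
  .map(i => new ProducerRecord("my-topic", "key", i.toString))
  .bufferIntrospective(1024)  // the sink consumes batches of records
  .consumeWith(KafkaProducerSink[String, String](cfg, global))
  .runToFuture
}}}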
Wraps a Kafka Serializer, provided for convenience, since it can be implicitly fetched from the context.
The className is the full package path to the Kafka Serializer. The classType is the java.lang.Class for className. The constructor parameter creates an instance of classType; this is defaulted with a Serializer.Constructor[A] function that creates a new instance using an assumed empty constructor. Supplying this parameter allows for manual provision of the Serializer.
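To make that concrete, here is a sketch of wrapping a custom serializer. MyEvent and MyEventSerializer are hypothetical names made up for illustration, the wrapped interface is assumed to be org.apache.kafka.common.serialization.Serializer, and the Serializer.apply parameter names follow the list above.

{{{
import monix.kafka.Serializer
import org.apache.kafka.common.serialization.{Serializer => KafkaSerializer}

// Hypothetical domain type and its Kafka serializer.
final case class MyEvent(id: Long, payload: String)

class MyEventSerializer extends KafkaSerializer[MyEvent] {
  override def serialize(topic: String, data: MyEvent): Array[Byte] =
    s"${data.id}:${data.payload}".getBytes("UTF-8")
  override def configure(configs: java.util.Map[String, _], isKey: Boolean): Unit = ()
  override def close(): Unit = ()
}

// The wrapper, declared implicitly so it can be fetched from the context.
implicit val myEventSerializer: Serializer[MyEvent] =
  Serializer[MyEvent](
    className = "com.example.MyEventSerializer",
    classType = classOf[MyEventSerializer]
  )
}}}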
Wraps a Kafka Decoder, provided for convenience, since it can be implicitly fetched from the context. The className is the full package path to the Kafka Decoder. The classType is the java.lang.Class for className. The constructor parameter creates an instance of classType; this is defaulted with a Deserializer.Constructor[A] function that creates a new instance using an assumed empty or nullable constructor. Supplying this parameter allows for manual provision of the Decoder.
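And the mirror image for decoding, again a sketch with hypothetical names (MyEvent, MyEventDeserializer); since the text mentions both Decoder and Deserializer.Constructor[A], this assumes the org.apache.kafka.common.serialization.Deserializer interface rather than the 0.8-era kafka.serializer.Decoder.

{{{
import monix.kafka.Deserializer
import org.apache.kafka.common.serialization.{Deserializer => KafkaDeserializer}

final case class MyEvent(id: Long, payload: String)

class MyEventDeserializer extends KafkaDeserializer[MyEvent] {
  override def deserialize(topic: String, data: Array[Byte]): MyEvent = {
    // Inverse of the serializer sketched above: "id:payload".
    val Array(id, payload) = new String(data, "UTF-8").split(":", 2)
    MyEvent(id.toLong, payload)
  }
  override def configure(configs: java.util.Map[String, _], isKey: Boolean): Unit = ()
  override def close(): Unit = ()
}

// Declared implicitly so consumers can fetch it from the context.
implicit val myEventDeserializer: Deserializer[MyEvent] =
  Deserializer[MyEvent](
    className = "com.example.MyEventDeserializer",
    classType = classOf[MyEventDeserializer]
  )
}}}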