Package monix.kafka

package kafka

Type Members

  1. final case class Deserializer[A](className: String, classType: Class[_ <: Decoder[A]], constructor: Constructor[A] = Deserializer.reflectCreate[A]) extends Product with Serializable

    Wraps a Kafka Decoder, provided for convenience, since it can be implicitly fetched from the context.

    className

    is the full package path to the Kafka Decoder

    classType

    is the java.lang.Class for className

    constructor

    creates an instance of classType. This is defaulted with a Deserializer.Constructor[A] function that creates a new instance using an assumed empty or nullable constructor. Supplying this parameter allows for manual provision of the Decoder.
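
    For example, a custom Decoder can be wired in explicitly and then fetched implicitly from scope. A minimal sketch, assuming Kafka 0.8's built-in kafka.serializer.StringDecoder, whose nullable VerifiableProperties constructor matches what the default reflective constructor expects:

      // Sketch: StringDecoder ships with Kafka 0.8.x and extends Decoder[String],
      // so the default Deserializer.reflectCreate constructor can instantiate it.
      implicit val stringDeserializer: Deserializer[String] =
        Deserializer[String](
          className = "kafka.serializer.StringDecoder",
          classType = classOf[kafka.serializer.StringDecoder]
        )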

  2. final case class KafkaConsumerConfig(groupId: String, zookeeperConnect: String, consumerId: String, socketTimeout: FiniteDuration, socketReceiveBufferInBytes: Int, fetchMessageMaxBytes: Int, numConsumerFetchers: Int, autoCommitEnable: Boolean, autoCommitInterval: FiniteDuration, queuedMaxMessageChunks: Int, rebalanceMaxRetries: Int, fetchMinBytes: Int, fetchWaitMaxTime: FiniteDuration, rebalanceBackoffTime: FiniteDuration, refreshLeaderBackoffTime: FiniteDuration, autoOffsetReset: AutoOffsetReset, consumerTimeout: FiniteDuration, excludeInternalTopics: Boolean, partitionAssignmentStrategy: PartitionAssignmentStrategy, clientId: String, zookeeperSessionTimeout: FiniteDuration, zookeeperConnectionTimeout: FiniteDuration, zookeeperSyncTime: FiniteDuration, offsetsStorage: OffsetsStorage, offsetsChannelBackoffTime: FiniteDuration, offsetsChannelSocketTimeout: FiniteDuration, offsetsCommitMaxRetries: Int, dualCommitEnabled: Boolean) extends Product with Serializable

    Configuration for Kafka Consumer.

    For the official documentation on the available configuration options, see Consumer Configs on kafka.apache.org.

    groupId

    is the group.id setting, a unique string that identifies the consumer group this consumer belongs to.

    zookeeperConnect

    is the zookeeper.connect setting, a list of host/port pairs to use for establishing the initial connection to the Zookeeper cluster.

    consumerId

    is the consumer.id setting, a unique string that identifies the consumer (will be autogenerated if not set).

    socketTimeout

    is the socket.timeout.ms setting, the socket timeout for network requests.

    socketReceiveBufferInBytes

    is the socket.receive.buffer.bytes setting, the size of the socket receive buffer for network requests.

    fetchMessageMaxBytes

    is the fetch.message.max.bytes setting, the maximum amount of data per-partition the server will return.

    numConsumerFetchers

    is the num.consumer.fetchers setting, the number of fetcher threads to spawn.

    autoCommitEnable

    is the auto.commit.enable setting. If true the consumer's offset will be periodically committed in the background.

    autoCommitInterval

    is the auto.commit.interval.ms setting, the frequency of autocommits.

    queuedMaxMessageChunks

    is the queued.max.message.chunks setting, the maximum number of message chunks that the consumer may buffer.

    rebalanceMaxRetries

    is the rebalance.max.retries setting, the number of attempts to rebalance the consumer group when a new consumer joins.

    fetchMinBytes

    is the fetch.min.bytes setting, the minimum amount of data the server should return for a fetch request.

    fetchWaitMaxTime

    is the fetch.wait.max.ms setting, the maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

    rebalanceBackoffTime

    is the rebalance.backoff.ms setting, the amount of time to wait before attempting to rebalance the consumer group.

    refreshLeaderBackoffTime

    is the refresh.leader.backoff.ms setting. The amount of time to wait before trying to elect a new leader for a consumer group that has lost one.

    autoOffsetReset

    is the auto.offset.reset setting, specifying what to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted).

    consumerTimeout

    is the consumer.timeout.ms setting, which specifies the amount of time to wait before throwing an exception when there's nothing to consume.

    excludeInternalTopics

    is the exclude.internal.topics setting. Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.

    partitionAssignmentStrategy

    is the partition.assignment.strategy setting, which chooses how partitions will be assigned to consumer streams (range or roundrobin). Note that roundrobin strategy results in a more even load distribution, but will not work when consuming from multiple topics.

    clientId

    is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    zookeeperSessionTimeout

    is the zookeeper.session.timeout.ms setting, the maximum amount of time to wait for a heartbeat before initiating a rebalance.

    zookeeperConnectionTimeout

    is the zookeeper.connection.timeout.ms setting, the maximum amount of time the client will wait to establish a connection to ZooKeeper.

    zookeeperSyncTime

    is the zookeeper.sync.time.ms setting, the maximum lag allowed for ZK followers.

    offsetsStorage

    is the offsets.storage setting, that controls where offsets are stored (zookeeper or kafka).

    offsetsChannelBackoffTime

    is the offsets.channel.backoff.ms setting, the backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.

    offsetsChannelSocketTimeout

    is the offsets.channel.socket.timeout.ms setting. Socket timeout when reading responses for offset fetch/commit requests.

    offsetsCommitMaxRetries

    is the offsets.commit.max.retries setting, the maximum number of retries for committing the offset. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread.

    dualCommitEnabled

    is the dual.commit.enabled setting, which can be used to dual commit offsets to ZooKeeper if using kafka as offsets.storage. This is required during migration from ZooKeeper-based offset storage to Kafka-based offset storage.
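
    With 28 parameters, the config is more easily derived from the defaults than built from scratch. A minimal sketch; the default instance on the companion (mirroring monix/kafka/default.conf) and the AutoOffsetReset cases are assumptions, so check the companion object for what it actually exposes:

      // Hypothetical: assumes the companion exposes a `default` instance
      // that mirrors the bundled monix/kafka/default.conf; the
      // `AutoOffsetReset.Smallest` case is likewise an assumption.
      val consumerCfg: KafkaConsumerConfig =
        KafkaConsumerConfig.default.copy(
          groupId = "my-consumer-group",
          zookeeperConnect = "127.0.0.1:2181",
          autoOffsetReset = AutoOffsetReset.Smallest
        )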

  3. final class KafkaConsumerObservable[K, V] extends Observable[MessageAndMetadata[K, V]]

    Exposes an Observable that consumes a Kafka stream by means of a Kafka Consumer client.

    In order to get initialized, it needs a configuration. See KafkaConsumerConfig for the needed settings, and see monix/kafka/default.conf (in the resource files), which exposes all the default values.
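
    A usage sketch, with consumerCfg being a KafkaConsumerConfig as sketched above; the factory method below is an assumption, so check KafkaConsumerObservable's companion object for the exact signature:

      import monix.execution.Scheduler.Implicits.global

      // Hypothetical builder; verify the companion's factory signature.
      val stream = KafkaConsumerObservable[Array[Byte], Array[Byte]](
        consumerCfg, List("my-topic"))

      // Each emitted element is a MessageAndMetadata[K, V]
      val done = stream
        .take(100)
        .foreach(msg => println(s"offset = ${msg.offset}"))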

  4. trait KafkaProducer[K, V] extends Serializable

    Wraps the Kafka Producer.
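
    A usage sketch, given a producerCfg (see KafkaProducerConfig below); the factory and the send signature are assumptions to be checked against the companion object:

      import monix.execution.Scheduler.Implicits.global

      // Hypothetical factory; `send` is assumed to return a Task that
      // completes once the broker acknowledges the record.
      val producer = KafkaProducer[String, String](producerCfg, global)
      val sendTask = producer.send("my-topic", "some-value")
      sendTask.runAsync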

  5. case class KafkaProducerConfig(bootstrapServers: List[String], acks: Acks, bufferMemoryInBytes: Int, compressionType: CompressionType, retries: Int, batchSizeInBytes: Int, clientId: String, lingerTime: FiniteDuration, maxRequestSizeInBytes: Int, receiveBufferInBytes: Int, sendBufferInBytes: Int, timeout: FiniteDuration, blockOnBufferFull: Boolean, metadataFetchTimeout: FiniteDuration, metadataMaxAge: FiniteDuration, reconnectBackoffTime: FiniteDuration, retryBackoffTime: FiniteDuration, monixSinkParallelism: Int) extends Product with Serializable

    The Kafka Producer config.

    For the official documentation on the available configuration options, see Producer Configs on kafka.apache.org.

    bootstrapServers

    is the bootstrap.servers setting, the list of host/port pairs to use for establishing the initial connection to the Kafka cluster.

    acks

    is the acks setting and represents the number of acknowledgments the producer requires the leader to have received before considering a request complete. See Acks.

    bufferMemoryInBytes

    is the buffer.memory setting and represents the total bytes of memory the producer can use to buffer records waiting to be sent to the server.

    compressionType

    is the compression.type setting, specifying the compression algorithm applied to all data generated by the producer. The default is none (no compression).

    retries

    is the retries setting. A value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

    batchSizeInBytes

    is the batch.size setting. The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This setting specifies the default batch size in bytes.

    clientId

    is the client.id setting, an id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    lingerTime

    is the linger.ms setting, which instructs the producer to buffer records for more efficient batching, up to the maximum batch size or for at most lingerTime. If zero, no buffering happens; if non-zero, records will be delayed in the absence of load.

    maxRequestSizeInBytes

    is the max.request.size setting and represents the maximum size of a request in bytes. This is also effectively a cap on the maximum record size.

    receiveBufferInBytes

    is the receive.buffer.bytes setting, the size of the TCP receive buffer (SO_RCVBUF) to use when reading data.

    sendBufferInBytes

    is the send.buffer.bytes setting, the size of the TCP send buffer (SO_SNDBUF) to use when sending data.

    timeout

    is the timeout.ms setting, a configuration that controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirements the producer has specified with the acks configuration.

    blockOnBufferFull

    is the block.on.buffer.full setting, which controls whether the producer stops accepting new records (blocks) or throws errors when the memory buffer is exhausted.

    metadataFetchTimeout

    is the metadata.fetch.timeout.ms setting, the maximum amount of time to block waiting for a metadata fetch to succeed. Metadata must be fetched the first time data is sent to a topic, in order to know which servers host the topic's partitions.

    metadataMaxAge

    is the metadata.max.age.ms setting. The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    reconnectBackoffTime

    is the reconnect.backoff.ms setting. The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    retryBackoffTime

    is the retry.backoff.ms setting. The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    monixSinkParallelism

    is the monix.producer.sink.parallelism setting indicating how many requests the KafkaProducerSink can execute in parallel.
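
    As with the consumer config, deriving from the defaults is the practical route. A minimal sketch; the default instance on the companion is an assumed helper mirroring the bundled default.conf, so verify against the companion object:

      // Hypothetical: assumes a `default` instance on the companion.
      val producerCfg: KafkaProducerConfig =
        KafkaProducerConfig.default.copy(
          bootstrapServers = List("127.0.0.1:9092"),
          clientId = "my-producer",
          monixSinkParallelism = 4
        )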

  6. final class KafkaProducerSink[K, V] extends Consumer[Seq[ProducerRecord[K, V]], Unit] with StrictLogging with Serializable

    A monix.reactive.Consumer that pushes incoming messages into a KafkaProducer.
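
    A usage sketch; the factory below is an assumption (check the companion object), while bufferTumbling and consumeWith are standard Monix Observable operators, used here to feed batches of ProducerRecord into the sink:

      import monix.execution.Scheduler.Implicits.global
      import monix.reactive.Observable
      import org.apache.kafka.clients.producer.ProducerRecord

      // Hypothetical factory signature.
      val sink = KafkaProducerSink[String, String](producerCfg, global)

      val done = Observable.fromIterable(1 to 1000)
        .map(i => new ProducerRecord[String, String]("my-topic", i.toString))
        .bufferTumbling(100)   // emits Seq[ProducerRecord[K, V]] batches
        .consumeWith(sink)     // Task[Unit]
        .runAsync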

  7. final case class Serializer[A](className: String, classType: Class[_ <: org.apache.kafka.common.serialization.Serializer[A]], constructor: Constructor[A] = ...) extends Product with Serializable

    Wraps a Kafka Serializer, provided for convenience, since it can be implicitly fetched from the context.

    className

    is the full package path to the Kafka Serializer

    classType

    is the java.lang.Class for className

    constructor

    creates an instance of classType. This is defaulted with a Serializer.Constructor[A] function that creates a new instance using an assumed empty constructor. Supplying this parameter allows for manual provision of the Serializer.
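
    For example, wiring in Kafka's built-in StringSerializer, which has the empty constructor the default reflective constructor expects:

      // Sketch: StringSerializer ships with the Kafka clients library and
      // extends org.apache.kafka.common.serialization.Serializer[String].
      implicit val stringSerializer: Serializer[String] =
        Serializer[String](
          className = "org.apache.kafka.common.serialization.StringSerializer",
          classType = classOf[org.apache.kafka.common.serialization.StringSerializer]
        )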

Value Members

  1. object Deserializer extends Serializable

  2. object KafkaConsumerConfig extends Serializable

  3. object KafkaConsumerObservable extends Serializable

  4. object KafkaProducer extends Serializable

  5. object KafkaProducerConfig extends Serializable

  6. object KafkaProducerSink extends Serializable

  7. object Serializer extends Serializable

  8. package config
