Package net.liftmodules.kafkaactors

Type Members

  1. case class CommitOffsets(offsets: Map[TopicPartition, OffsetAndMetadata]) extends InternalKafkaActorMessage with Product with Serializable

    Tells the actor to signal the consumer thread to commit the enclosed offsets.

    This message is essentially a checkpoint for consumption. Once the offsets are committed, the messages before that point won't be reconsumed if the actor crashes and restarts.

  2. sealed trait InternalKafkaActorMessage extends AnyRef

    This is the parent trait for messages that KafkaActors handle internally. It's not possible for user code to intercept any messages that subclass this trait.

  3. abstract class KafkaActor extends LiftActor

    A kind of LiftActor capable of consuming messages from a Kafka topic.

    This actor imposes a few restrictions that normal LiftActors do not. Specifically:

    • You must define your message handling in userMessageHandler instead of messageHandler.
    • You cannot override the processing of any InternalKafkaActorMessage.

    Other than the above, this Actor behaves very similarly to a normal LiftActor. You can send messages directly to it, thus bypassing Kafka, by using its ! or send methods.

    Configuration

    For this actor to work correctly with Kafka, you'll have to provide it with some basic configuration. The required overrides, illustrated in the sketch after this list, are:

    • bootstrapServers: The broker list for your Kafka cluster.
    • groupId: The group ID the actor should consume under. See the Kafka docs for more details.
    • kafkaTopic: The topic the actor should subscribe to.
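
    A minimal sketch of those overrides, assuming userMessageHandler mirrors LiftActor's messageHandler as a partial function over incoming messages; the broker list, group ID, topic name, and Ping message type are made up for illustration:

    import net.liftmodules.kafkaactors.KafkaActor

    // Hypothetical message type for this sketch.
    case class Ping(text: String)

    object ExampleConsumer extends KafkaActor {
      override def bootstrapServers = "localhost:9092" // assumed broker list
      override def groupId = "example-consumer-group"  // assumed group ID
      override def kafkaTopic = "example-topic"        // assumed topic name

      // Application-level handling goes here instead of messageHandler.
      override def userMessageHandler = {
        case Ping(text) =>
          println(s"Got a ping: $text")
      }
    }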

    The Kafka consumer works by polling for a certain number of milliseconds and then returning if no messages are retrieved. We abstract away that event loop behavior, but sometimes applications need to tweak how long the consumer waits in order to optimize performance. To change that, you can also override the following:

    • pollTime: The amount of time, in milliseconds, the consumer should wait for records. Defaults to 500ms.

    If you need to tune more specific settings, you can provide a consumerPropsCustomizer that gets to alter the Properties object before we pass it into the KafkaConsumer constructor. This is what you'll need to implement if you want to provide custom settings for things like authentication, encryption, etc. By default, we provide the bootstrap servers and the group ID, disable auto committing, and provide key and value deserializer implementations.

    Please be careful when overriding settings that were set by the KafkaActor itself.
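
    As a hedged sketch of that kind of tuning, assuming consumerPropsCustomizer is a function from Properties to Properties (its exact signature isn't shown on this page) and using standard Kafka consumer settings:

    import java.util.Properties
    import net.liftmodules.kafkaactors.KafkaActor

    object SecureConsumer extends KafkaActor {
      override def bootstrapServers = "broker-1:9093"
      override def groupId = "secure-consumer-group"
      override def kafkaTopic = "secure-topic"

      // Assumed shape: receive the Properties the actor has already populated
      // and return the Properties handed to the KafkaConsumer constructor.
      override def consumerPropsCustomizer: Properties => Properties = { props =>
        props.put("security.protocol", "SSL")                           // standard Kafka consumer
        props.put("ssl.truststore.location", "/path/to/truststore.jks") // settings, shown purely
        props.put("max.poll.records", "250")                            // for illustration
        props
      }

      override def userMessageHandler = {
        case msg => println(s"Consumed: $msg")
      }
    }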

    Starting consumption

    Once the actor is created, it'll behave like a normal LiftActor until it's told to connect to Kafka and start consuming messages. To do that, your code will need to transmit the StartConsumer message to it like so:

    actor ! StartConsumer

    You can also stop consuming at any time by transmitting StopConsumer, or force an offset commit by transmitting CommitOffsets to the actor if you need to for some reason, though as mentioned below those cases should be rare.
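
    For example, one way to wire this into a Lift application's lifecycle, using the hypothetical ExampleConsumer from the configuration sketch above (LiftRules.unloadHooks is standard Lift, but treat the exact wiring as illustrative):

    import net.liftweb.http.LiftRules
    import net.liftmodules.kafkaactors.{StartConsumer, StopConsumer}

    class Boot {
      def boot(): Unit = {
        // Start pulling messages from Kafka once the application has booted.
        ExampleConsumer ! StartConsumer

        // Ask the consumer to finish up cleanly when the application shuts down.
        LiftRules.unloadHooks.append(() => ExampleConsumer ! StopConsumer)
      }
    }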

    Processing messages

    When messages come from the topic, they will be parsed and extracted to case class objects using lift-json. The messages will then be put in the actor's normal mailbox using ! and be subject to normal actor processing rules. Every time the actor consumes messages, it'll also add a CommitOffsets message onto the end of the message batch.

    Because of the way the actor mailbox works, CommitOffsets won't be processed until all of the messages in that batch have been processed. Thus, if you hit a class of errors that should prevent offsets from being checkpointed to Kafka, you should throw an exception of some sort in your userMessageHandler so things blow up.
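
    A hedged sketch of that failure-handling pattern; the OrderPlaced message type and the persistence helper are made up for illustration:

    import net.liftmodules.kafkaactors.KafkaActor

    // Hypothetical message type delivered on the topic.
    case class OrderPlaced(orderId: String, amountCents: Long)

    object OrderConsumer extends KafkaActor {
      override def bootstrapServers = "localhost:9092"
      override def groupId = "order-consumer-group"
      override def kafkaTopic = "orders"

      override def userMessageHandler = {
        case order: OrderPlaced =>
          // Hypothetical persistence call that reports success or failure.
          if (!saveOrder(order)) {
            // Blow up so the offsets for this batch aren't checkpointed and the
            // message can be reconsumed after a restart.
            throw new RuntimeException(s"Failed to persist order ${order.orderId}")
          }
      }

      private def saveOrder(order: OrderPlaced): Boolean = {
        // Persistence logic elided for the sketch.
        true
      }
    }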

    Sending messages

    KafkaActor works like a normal actor if you use the regular sending methods. However, you may find it useful to send messages to this actor _through_ Kafka even when the actor is running in the local process. To facilitate this, you can send messages through the included ref like so:

    myKafkaActor.ref ! MyMessage()

    This will cause the message to be routed through a Kafka producer and then consumed by the actor using the normal consumption means.

  4. class KafkaActorRef extends SpecializedLiftActor[Any] with Loggable

    A ref to a KafkaActor that will send its messages through Kafka.

    This class conforms to the LiftActor shape so that it can be plugged into anything that would normally take a LiftActor. However, because we've provided a custom implementation of the send method, this actor won't be able to define a working messageHandler.

    By default, this producer is configured to ensure all brokers acknowledge a message and to ensure that requests are properly ordered. It's also configured with 10 retries by default. If you'd like to customize these settings, you can override producerPropsCustomizer to change the Properties instance that we use to configure the producer.
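
    For reference, the described defaults correspond to standard Kafka producer settings along these lines (whether the library uses exactly these keys, and producerPropsCustomizer's exact signature, are not shown on this page, so treat this as an illustration only):

    import java.util.Properties

    val producerProps = new Properties()
    producerProps.put("acks", "all")                                // all in-sync replicas must acknowledge
    producerProps.put("max.in.flight.requests.per.connection", "1") // keep requests ordered
    producerProps.put("retries", "10")                              // retry failed sends up to 10 times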

Value Members

  1. object StartConsumer extends InternalKafkaActorMessage with Product with Serializable

    Instruction for the KafkaActor to start consuming messages from Kafka.

    User code must send this message to newly created Kafka actors to cause them to start consuming from Kafka. Until this message is sent, a KafkaActor is no more than a regular LiftActor. If you want Kafka-consuming behavior to be toggleable, add code paths that do or don't send this message to the relevant actor.
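
    One hedged way to do that toggling with Lift's Props helper; the property name and ExampleConsumer are made up for this sketch:

    import net.liftweb.util.Props
    import net.liftmodules.kafkaactors.StartConsumer

    // Only begin consuming when the (hypothetical) flag is enabled.
    if (Props.getBool("kafka.consumer.enabled", true)) {
      ExampleConsumer ! StartConsumer
    }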

  2. object StopConsumer extends InternalKafkaActorMessage with Product with Serializable

    Instruction for the KafkaActor to stop consuming messages from Kafka.

    We recommend sending this message once your application knows it's going to shut down so that consumption can finish up cleanly.
