KafkaSource can be constructed with the following parameter combinations:

Constructor 1:
- comma-separated string of topics
- Kafka consumer config zookeeper.connect
- an io.gearpump.streaming.transaction.api.OffsetStorageFactory that creates io.gearpump.streaming.transaction.api.OffsetStorage
- a MessageDecoder that decodes Message from raw bytes
- a TimeStampFilter that filters out messages based on timestamp

Constructor 2:
- comma-separated string of topics
- Kafka consumer config zookeeper.connect
- an io.gearpump.streaming.transaction.api.OffsetStorageFactory that creates io.gearpump.streaming.transaction.api.OffsetStorage

Constructor 3:
- comma-separated string of topics
- Kafka consumer config
- an io.gearpump.streaming.transaction.api.OffsetStorageFactory that creates io.gearpump.streaming.transaction.api.OffsetStorage
- a MessageDecoder that decodes Message from raw bytes
- a TimeStampFilter that filters out messages based on timestamp

Constructor 4:
- comma-separated string of topics
- Kafka consumer config
- an io.gearpump.streaming.transaction.api.OffsetStorageFactory that creates io.gearpump.streaming.transaction.api.OffsetStorage
- Kafka source config
- a MessageDecoder that decodes Message from raw bytes
- a TimeStampFilter that filters out messages based on timestamp
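To make the offset storage factory concrete, here is a minimal in-memory sketch. The trait signatures below (append, lookUp, getOffsetStorage) are simplified stand-ins of my own, not the real Gearpump interfaces; the point is only the shape of the contract: the factory hands out one storage per partition, and the storage maps timestamps to offsets so a replay time can be resolved to an offset.

```scala
import scala.collection.mutable

// Simplified stand-ins for io.gearpump.streaming.transaction.api.OffsetStorage
// and OffsetStorageFactory; the real Gearpump traits have different signatures.
trait OffsetStorage {
  def append(offset: Long, timestamp: Long): Unit
  // earliest stored offset whose timestamp is at or after the given time
  def lookUp(timestamp: Long): Option[Long]
}

trait OffsetStorageFactory {
  def getOffsetStorage(name: String): OffsetStorage
}

// In-memory sketch: keeps a sorted timestamp -> offset index and returns the
// first offset whose timestamp is >= the requested replay time.
class InMemoryOffsetStorage extends OffsetStorage {
  private val entries = mutable.TreeMap.empty[Long, Long] // timestamp -> offset
  def append(offset: Long, timestamp: Long): Unit = entries(timestamp) = offset
  def lookUp(timestamp: Long): Option[Long] = {
    val it = entries.iteratorFrom(timestamp)
    if (it.hasNext) Some(it.next()._2) else None
  }
}

// One storage instance per name (e.g. per topic-partition).
class InMemoryOffsetStorageFactory extends OffsetStorageFactory {
  private val storages = mutable.Map.empty[String, OffsetStorage]
  def getOffsetStorage(name: String): OffsetStorage =
    storages.getOrElseUpdate(name, new InMemoryOffsetStorage)
}
```

A durable implementation would persist the same index (e.g. to HBase or HDFS) so the mapping survives failures, but the lookup contract stays the same.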
Internally, KafkaSource uses:
- a fetch thread that fetches messages and puts them on an in-memory queue
- an offset manager that manages offset-to-timestamp storage for each kafka.common.TopicAndPartition
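The round-robin draining done by the fetch thread can be sketched as follows. This is a hypothetical illustration, not Gearpump code: each TopicAndPartition is modeled as a queue of pending messages, and one message is taken from each partition in turn until the batch size is reached or all partitions are empty.

```scala
import scala.collection.mutable

case class TopicAndPartition(topic: String, partition: Int)

// Pull up to `batchSize` messages from the partitions in round-robin order,
// skipping partitions that are currently empty.
def fetchBatch(
    partitions: Map[TopicAndPartition, mutable.Queue[String]],
    batchSize: Int): Vector[String] = {
  val out = Vector.newBuilder[String]
  var fetched = 0
  val order = partitions.keys.toVector.sortBy(tp => (tp.topic, tp.partition))
  var i = 0
  while (fetched < batchSize && partitions.values.exists(_.nonEmpty)) {
    val q = partitions(order(i % order.size))
    if (q.nonEmpty) { out += q.dequeue(); fetched += 1 }
    i += 1
  }
  out.result()
}
```

Round-robin fetching keeps any single hot partition from starving the others, at the cost of interleaving the per-partition ordering in the merged queue.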
KafkaSource is a Kafka source connector that pulls a batch of messages (of size kafka.consumer.emit.batch.size) from multiple Kafka TopicAndPartitions in a round-robin way. It is a TimeReplayableSource, which is able to replay messages given a start time. Each Kafka message is tagged with a timestamp by an io.gearpump.streaming.transaction.api.MessageDecoder, and the (offset, timestamp) mapping is stored to an OffsetStorage. On recovery, the previously stored offset can be retrieved from the OffsetStorage by timestamp, and reading resumes from there.
Each Kafka message is wrapped into a Gearpump Message and further filtered by a TimeStampFilter such that obsolete messages are dropped.
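The recovery path described above can be sketched end to end. This is a simplified illustration under my own assumptions (the log is modeled as an in-memory vector and the helper names are hypothetical): the start time is resolved to the earliest offset whose stored timestamp is at or after it, messages are replayed from that offset, and a final timestamp check plays the role of the TimeStampFilter, dropping messages that are still older than the start time.

```scala
// One record of a partition's log: Kafka offset, decoded timestamp, payload.
case class Msg(offset: Long, timestamp: Long, payload: String)

// Hypothetical recovery sketch: resolve startTime to an offset via the stored
// (offset, timestamp) mapping, replay from that offset, then drop obsolete
// messages (the TimeStampFilter's job, since Kafka offsets are ordered but
// message timestamps need not be).
def replayFrom(log: Vector[Msg], startTime: Long): Vector[Msg] = {
  val startOffset = log.find(_.timestamp >= startTime).map(_.offset)
  startOffset match {
    case None => Vector.empty
    case Some(off) =>
      log.filter(_.offset >= off)          // replay from the resolved offset
         .filter(_.timestamp >= startTime) // filter out remaining obsolete messages
  }
}
```

The final filter matters because offsets and timestamps are not guaranteed to increase together: a message written after the resolved offset can still carry an older timestamp, and replaying it would reprocess data the job has already handled.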