public interface ProcessingContext
Modifier and Type | Method and Description |
---|---|
Map<String,Object> | appConfigs() Returns all the application config properties as key/value pairs. |
Map<String,Object> | appConfigsWithPrefix(String prefix) Return all the application config properties with the given key prefix, as key/value pairs, stripping the prefix. |
String | applicationId() Return the application id. |
void | commit() Request a commit. |
long | currentStreamTimeMs() Return the current stream-time in milliseconds. |
long | currentSystemTimeMs() Return the current system timestamp (also called wall-clock time) in milliseconds. |
<S extends StateStore> S | getStateStore(String name) Get the state store given the store name. |
org.apache.kafka.common.serialization.Serde<?> | keySerde() Return the default key serde. |
StreamsMetrics | metrics() Return the Metrics instance. |
Optional<RecordMetadata> | recordMetadata() Return the metadata of the current record if available. |
Cancellable | schedule(Duration interval, PunctuationType type, Punctuator callback) Schedule a periodic operation for processors. |
File | stateDir() Return the state directory for the partition. |
TaskId | taskId() Return the task id. |
org.apache.kafka.common.serialization.Serde<?> | valueSerde() Return the default value serde. |
String applicationId()
Return the application id.
TaskId taskId()
Return the task id.
Optional<RecordMetadata> recordMetadata()
Return the metadata of the current record if available. Processors may be invoked to process a source record from an input topic, to run a scheduled punctuation (cf. schedule(Duration, PunctuationType, Punctuator)), or because a parent processor called forward(Record).
In the case of a punctuation, there is no source record, so this metadata would be undefined. Note that when a punctuator invokes forward(Record), downstream processors will receive the forwarded record as a regular Processor.process(Record) or FixedKeyProcessor.process(FixedKeyRecord) invocation. In other words, it wouldn't be apparent to downstream processors whether the record being processed came from an input topic or from a punctuation, and therefore whether this metadata is defined. This is why the return type of this method is Optional.
If there is any possibility of punctuators upstream, any access to this field should consider the case of "recordMetadata().isPresent() == false". Of course, it would be safest to always guard this condition.
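As an illustrative sketch of that guard (the processor class, its logging, and the topology around it are assumptions, not part of the API):

```java
import java.util.Optional;

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.processor.api.RecordMetadata;

// Hypothetical processor that only reports source metadata when it is present.
public class AuditProcessor implements Processor<String, String, String, String> {

    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        // The record may have been forwarded by an upstream punctuator, in which
        // case there is no source topic/partition/offset to report.
        final Optional<RecordMetadata> metadata = context.recordMetadata();
        if (metadata.isPresent()) {
            final RecordMetadata m = metadata.get();
            System.out.printf("from %s-%d@%d%n", m.topic(), m.partition(), m.offset());
        } else {
            System.out.println("no source metadata (likely forwarded by a punctuator)");
        }
        context.forward(record);
    }
}
```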
org.apache.kafka.common.serialization.Serde<?> keySerde()
Return the default key serde.
org.apache.kafka.common.serialization.Serde<?> valueSerde()
Return the default value serde.
File stateDir()
Return the state directory for the partition.
StreamsMetrics metrics()
Return the Metrics instance.
<S extends StateStore> S getStateStore(String name)
Get the state store given the store name.
Type Parameters:
S - The type or interface of the store to return
Parameters:
name - The store name
Throws:
ClassCastException - if the return type isn't a type or interface of the actual returned store.
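For example, a hypothetical counting processor might retrieve a typed key-value store in init(); the store name "counts-store" and the surrounding topology are assumed for illustration only:

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;

// Sketch of typed state store access via getStateStore(String).
public class CountingProcessor implements Processor<String, String, String, Long> {

    private KeyValueStore<String, Long> store;
    private ProcessorContext<String, Long> context;

    @Override
    public void init(final ProcessorContext<String, Long> context) {
        this.context = context;
        // The type parameter S is inferred from the assignment target; if it does
        // not match the actual store type, a ClassCastException is thrown.
        this.store = context.getStateStore("counts-store");
    }

    @Override
    public void process(final Record<String, String> record) {
        final Long current = store.get(record.key());
        final long updated = current == null ? 1L : current + 1L;
        store.put(record.key(), updated);
        context.forward(new Record<>(record.key(), updated, record.timestamp()));
    }
}
```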
Cancellable schedule(Duration interval, PunctuationType type, Punctuator callback)
Schedule a periodic operation for processors. A processor may call this method during initialization or processing to schedule a periodic callback, called a punctuation, to Punctuator.punctuate(long).
The type parameter controls what notion of time is used for punctuation:
- PunctuationType.STREAM_TIME: uses "stream time", which is advanced by the processing of messages in accordance with the timestamp as extracted by the TimestampExtractor in use. The first punctuation will be triggered by the first record that is processed. NOTE: stream time is only advanced if messages arrive.
- PunctuationType.WALL_CLOCK_TIME: uses system time (the wall-clock time), which is advanced independent of whether new messages arrive. The first punctuation will be triggered after interval has elapsed. NOTE: this is best effort only, as its granularity is limited by how long an iteration of the processing loop takes to complete.
Punctuations may also be skipped:
- with PunctuationType.STREAM_TIME, when stream time advances by more than interval
- with PunctuationType.WALL_CLOCK_TIME, on GC pauses, a too-short interval, ...
Parameters:
interval - the time interval between punctuations (supported minimum is 1 millisecond)
type - one of PunctuationType.STREAM_TIME, PunctuationType.WALL_CLOCK_TIME
callback - a function consuming timestamps representing the current stream or system time
Throws:
IllegalArgumentException - if the interval is not representable in milliseconds
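A minimal sketch of scheduling a wall-clock punctuation from init(); the one-minute interval and the forwarded "tick" record are illustrative assumptions:

```java
import java.time.Duration;

import org.apache.kafka.streams.processor.Cancellable;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Hypothetical processor that emits a marker record about once per minute.
public class PunctuatingProcessor implements Processor<String, String, String, String> {

    private Cancellable punctuation;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        // Schedule the Punctuator lambda on wall-clock time.
        punctuation = context.schedule(
            Duration.ofMinutes(1),
            PunctuationType.WALL_CLOCK_TIME,
            timestamp -> context.forward(new Record<>("tick", "flush", timestamp))
        );
    }

    @Override
    public void process(final Record<String, String> record) {
        // regular per-record processing would go here
    }

    @Override
    public void close() {
        // Cancel the schedule when the processor is closed.
        punctuation.cancel();
    }
}
```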
void commit()
Request a commit. Note that commit() is only a request for a commit; it does not execute one. Hence, when commit() returns, no commit has been executed yet. However, Kafka Streams will commit as soon as possible, instead of waiting for the next commit.interval.ms to pass.
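As a sketch only, a processor might request a commit after seeing an end-of-batch marker; the marker value and the processor class are purely illustrative assumptions:

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Hypothetical processor that asks for a commit at batch boundaries.
public class CommittingProcessor implements Processor<String, String, String, String> {

    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        context.forward(record);
        if ("END_OF_BATCH".equals(record.value())) {
            // Only a request: Kafka Streams commits as soon as possible afterwards,
            // rather than waiting for the next commit.interval.ms to elapse.
            context.commit();
        }
    }
}
```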
Map<String,Object> appConfigs()
Returns all the application config properties as key/value pairs.
The config properties are defined in the StreamsConfig object and associated with the ProcessorContext. The type of the values is dependent on the type of the property (e.g. the value of DEFAULT_KEY_SERDE_CLASS_CONFIG will be of type Class, even if it was specified as a String to StreamsConfig(Map)).
Map<String,Object> appConfigsWithPrefix(String prefix)
Return all the application config properties with the given key prefix, as key/value pairs, stripping the prefix.
The config properties are defined in the StreamsConfig object and associated with the ProcessorContext.
Parameters:
prefix - the properties prefix
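The following sketch illustrates both methods above; the use of StreamsConfig.CONSUMER_PREFIX and the logging are illustrative choices, and the default key serde entry may be null if no default serde was configured:

```java
import java.util.Map;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.api.ProcessingContext;

// Sketch of reading config values from the ProcessingContext, e.g. inside init().
public final class ConfigLookup {

    private ConfigLookup() { }

    public static void logConfigs(final ProcessingContext context) {
        final Map<String, Object> configs = context.appConfigs();
        // Per the docs above, this value is a Class, not the String it was configured with
        // (it may be null if no default key serde was configured).
        final Class<?> keySerdeClass =
            (Class<?>) configs.get(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG);
        System.out.println("default key serde: " + keySerdeClass);

        // Properties with the "consumer." prefix are returned with the prefix stripped,
        // e.g. "consumer.max.poll.records" appears under the key "max.poll.records".
        final Map<String, Object> consumerOverrides =
            context.appConfigsWithPrefix(StreamsConfig.CONSUMER_PREFIX);
        System.out.println("consumer overrides: " + consumerOverrides);
    }
}
```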
long currentSystemTimeMs()
Return the current system timestamp (also called wall-clock time) in milliseconds.
Note: this method returns the internally cached system timestamp from the Kafka Streams runtime. Thus, it may return a different value than System.currentTimeMillis().
long currentStreamTimeMs()
Return the current stream-time in milliseconds. Stream-time is the maximum observed record timestamp so far (including the currently processed record), i.e., it can be considered a high-watermark. Stream-time is tracked on a per-task basis and is preserved across restarts and during task migration.
Note: this method is not supported for global processors (cf. Topology.addGlobalStore(...) and StreamsBuilder.addGlobalStore(...)), because there is no concept of stream-time for this case. Calling this method in a global processor will result in an UnsupportedOperationException.
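To illustrate the difference between the two time notions, a hypothetical processor could log how far a record's timestamp lags behind stream-time; the one-minute threshold and the logging are assumptions for the example:

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Sketch comparing stream-time and the cached wall-clock time per record.
public class LatenessLoggingProcessor implements Processor<String, String, String, String> {

    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        // Stream-time: high-watermark of observed record timestamps for this task.
        final long streamTime = context.currentStreamTimeMs();
        // System time: the runtime's cached wall-clock timestamp.
        final long systemTime = context.currentSystemTimeMs();

        final long lateness = streamTime - record.timestamp();
        if (lateness > 60_000L) {
            System.out.printf("record is %d ms behind stream-time (wall-clock %d)%n",
                lateness, systemTime);
        }
        context.forward(record);
    }
}
```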