abstract class AkkaStreamletLogic extends StreamletLogic[AkkaStreamletContext]
Provides an entry-point for defining the behavior of an AkkaStreamlet.
Override the run method to implement the specific logic / code that should be executed once the streamlet is deployed as part of a running Cloudflow application.
See RunnableGraphStreamletLogic if you just want to create a RunnableGraph.
Inheritance:
- AkkaStreamletLogic
- StreamletLogic
- Serializable
- AnyRef
- Any
Instance Constructors
- new AkkaStreamletLogic()(implicit context: AkkaStreamletContext)
Abstract Value Members
- abstract def run(): Unit
  This method is called when the streamlet is run. Override this method to define what the specific streamlet logic should do.
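For example, a minimal sketch of a streamlet whose logic overrides run to build and launch a pass-through graph. The Data record and the port definitions are hypothetical illustrations, not part of this API:

```scala
import akka.NotUsed
import akka.stream.scaladsl.RunnableGraph
import cloudflow.akkastream._
import cloudflow.streamlets.StreamletShape
import cloudflow.streamlets.avro._

// Hypothetical Avro-backed record type.
case class Data(id: String, value: Long)

class PassThroughStreamlet extends AkkaStreamlet {
  val in    = AvroInlet[Data]("in")
  val out   = AvroOutlet[Data]("out", _.id)
  val shape = StreamletShape(in).withOutlets(out)

  override def createLogic = new AkkaStreamletLogic() {
    // run() builds the stream; runGraph launches it and ties its
    // lifecycle to the streamlet execution.
    override def run(): Unit = {
      val graph: RunnableGraph[NotUsed] = plainSource(in).to(plainSink(out))
      runGraph(graph)
    }
  }
}
```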
Concrete Value Members
- def clusterSharding(): ClusterSharding
  Helper method to make it easier to start typed Cluster Sharding with a classic actor system.
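A hedged sketch of using this helper inside the streamlet logic to start a shard region and message an entity; the CounterCommand protocol and the counterBehavior factory are hypothetical:

```scala
import akka.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }

sealed trait CounterCommand                      // hypothetical entity protocol
case object Increment extends CounterCommand

val typeKey = EntityTypeKey[CounterCommand]("Counter")

// clusterSharding() bridges the classic ActorSystem to the typed API.
val sharding: ClusterSharding = clusterSharding()
sharding.init(Entity(typeKey) { entityContext =>
  counterBehavior(entityContext.entityId)        // hypothetical Behavior[CounterCommand] factory
})

// Entities are then addressed through entity refs:
sharding.entityRefFor(typeKey, "counter-1") ! Increment
```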
- def committableSink[T]: Sink[(T, Committable), NotUsed]
  Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka. Uses the default CommitterSettings, which are configured through the default configuration in akka.kafka.committer.
- def committableSink[T](committerSettings: CommitterSettings): Sink[(T, Committable), NotUsed]
  Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.
- def committableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, Committable), NotUsed]
  Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.
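For instance, the defaults can be tuned before being handed to the sink; a sketch, assuming the out outlet is defined elsewhere:

```scala
import akka.kafka.CommitterSettings

// Start from the defaults read from akka.kafka.committer and commit
// in larger batches.
val settings: CommitterSettings = defaultCommitterSettings.withMaxBatch(500L)

// Writes each record to `out`, then commits the offsets that travel
// with the records as context.
val sink = committableSink(out, settings)
```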
- final def config: Config
  The full configuration for the AkkaStreamlet, containing all deployment-time configuration parameters on top of the normal configuration as loaded through ActorSystem.settings.config.
- implicit val context: AkkaStreamletContext
  Definition Classes: AkkaStreamletLogic → StreamletLogic
- val defaultCommitterSettings: CommitterSettings
  The akka.kafka.CommitterSettings that have been configured from the default configuration akka.kafka.committer.
- implicit final val executionContext: ExecutionContextExecutor
  The default ExecutionContext of the ActorSystem (the system dispatcher).
- def getCommittableSink[T](): Sink[Pair[T, Committable], NotUsed]
  Java API
- def getCommittableSink[T](committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]
  Java API
- def getCommittableSink[T](outlet: CodecOutlet[T]): Sink[Pair[T, Committable], NotUsed]
  Java API
- def getCommittableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]
  Java API
- final def getConfig(): Config
  Java API
- def getContext(): AkkaStreamletContext
  Definition Classes: AkkaStreamletLogic → StreamletLogic
- def getDefaultCommitterSettings(): CommitterSettings
  Java API
- def getExecutionContext(): ExecutionContextExecutor
  Java API
- final def getMountedPath(volumeMount: VolumeMount): Path
  The path mounted for a VolumeMount request from a streamlet. In a clustered deployment, the mounted path will correspond to the requested mount path in the VolumeMount definition. In a local environment, this path will be replaced by a local folder.
  - volumeMount: the VolumeMount declaration for which we want to obtain the mounted path.
  - returns: the path where the volume is mounted.
  - Exceptions thrown: cloudflow.streamlets.MountedPathUnavailableException in case the path is not available.
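A sketch of declaring a volume mount on the streamlet and resolving it in the logic; the mount name, path, and access mode are illustrative:

```scala
import java.nio.file.Path
import cloudflow.streamlets.{ ReadOnlyMany, VolumeMount }

// Declared on the streamlet and returned from its volumeMounts:
val referenceData = VolumeMount("reference-data", "/mnt/data", ReadOnlyMany)
// override def volumeMounts = Vector(referenceData)

// Inside the logic: resolve where the volume actually ended up,
// which differs between clustered and local (sandbox) runs.
val mountedPath: Path = getMountedPath(referenceData)
```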
- def getPlainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]
  Java API
- def getPlainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition): Source[T, NotUsed]
  Java API
- def getPlainSource[T](inlet: CodecInlet[T]): Source[T, NotUsed]
  Java API
- def getShardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], resetPosition: ResetPosition = Latest, kafkaTimeout: FiniteDuration = 10.seconds): Source[T, Future[NotUsed]]
  Java API
  Annotations: @ApiMayChange()
- def getShardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration): Source[T, Future[NotUsed]]
  Java API
  Annotations: @ApiMayChange()
- def getShardedSourceWithCommittableContext[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration = 10.seconds): SourceWithContext[T, Committable, Future[NotUsed]]
  Java API
  Annotations: @ApiMayChange()
- final def getSinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]
  Java API
- def getSourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithContext[T, Committable, _]
  Java API
- final def getStreamletConfig(): Config
  Java API
- final def getStreamletRef(): String
  Java API
- def getSystem(): ActorSystem
  Java API
- def plainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]
  Creates a sink for publishing T records to the outlet. The records are partitioned according to the partitioner of the outlet. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.
- def plainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition = Latest): Source[T, NotUsed]
  The plainSource emits T records (as received through the inlet). It has no support for committing offsets to Kafka. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.
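For example, to replay an inlet from the beginning rather than from the latest offset; a sketch, assuming in and out ports as above and cloudflow.akkastream._ in scope:

```scala
// Earliest replays all retained records; Latest (the default) only
// picks up records that arrive after the streamlet starts.
runGraph(plainSource(in, resetPosition = Earliest).to(plainSink(out)))
```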
- final def runGraph[T](graph: RunnableGraph[T]): T
  Java API. Launches the execution of the graph.
- final def runGraph[T](graph: RunnableGraph[T]): T
  Launches the execution of the graph.
- def shardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], resetPosition: ResetPosition = Latest, kafkaTimeout: FiniteDuration = 10.seconds): Source[T, Future[NotUsed]]
  This source is designed to function the same as plainSource while also leveraging Akka Kafka Cluster Sharding for stateful streaming. It emits T records (as received through the inlet) and has no support for committing offsets to Kafka. It is required to use this source with Akka Cluster. This source will start up Akka Cluster Sharding using the supplied shardEntity and configure the Kafka external shard strategy to co-locate Kafka partition consumption with Akka Cluster shards.
  - inlet: the inlet to consume messages from. The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.
  - shardEntity: is used to specify the settings for the started shard region.
  - kafkaTimeout: is used to specify the amount of time the message extractor will wait for a response from Kafka.
  Annotations: @ApiMayChange()
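A hedged sketch of wiring this source, reusing the hypothetical ports from above; the DataCommand protocol and the dataBehavior factory are illustrative:

```scala
import akka.cluster.sharding.typed.scaladsl.{ Entity, EntityTypeKey }

sealed trait DataCommand                               // hypothetical entity protocol
val typeKey = EntityTypeKey[DataCommand]("DataEntity")

// Describes the shard region that the source starts on materialization.
val entity = Entity(typeKey)(ctx => dataBehavior(ctx.entityId)) // dataBehavior is hypothetical

override def run(): Unit = {
  runGraph(shardedPlainSource(in, entity).to(plainSink(out)))
}
```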
- def shardedSourceWithCommittableContext[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration = 10.seconds): SourceWithContext[T, CommittableOffset, Future[NotUsed]]
  This source is designed to function the same as sourceWithCommittableContext while also leveraging Akka Kafka Cluster Sharding for stateful streaming. It emits T records together with the committable context, making it possible to commit offset positions to Kafka using committableSink(outlet: CodecOutlet[T]). It is required to use this source with Akka Cluster. This source will start up Akka Cluster Sharding using the supplied shardEntity and configure the Kafka external shard strategy to co-locate Kafka partition consumption with Akka Cluster shards.
  - inlet: the inlet to consume messages from. The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.
  - shardEntity: is used to specify the settings for the started shard region.
  - kafkaTimeout: is used to specify the amount of time the message extractor will wait for a response from Kafka.
  Annotations: @ApiMayChange()
- final def signalReady(): Boolean
  Signals that the streamlet is ready to process data. signalReady completes the cloudflow.streamlets.StreamletExecution#ready future. When a streamlet is run using the testkit, a cloudflow.streamlets.StreamletExecution is returned. cloudflow.streamlets.StreamletExecution#ready can be used, for instance, to wait for a server streamlet to signal that it is ready to accept requests.
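A sketch of a run implementation that signals readiness once its graph has been launched:

```scala
override def run(): Unit = {
  runGraph(plainSource(in).to(plainSink(out)))
  // Completes StreamletExecution#ready, so e.g. a test waiting on the
  // ready future can start exercising the streamlet.
  signalReady()
}
```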
- final def sinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]
  Creates a SinkRef to write to, for the specified CodecOutlet. The records are partitioned according to the partitioner of the outlet.
  - outlet: the specified CodecOutlet
  - returns: the WritableSinkRef created
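A sketch of pushing individual records through the ref, for instance from an HTTP route handler, reusing the hypothetical Data type from above; this assumes WritableSinkRef exposes a write method returning a Future of the written value:

```scala
import scala.concurrent.Future

// Obtain a writable reference to the outlet once...
val ref: WritableSinkRef[Data] = sinkRef(out)

// ...then write records to it imperatively, outside of a running graph.
val written: Future[Data] = ref.write(Data("id-1", 42L))
```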
- def sourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithCommittableContext[T]
  This source emits T records together with the committable context, thus making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired: each message will likely be delivered one time, but in failure cases messages can be duplicated. It is intended to be used with committableSink(outlet: CodecOutlet[T]), which commits the offset positions that accompany the records that are read from this source after the records have been written to the specified outlet. The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.
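The canonical at-least-once pairing with committableSink, sketched with the hypothetical Data ports from above:

```scala
override def run(): Unit = {
  val graph =
    sourceWithCommittableContext(in)                   // emits records with a Committable context
      .map(data => data.copy(value = data.value + 1))  // transform; the context travels along
      .to(committableSink(out))                        // write to out, then commit the offsets
  runGraph(graph)
}
```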
- final def streamletConfig: Config
  The subset of configuration specific to a single named instance of a streamlet. A cloudflow.streamlets.Streamlet can specify the set of environment- and instance-specific configuration keys it will use during runtime through cloudflow.streamlets.Streamlet#configParameters. Those keys will then be made available through this configuration.
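For example, reading a declared config parameter at runtime; the parameter name is illustrative:

```scala
import cloudflow.streamlets.StringConfigParameter

// Declared on the streamlet and exposed via configParameters:
val NameFilter = StringConfigParameter("name-filter", "Only pass records whose name matches.")
// override def configParameters = Vector(NameFilter)

// Inside the logic, the instance-specific value is available here:
val nameFilter: String = streamletConfig.getString(NameFilter.key)
```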
- final def streamletRef: String
  The streamlet reference which identifies the streamlet in the blueprint. It is used in a Streamlet for logging and metrics, referring back to the streamlet instance using a name recognizable by the user.
- implicit final val system: ActorSystem
  The ActorSystem that will run the AkkaStreamlet.
Deprecated Value Members
- def getSinkWithOffsetContext[T](): Sink[Pair[T, CommittableOffset], NotUsed]
  Java API
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use getCommittableSink instead.
- def getSinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]
  Java API
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use getCommittableSink instead.
- def getSinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]
  Java API
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use getCommittableSink instead.
- def getSinkWithOffsetContext[T](outlet: CodecOutlet[T]): Sink[Pair[T, CommittableOffset], NotUsed]
  Java API
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use getCommittableSink instead.
- def getSourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithContext[T, CommittableOffset, _]
  Java API
  Annotations: @deprecated
  Deprecated: (Since version 1.3.4) Use getSourceWithCommittableContext
- def sinkWithOffsetContext[T]: Sink[(T, CommittableOffset), NotUsed]
  Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use committableSink instead.
- def sinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[(T, CommittableOffset), NotUsed]
  Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use committableSink instead.
- def sinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, CommittableOffset), NotUsed]
  Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.
  Annotations: @deprecated
  Deprecated: (Since version 1.3.1) Use committableSink instead.
- def sourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithOffsetContext[T]
  This source emits T records together with the offset position as context, thus making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired: each message will likely be delivered one time, but in failure cases messages can be duplicated. It is intended to be used with sinkWithOffsetContext(outlet: CodecOutlet[T]) or akka.kafka.scaladsl.Committer#sinkWithOffsetContext, which both commit the offset positions that accompany the records read from this source. sinkWithOffsetContext(outlet: CodecOutlet[T]) should be used if you want to commit the offset positions after records have been written to the specified outlet. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.
  Annotations: @deprecated
  Deprecated: (Since version 1.3.4) Use sourceWithCommittableContext