Interface | Description |
---|---|
AlgorithmSpecification | (experimental) Specify the training algorithm and algorithm-specific metadata. |
AthenaGetQueryExecutionProps | (experimental) Properties for getting a Query Execution. |
AthenaGetQueryResultsProps | (experimental) Properties for getting Query Results. |
AthenaStartQueryExecutionProps | (experimental) Properties for starting a Query Execution. |
AthenaStopQueryExecutionProps | (experimental) Properties for stopping a Query Execution. |
BatchContainerOverrides | (experimental) The overrides that should be sent to a container. |
BatchJobDependency | (experimental) An object representing an AWS Batch job dependency. |
BatchSubmitJobProps | (experimental) Properties for RunBatchJob. |
CallApiGatewayEndpointBaseProps | (experimental) Base CallApiGatewayEndpoint Task Props. |
CallApiGatewayHttpApiEndpointProps | (experimental) Properties for calling an HTTP API Endpoint. |
CallApiGatewayRestApiEndpointProps | (experimental) Properties for calling a REST API Endpoint. |
Channel | (experimental) Describes the training, validation, or test dataset and the Amazon S3 location where it is stored. |
CodeBuildStartBuildProps | (experimental) Properties for CodeBuildStartBuild. |
CommonEcsRunTaskProps | (experimental) Basic properties for ECS Tasks. |
ContainerDefinitionConfig | (experimental) Configuration options for the ContainerDefinition. |
ContainerDefinitionOptions | (experimental) Properties to define a ContainerDefinition. |
ContainerOverride | (experimental) A list of container overrides that specify the name of a container and the overrides it should receive. |
ContainerOverrides | (experimental) The overrides that should be sent to a container. |
DataSource | (experimental) Location of the channel data. |
DockerImageConfig | (experimental) Configuration for using a Docker image. |
DynamoDeleteItemProps | (experimental) Properties for DynamoDeleteItem Task. |
DynamoGetItemProps | (experimental) Properties for DynamoGetItem Task. |
DynamoPutItemProps | (experimental) Properties for DynamoPutItem Task. |
DynamoUpdateItemProps | (experimental) Properties for DynamoUpdateItem Task. |
EcsEc2LaunchTargetOptions | (experimental) Options to run an ECS task on EC2 in StepFunctions and ECS. |
EcsFargateLaunchTargetOptions | (experimental) Properties to define an ECS service. |
EcsLaunchTargetConfig | (experimental) Configuration options for the ECS launch type. |
EcsRunTaskProps | (experimental) Properties for ECS Tasks. |
EksCallProps | (experimental) Properties for calling an EKS endpoint with EksCall. |
EmrAddStepProps | (experimental) Properties for EmrAddStep. |
EmrCancelStepProps | (experimental) Properties for EmrCancelStep. |
EmrCreateCluster.ApplicationConfigProperty | (experimental) Properties for the EMR Cluster Applications. |
EmrCreateCluster.AutoScalingPolicyProperty | (experimental) An automatic scaling policy for a core instance group or task instance group in an Amazon EMR cluster. |
EmrCreateCluster.BootstrapActionConfigProperty | (experimental) Configuration of a bootstrap action. |
EmrCreateCluster.CloudWatchAlarmDefinitionProperty | (experimental) The definition of a CloudWatch metric alarm, which determines when an automatic scaling activity is triggered. |
EmrCreateCluster.ConfigurationProperty | (experimental) An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR. |
EmrCreateCluster.EbsBlockDeviceConfigProperty | (experimental) Configuration of requested EBS block device associated with the instance group with count of volumes that will be associated to every instance. |
EmrCreateCluster.EbsConfigurationProperty | (experimental) The Amazon EBS configuration of a cluster instance. |
EmrCreateCluster.InstanceFleetConfigProperty | (experimental) The configuration that defines an instance fleet. |
EmrCreateCluster.InstanceFleetProvisioningSpecificationsProperty | (experimental) The launch specification for Spot instances in the fleet, which determines the defined duration and provisioning timeout behavior. |
EmrCreateCluster.InstanceGroupConfigProperty | (experimental) Configuration defining a new instance group. |
EmrCreateCluster.InstancesConfigProperty | (experimental) A specification of the number and type of Amazon EC2 instances. |
EmrCreateCluster.InstanceTypeConfigProperty | (experimental) An instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities. |
EmrCreateCluster.KerberosAttributesProperty | (experimental) Attributes for Kerberos configuration when Kerberos authentication is enabled using a security configuration. |
EmrCreateCluster.MetricDimensionProperty | (experimental) A CloudWatch dimension, which is specified using a Key (known as a Name in CloudWatch), Value pair. |
EmrCreateCluster.PlacementTypeProperty | (experimental) The Amazon EC2 Availability Zone configuration of the cluster (job flow). |
EmrCreateCluster.ScalingActionProperty | (experimental) The type of adjustment the automatic scaling activity makes when triggered, and the periodicity of the adjustment. |
EmrCreateCluster.ScalingConstraintsProperty | (experimental) The upper and lower EC2 instance limits for an automatic scaling policy. |
EmrCreateCluster.ScalingRuleProperty | (experimental) A scale-in or scale-out rule that defines scaling activity, including the CloudWatch metric alarm that triggers activity, how EC2 instances are added or removed, and the periodicity of adjustments. |
EmrCreateCluster.ScalingTriggerProperty | (experimental) The conditions that trigger an automatic scaling activity and the definition of a CloudWatch metric alarm. |
EmrCreateCluster.ScriptBootstrapActionConfigProperty | (experimental) Configuration of the script to run during a bootstrap action. |
EmrCreateCluster.SimpleScalingPolicyConfigurationProperty | (experimental) An automatic scaling configuration, which describes how the policy adds or removes instances, the cooldown period, and the number of EC2 instances that will be added each time the CloudWatch metric alarm condition is satisfied. |
EmrCreateCluster.SpotProvisioningSpecificationProperty | (experimental) The launch specification for Spot instances in the instance fleet, which determines the defined duration and provisioning timeout behavior. |
EmrCreateCluster.VolumeSpecificationProperty | (experimental) EBS volume specifications such as volume type, IOPS, and size (GiB) that will be requested for the EBS volume attached to an EC2 instance in the cluster. |
EmrCreateClusterProps | (experimental) Properties for EmrCreateCluster. |
EmrModifyInstanceFleetByNameProps | (experimental) Properties for EmrModifyInstanceFleetByName. |
EmrModifyInstanceGroupByName.InstanceGroupModifyConfigProperty | (experimental) Modify the size or configurations of an instance group. |
EmrModifyInstanceGroupByName.InstanceResizePolicyProperty | (experimental) Custom policy for requesting termination protection or termination of specific instances when shrinking an instance group. |
EmrModifyInstanceGroupByName.ShrinkPolicyProperty | (experimental) Policy for customizing shrink operations. |
EmrModifyInstanceGroupByNameProps | (experimental) Properties for EmrModifyInstanceGroupByName. |
EmrSetClusterTerminationProtectionProps | (experimental) Properties for EmrSetClusterTerminationProtection. |
EmrTerminateClusterProps | (experimental) Properties for EmrTerminateCluster. |
EncryptionConfiguration | (experimental) Encryption Configuration of the S3 bucket. |
EvaluateExpressionProps | (experimental) Properties for EvaluateExpression. |
GlueDataBrewStartJobRunProps | (experimental) Properties for starting a job run with StartJobRun. |
GlueStartJobRunProps | (experimental) Properties for starting an AWS Glue job as a task. |
IContainerDefinition | (experimental) Configuration of the container used to host the model. |
IContainerDefinition.Jsii$Default | Internal default implementation for IContainerDefinition. |
IEcsLaunchTarget | (experimental) An Amazon ECS launch type determines the type of infrastructure on which your tasks and services are hosted. |
IEcsLaunchTarget.Jsii$Default | Internal default implementation for IEcsLaunchTarget. |
ISageMakerTask | (experimental) Task to train a machine learning model using Amazon SageMaker. |
ISageMakerTask.Jsii$Default | Internal default implementation for ISageMakerTask. |
JobDependency | (experimental) An object representing an AWS Batch job dependency. |
LambdaInvokeProps | (experimental) Properties for invoking a Lambda function with LambdaInvoke. |
LaunchTargetBindOptions | (experimental) Options for binding a launch target to an ECS run job task. |
MetricDefinition | (experimental) Specifies the metric name and regular expressions used to parse algorithm logs. |
ModelClientOptions | (experimental) Configures the timeout and maximum number of retries for processing a transform job invocation. |
OutputDataConfig | (experimental) Configures the S3 bucket where SageMaker will save the result of model training. |
ProductionVariant | (experimental) Identifies a model that you want to host and the resources to deploy for hosting it. |
QueryExecutionContext | (experimental) Database and data catalog context in which the query execution occurs. |
ResourceConfig | (experimental) Specifies the resources, ML compute instances, and ML storage volumes to deploy for model training. |
ResultConfiguration | (experimental) Location of query result along with S3 bucket configuration. |
S3DataSource | (experimental) S3 location of the channel data. |
S3LocationBindOptions | (experimental) Options for binding an S3 Location. |
S3LocationConfig | (experimental) Stores information about the location of an object in Amazon S3. |
SageMakerCreateEndpointConfigProps | (experimental) Properties for creating an Amazon SageMaker endpoint configuration. |
SageMakerCreateEndpointProps | (experimental) Properties for creating an Amazon SageMaker endpoint. |
SageMakerCreateModelProps | (experimental) Properties for creating an Amazon SageMaker model. |
SageMakerCreateTrainingJobProps | (experimental) Properties for creating an Amazon SageMaker training job. |
SageMakerCreateTransformJobProps | (experimental) Properties for creating an Amazon SageMaker transform job task. |
SageMakerUpdateEndpointProps | (experimental) Properties for updating an Amazon SageMaker endpoint. |
ShuffleConfig | (experimental) Configuration for a shuffle option for input data in a channel. |
SnsPublishProps | (experimental) Properties for publishing a message to an SNS topic. |
SqsSendMessageProps | (experimental) Properties for sending a message to an SQS queue. |
StepFunctionsInvokeActivityProps | (experimental) Properties for invoking an Activity worker. |
StepFunctionsStartExecutionProps | (experimental) Properties for StartExecution. |
StoppingCondition | (experimental) Specifies a limit to how long a model training job can run. |
TaskEnvironmentVariable | (experimental) An environment variable to be set in the container run as a task. |
TransformDataSource | (experimental) S3 location of the input data that the model can consume. |
TransformInput | (experimental) Dataset to be transformed and the Amazon S3 location where it is stored. |
TransformOutput | (experimental) S3 location where you want Amazon SageMaker to save the results from the transform job. |
TransformResources | (experimental) ML compute instances for the transform job. |
TransformS3DataSource | (experimental) Location of the channel data. |
VpcConfig | (experimental) Specifies the VPC that you want your Amazon SageMaker training job to connect to. |
Enum | Description |
---|---|
ActionOnFailure | (experimental) The action to take when the cluster step fails. |
AssembleWith | (experimental) How to assemble the results of the transform job as a single S3 object. |
AuthType | (experimental) The authentication method used to call the endpoint. |
BatchStrategy | (experimental) Specifies the number of records to include in a mini-batch for an HTTP inference request. |
CompressionType | (experimental) Compression type of the data. |
DynamoConsumedCapacity | (experimental) Determines the level of detail about provisioned throughput consumption that is returned. |
DynamoItemCollectionMetrics | (experimental) Determines whether item collection metrics are returned. |
DynamoReturnValues | (experimental) Use ReturnValues if you want to get the item attributes as they appear before or after they are changed. |
EmrCreateCluster.CloudWatchAlarmComparisonOperator | (experimental) CloudWatch Alarm Comparison Operators. |
EmrCreateCluster.CloudWatchAlarmStatistic | (experimental) CloudWatch Alarm Statistics. |
EmrCreateCluster.CloudWatchAlarmUnit | (experimental) CloudWatch Alarm Units. |
EmrCreateCluster.EbsBlockDeviceVolumeType | (experimental) EBS Volume Types. |
EmrCreateCluster.EmrClusterScaleDownBehavior | (experimental) The Cluster ScaleDownBehavior specifies the way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized. |
EmrCreateCluster.InstanceMarket | (experimental) EC2 Instance Market. |
EmrCreateCluster.InstanceRoleType | (experimental) Instance Role Types. |
EmrCreateCluster.ScalingAdjustmentType | (experimental) AutoScaling Adjustment Type. |
EmrCreateCluster.SpotTimeoutAction | (experimental) Spot Timeout Actions. |
EncryptionOption | (experimental) Encryption Options of the S3 bucket. |
HttpMethod | (experimental) Http Methods that API Gateway supports. |
HttpMethods | (experimental) Method type of an EKS call. |
InputMode | (experimental) Input mode that the algorithm supports. |
LambdaInvocationType | (experimental) Invocation type of a Lambda. |
Mode | (experimental) Specifies how many models the container hosts. |
RecordWrapperType | (experimental) Define the format of the input data. |
S3DataDistributionType | (experimental) S3 Data Distribution Type. |
S3DataType | (experimental) S3 Data Type. |
SplitType | (experimental) Method to use to split the transform job's data files into smaller batches. |
---
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
A Task state represents a single unit of work performed by a state machine. All work in your state machine is performed by tasks.
This module is part of the AWS Cloud Development Kit project.
A Task state represents a single unit of work performed by a state machine. In the CDK, the exact work to be done is determined by a class that implements `IStepFunctionsTask`.
AWS Step Functions integrates with some AWS services so that you can call API actions, and coordinate executions directly from the Amazon States Language in Step Functions. You can directly call and pass parameters to the APIs of those services.
In the Amazon States Language, a path is a string beginning with `$` that you can use to identify components within JSON text. Learn more about input and output processing in Step Functions here.
Both `InputPath` and `Parameters` fields provide a way to manipulate JSON as it moves through your workflow. AWS Step Functions applies the `InputPath` field first, and then the `Parameters` field. You can first filter your raw input to a selection you want using `InputPath`, and then apply `Parameters` to manipulate that input further, or add new values. If you don't specify an `InputPath`, a default value of `$` will be used.
The following example provides the field named `input` as the input to the Task state that runs a Lambda function.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler")
        .lambdaFunction(fn)
        .inputPath("$.input")
        .build();
```
Tasks also allow you to select a portion of the state output to pass to the next state. This enables you to filter out unwanted information, and pass only the portion of the JSON that you care about. If you don't specify an `OutputPath`, a default value of `$` will be used. This passes the entire JSON node to the next state.
The response from a Lambda function includes the response from the function as well as other metadata. The following example assigns the output from the Task to a field named `result`.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler")
        .lambdaFunction(fn)
        .outputPath("$.Payload.result")
        .build();
```
You can use `ResultSelector` to manipulate the raw result of a Task, Map or Parallel state before it is passed to `ResultPath`. For service integrations, the raw result contains metadata in addition to the response payload. You can use `ResultSelector` to construct a JSON payload that becomes the effective result, using static values or references to the raw result or context object.
The following example extracts the output payload of a Lambda function Task and combines it with some static values and the state name from the context object.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke Handler")
        .lambdaFunction(fn)
        .resultSelector(Map.of(
                "lambdaOutput", sfn.JsonPath.stringAt("$.Payload"),
                "invokeRequestId", sfn.JsonPath.stringAt("$.SdkResponseMetadata.RequestId"),
                "staticValue", Map.of(
                        "foo", "bar"),
                "stateName", sfn.JsonPath.stringAt("$.State.Name")))
        .build();
```
The output of a state can be a copy of its input, the result it produces (for example, output from a Task state's Lambda function), or a combination of its input and result. Use `ResultPath` to control which combination of these is passed to the state output. If you don't specify a `ResultPath`, a default value of `$` will be used.
The following example adds the item from calling DynamoDB's `getItem` API to the state input and passes it to the next state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoPutItem.Builder.create(this, "PutItem")
        .item(Map.of(
                "MessageId", tasks.DynamoAttributeValue.fromString("message-id")))
        .table(myTable)
        .resultPath("$.Item")
        .build();
```
⚠️ The `OutputPath` is computed after applying `ResultPath`. All service integrations return metadata as part of their response. When using `ResultPath`, it's not possible to merge a subset of the task output to the input.
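To illustrate the evaluation order, here is a minimal sketch (the construct ID and JSON paths are illustrative, not from this module's docs): `ResultPath` first stores the full result under a key, and `OutputPath` then narrows the combined state.

```java
// Sketch only: shows ResultPath being applied before OutputPath.
// The construct ID and JSON paths below are illustrative.
LambdaInvoke.Builder.create(this, "Invoke and keep payload only")
        .lambdaFunction(fn)
        .resultPath("$.taskResult") // full result (payload + metadata) lands here first
        .outputPath("$.taskResult.Payload") // then OutputPath selects from the combined state
        .build();
```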
Most tasks take parameters. Parameter values can either be static, supplied directly in the workflow definition (by specifying their values), or a value available at runtime in the state machine's execution (either as its input or an output of a prior state). Parameter values available at runtime can be specified via the `JsonPath` class, using methods such as `JsonPath.stringAt()`.
The following example provides the field named `input` as the input to the Lambda function and invokes it asynchronously.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler")
        .lambdaFunction(fn)
        .payload(sfn.TaskInput.fromDataAt("$.input"))
        .invocationType(tasks.LambdaInvocationType.getEVENT())
        .build();
```
Each service integration has its own set of parameters that can be supplied.
Use the `EvaluateExpression` task to perform simple operations referencing state paths. The `expression` referenced in the task will be evaluated in a Lambda function (`eval()`). This saves you from writing Lambda code for simple operations.

Example: convert a wait time from milliseconds to seconds, concatenate it into a message, and wait:
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object convertToSeconds = EvaluateExpression.Builder.create(this, "Convert to seconds")
        .expression("$.waitMilliseconds / 1000")
        .resultPath("$.waitSeconds")
        .build();

Object createMessage = EvaluateExpression.Builder.create(this, "Create message")
        // Note: this is a string inside a string.
        .expression("`Now waiting ${$.waitSeconds} seconds...`")
        .runtime(lambda.Runtime.getNODEJS_14_X())
        .resultPath("$.message")
        .build();

Object publishMessage = SnsPublish.Builder.create(this, "Publish message")
        .topic(new Topic(this, "cool-topic"))
        .message(sfn.TaskInput.fromDataAt("$.message"))
        .resultPath("$.sns")
        .build();

Object wait = Wait.Builder.create(this, "Wait")
        .time(sfn.WaitTime.secondsPath("$.waitSeconds"))
        .build();

StateMachine.Builder.create(this, "StateMachine")
        .definition(convertToSeconds
                .next(createMessage)
                .next(publishMessage)
                .next(wait))
        .build();
```
The `EvaluateExpression` task supports a `runtime` prop to specify the Lambda runtime used to evaluate the expression. Currently, only runtimes of the Node.js family are supported.
Step Functions supports API Gateway through the service integration pattern.
HTTP APIs are designed for low-latency, cost-effective integrations with AWS services, including AWS Lambda, and HTTP endpoints. HTTP APIs support OIDC and OAuth 2.0 authorization, and come with built-in support for CORS and automatic deployments. Previous-generation REST APIs currently offer more features. More details can be found here.
The `CallApiGatewayRestApiEndpoint` calls the REST API endpoint.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.apigateway.*;

Object restApi = new RestApi(stack, "MyRestApi");

Object invokeTask = CallApiGatewayRestApiEndpoint.Builder.create(stack, "Call REST API")
        .api(restApi)
        .stageName("prod")
        .method(HttpMethod.getGET())
        .build();
```
The `CallApiGatewayHttpApiEndpoint` calls the HTTP API endpoint.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.apigatewayv2.*;

Object httpApi = new HttpApi(stack, "MyHttpApi");

Object invokeTask = CallApiGatewayHttpApiEndpoint.Builder.create(stack, "Call HTTP API")
        .apiId(httpApi.getApiId())
        .apiStack(cdk.Stack.of(httpApi))
        .method(HttpMethod.getGET())
        .build();
```
Step Functions supports Athena through the service integration pattern.
The StartQueryExecution API runs the SQL query statement.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(this, "Start Athena Query")
        .queryString(sfn.JsonPath.stringAt("$.queryString"))
        .queryExecutionContext(Map.of(
                "databaseName", "mydatabase"))
        .resultConfiguration(Map.of(
                "encryptionConfiguration", Map.of(
                        "encryptionOption", tasks.EncryptionOption.getS3_MANAGED()),
                "outputLocation", Map.of(
                        "bucketName", "query-results-bucket",
                        "objectKey", "folder")))
        .build();
```
The GetQueryExecution API gets information about a single execution of a query.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object getQueryExecutionJob = AthenaGetQueryExecution.Builder.create(this, "Get Query Execution")
        .queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
        .build();
```
The GetQueryResults API streams the results of a single query execution specified by QueryExecutionId from S3.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object getQueryResultsJob = AthenaGetQueryResults.Builder.create(this, "Get Query Results")
        .queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
        .build();
```
The StopQueryExecution API stops a query execution.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object stopQueryExecutionJob = AthenaStopQueryExecution.Builder.create(this, "Stop Query Execution")
        .queryExecutionId(sfn.JsonPath.stringAt("$.QueryExecutionId"))
        .build();
```
Step Functions supports Batch through the service integration pattern.
The SubmitJob API submits an AWS Batch job from a job definition.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
BatchSubmitJob task = new BatchSubmitJob(this, "Submit Job", new BatchSubmitJobProps()
        .jobDefinitionArn(batchJobDefinitionArn)
        .jobName("MyJob")
        .jobQueueArn(batchQueueArn));
```
Step Functions supports CodeBuild through the service integration pattern.
StartBuild starts a CodeBuild Project by Project Name.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.codebuild.*;

Project codebuildProject = new Project(this, "Project", new ProjectProps()
        .projectName("MyTestProject")
        .buildSpec(codebuild.BuildSpec.fromObject(Map.of(
                "version", "0.2",
                "phases", Map.of(
                        "build", Map.of(
                                "commands", asList("echo \"Hello, CodeBuild!\"")))))));

Object task = CodeBuildStartBuild.Builder.create(this, "Task")
        .project(codebuildProject)
        .integrationPattern(sfn.IntegrationPattern.getRUN_JOB())
        .environmentVariablesOverride(Map.of(
                "ZONE", Map.of(
                        "type", codebuild.BuildEnvironmentVariableType.getPLAINTEXT(),
                        "value", sfn.JsonPath.stringAt("$.envVariables.zone"))))
        .build();
```
You can call DynamoDB APIs from a `Task` state. Read more about calling DynamoDB APIs here.
The GetItem operation returns a set of attributes for the item with the given primary key.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoGetItem.Builder.create(this, "Get Item")
        .key(Map.of("messageId", tasks.DynamoAttributeValue.fromString("message-007")))
        .table(myTable)
        .build();
```
The PutItem operation creates a new item, or replaces an old item with a new item.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoPutItem.Builder.create(this, "PutItem")
        .item(Map.of(
                "MessageId", tasks.DynamoAttributeValue.fromString("message-007"),
                "Text", tasks.DynamoAttributeValue.fromString(sfn.JsonPath.stringAt("$.bar")),
                "TotalCount", tasks.DynamoAttributeValue.fromNumber(10)))
        .table(myTable)
        .build();
```
The DeleteItem operation deletes a single item in a table by primary key.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoDeleteItem.Builder.create(this, "DeleteItem")
        .key(Map.of("MessageId", tasks.DynamoAttributeValue.fromString("message-007")))
        .table(myTable)
        .resultPath(sfn.JsonPath.getDISCARD())
        .build();
```
The UpdateItem operation edits an existing item's attributes, or adds a new item to the table if it does not already exist.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
DynamoUpdateItem.Builder.create(this, "UpdateItem")
        .key(Map.of(
                "MessageId", tasks.DynamoAttributeValue.fromString("message-007")))
        .table(myTable)
        .expressionAttributeValues(Map.of(
                ":val", tasks.DynamoAttributeValue.numberFromString(sfn.JsonPath.stringAt("$.Item.TotalCount.N")),
                ":rand", tasks.DynamoAttributeValue.fromNumber(20)))
        .updateExpression("SET TotalCount = :val + :rand")
        .build();
```
Step Functions supports ECS/Fargate through the service integration pattern.
RunTask starts a new task using the specified task definition.
The EC2 launch type allows you to run your containerized applications on a cluster of Amazon EC2 instances that you manage.
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can apply task placement strategies and constraints to customize how Amazon ECS places and terminates tasks. Learn more about task placement
The latest ACTIVE revision of the passed task definition is used for running the task.
The following example runs a job from a task definition on EC2
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.ecs.*;

Object vpc = ec2.Vpc.fromLookup(this, "Vpc", Map.of(
        "isDefault", true));

Cluster cluster = new Cluster(this, "Ec2Cluster", new ClusterProps().vpc(vpc));
cluster.addCapacity("DefaultAutoScalingGroup", new AddCapacityOptions()
        .instanceType(new InstanceType("t2.micro"))
        .vpcSubnets(new SubnetSelection().subnetType(ec2.SubnetType.getPUBLIC())));

TaskDefinition taskDefinition = new TaskDefinition(this, "TD", new TaskDefinitionProps()
        .compatibility(ecs.Compatibility.getEC2()));

taskDefinition.addContainer("TheContainer", new ContainerDefinitionOptions()
        .image(ecs.ContainerImage.fromRegistry("foo/bar"))
        .memoryLimitMiB(256));

Object runTask = EcsRunTask.Builder.create(this, "Run")
        .integrationPattern(sfn.IntegrationPattern.getRUN_JOB())
        .cluster(cluster)
        .taskDefinition(taskDefinition)
        .launchTarget(EcsEc2LaunchTarget.Builder.create()
                .placementStrategies(asList(
                        ecs.PlacementStrategy.spreadAcrossInstances(),
                        ecs.PlacementStrategy.packedByCpu(),
                        ecs.PlacementStrategy.randomly()))
                .placementConstraints(asList(ecs.PlacementConstraint.memberOf("blieptuut")))
                .build())
        .build();
```
AWS Fargate is a serverless compute engine for containers that works with Amazon Elastic Container Service (ECS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Learn more about Fargate
The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you. The latest ACTIVE revision of the passed task definition is used for running the task. Learn more about Fargate Versioning
The following example runs a job from a task definition on Fargate
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.ecs.*;

Object vpc = ec2.Vpc.fromLookup(this, "Vpc", Map.of(
        "isDefault", true));

Cluster cluster = new Cluster(this, "FargateCluster", new ClusterProps().vpc(vpc));

TaskDefinition taskDefinition = new TaskDefinition(this, "TD", new TaskDefinitionProps()
        .memoryMiB("512")
        .cpu("256")
        .compatibility(ecs.Compatibility.getFARGATE()));

ContainerDefinition containerDefinition = taskDefinition.addContainer("TheContainer", new ContainerDefinitionOptions()
        .image(ecs.ContainerImage.fromRegistry("foo/bar"))
        .memoryLimitMiB(256));

Object runTask = EcsRunTask.Builder.create(this, "RunFargate")
        .integrationPattern(sfn.IntegrationPattern.getRUN_JOB())
        .cluster(cluster)
        .taskDefinition(taskDefinition)
        .assignPublicIp(true)
        .containerOverrides(asList(Map.of(
                "containerDefinition", containerDefinition,
                "environment", asList(Map.of("name", "SOME_KEY", "value", sfn.JsonPath.stringAt("$.SomeKey"))))))
        .launchTarget(new EcsFargateLaunchTarget())
        .build();
```
Step Functions supports Amazon EMR through the service integration pattern. The service integration APIs correspond to Amazon EMR APIs but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Creates and starts running a cluster (job flow). Corresponds to the `runJobFlow` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Role clusterRole = new Role(this, "ClusterRole", new RoleProps()
        .assumedBy(new ServicePrincipal("ec2.amazonaws.com")));

Role serviceRole = new Role(this, "ServiceRole", new RoleProps()
        .assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com")));

Role autoScalingRole = new Role(this, "AutoScalingRole", new RoleProps()
        .assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com")));

autoScalingRole.getAssumeRolePolicy().addStatements(
        new PolicyStatement(new PolicyStatementProps()
                .effect(iam.Effect.getALLOW())
                .principals(asList(
                        new ServicePrincipal("application-autoscaling.amazonaws.com")))
                .actions(asList("sts:AssumeRole"))));

EmrCreateCluster.Builder.create(this, "Create Cluster")
        .instances(Map.of())
        .clusterRole(clusterRole)
        .name(sfn.TaskInput.fromDataAt("$.ClusterName").getValue())
        .serviceRole(serviceRole)
        .autoScalingRole(autoScalingRole)
        .build();
```
Locks a cluster (job flow) so the EC2 instances in the cluster cannot be terminated by user intervention, an API call, or a job-flow error. Corresponds to the `setTerminationProtection` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrSetClusterTerminationProtection.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .terminationProtected(false)
        .build();
```
Shuts down a cluster (job flow). Corresponds to the `terminateJobFlows` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrTerminateCluster.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .build();
```
Adds a new step to a running cluster. Corresponds to the `addJobFlowSteps` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrAddStep.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .name("StepName")
        .jar("Jar")
        .actionOnFailure(tasks.ActionOnFailure.getCONTINUE())
        .build();
```
Cancels a pending step in a running cluster. Corresponds to the `cancelSteps` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrCancelStep.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .stepId("StepId")
        .build();
```
Modifies the target On-Demand and target Spot capacities for the instance fleet with the specified InstanceFleetName. Corresponds to the `modifyInstanceFleet` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrModifyInstanceFleetByName.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .instanceFleetName("InstanceFleetName")
        .targetOnDemandCapacity(2)
        .targetSpotCapacity(0)
        .build();
```
Modifies the number of nodes and configuration settings of an instance group. Corresponds to the `modifyInstanceGroups` API in EMR.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
EmrModifyInstanceGroupByName.Builder.create(this, "Task")
        .clusterId("ClusterId")
        .instanceGroupName(sfn.JsonPath.stringAt("$.InstanceGroupName"))
        .instanceGroup(Map.of(
                "instanceCount", 1))
        .build();
```
Step Functions supports Amazon EKS through the service integration pattern. The service integration APIs correspond to Amazon EKS APIs.
Read more about the differences when using these service integrations.
Read and write Kubernetes resource objects via a Kubernetes API endpoint.
Corresponds to the `call` API in Step Functions Connector.
The following code snippet includes a Task state that uses eks:call to list the pods.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import software.amazon.awscdk.services.eks.*;
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;

Cluster myEksCluster = new Cluster(this, "my sample cluster", new ClusterProps()
        .version(eks.KubernetesVersion.getV1_18())
        .clusterName("myEksCluster"));

new EksCall(stack, "Call a EKS Endpoint", new EksCallProps()
        .cluster(myEksCluster)
        .httpMethod(HttpMethods.getGET())
        .httpPath("/api/v1/namespaces/default/pods"));
```
Step Functions supports AWS Glue through the service integration pattern.
You can call the `StartJobRun` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
GlueStartJobRun.Builder.create(this, "Task")
        .glueJobName("my-glue-job")
        .arguments(sfn.TaskInput.fromObject(Map.of(
                "key", "value")))
        .timeout(cdk.Duration.minutes(30))
        .notifyDelayAfter(cdk.Duration.minutes(5))
        .build();
```
Step Functions supports AWS Glue DataBrew through the service integration pattern.
You can call the `StartJobRun` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
GlueDataBrewStartJobRun.Builder.create(this, "Task")
        .name("databrew-job")
        .build();
```
Invoke a Lambda function.
You can specify the input to your Lambda function through the `payload` attribute. By default, Step Functions invokes the Lambda function with the state input (JSON path `$`) as the input.
The following snippet invokes a Lambda Function with the state input as the payload by referencing the `$` path.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke with state input")
        .lambdaFunction(fn)
        .build();
```
When a function is invoked, the Lambda service sends these response elements back.

⚠️ The response from the Lambda function is in an attribute called `Payload`.
The following snippet invokes a Lambda Function by referencing the `$.Payload` path, to use the output of a Lambda executed before it.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke with empty object as payload")
        .lambdaFunction(fn)
        .payload(sfn.TaskInput.fromObject(Map.of()))
        .build();

// use the output of fn as input
LambdaInvoke.Builder.create(this, "Invoke with payload field in the state input")
        .lambdaFunction(fn)
        .payload(sfn.TaskInput.fromDataAt("$.Payload"))
        .build();
```
The following snippet invokes a Lambda and sets the task output to only include the Lambda function response.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke and set function response as task output")
        .lambdaFunction(fn)
        .outputPath("$.Payload")
        .build();
```
If you want to combine the input and the Lambda function response, you can use the `payloadResponseOnly` property and specify the `resultPath`. This will put the Lambda function ARN directly in the "Resource" string, but it conflicts with the `integrationPattern`, `invocationType`, `clientContext`, and `qualifier` properties.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke and combine function response with task input")
        .lambdaFunction(fn)
        .payloadResponseOnly(true)
        .resultPath("$.fn")
        .build();
```
You can have Step Functions pause a task, and wait for an external process to return a task token. Read more about the callback pattern.
To use the callback pattern, set the `token` property on the task. Call the Step Functions `SendTaskSuccess` or `SendTaskFailure` APIs with the token to indicate that the task has completed and the state machine should resume execution.
The following snippet invokes a Lambda with the task token as part of the input to the Lambda.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
LambdaInvoke.Builder.create(this, "Invoke with callback")
        .lambdaFunction(fn)
        .integrationPattern(sfn.IntegrationPattern.getWAIT_FOR_TASK_TOKEN())
        .payload(sfn.TaskInput.fromObject(Map.of(
                "token", sfn.JsonPath.getTaskToken(),
                "input", sfn.JsonPath.stringAt("$.someField"))))
        .build();
```
⚠️ The task will pause until it receives that task token back with a `SendTaskSuccess` or `SendTaskFailure` call. Learn more about Callback with the Task Token.
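The worker side of this handshake is outside the scope of this module. As a minimal sketch, assuming the AWS SDK for Java v2 (and a hypothetical worker that has already received the token), completing the task could look like this:

```java
// Minimal sketch, assuming the AWS SDK for Java v2 (software.amazon.awssdk:sfn).
// How the task token reaches the worker, and the output JSON, are hypothetical.
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.SendTaskSuccessRequest;

public class CallbackWorker {
    public static void complete(String taskToken) {
        try (SfnClient sfn = SfnClient.create()) {
            // Returning the token resumes the paused Task state.
            sfn.sendTaskSuccess(SendTaskSuccessRequest.builder()
                    .taskToken(taskToken)
                    .output("{\"status\":\"done\"}")
                    .build());
        }
    }
}
```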
AWS Lambda can occasionally experience transient service errors. In this case, invoking Lambda results in a 500 error, such as `ServiceException`, `AWSLambdaException`, or `SdkClientException`.
As a best practice, the `LambdaInvoke` task will retry on those errors with an interval of 2 seconds, a back-off rate of 2, and 6 maximum attempts. Set the `retryOnServiceExceptions` prop to `false` to disable this behavior.
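If you prefer to manage retries yourself, here is a minimal sketch of opting out (the construct ID is illustrative):

```java
// Sketch: disable the built-in retry on transient Lambda service errors.
LambdaInvoke.Builder.create(this, "Invoke without retries")
        .lambdaFunction(fn)
        .retryOnServiceExceptions(false)
        .build();
```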
Step Functions supports AWS SageMaker through the service integration pattern.
You can call the `CreateTrainingJob` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateTrainingJob.Builder.create(this, "TrainSagemaker")
        .trainingJobName(sfn.JsonPath.stringAt("$.JobName"))
        .algorithmSpecification(Map.of(
                "algorithmName", "BlazingText",
                "trainingInputMode", tasks.InputMode.getFILE()))
        .inputDataConfig(asList(Map.of(
                "channelName", "train",
                "dataSource", Map.of(
                        "s3DataSource", Map.of(
                                "s3DataType", tasks.S3DataType.getS3_PREFIX(),
                                "s3Location", tasks.S3Location.fromJsonExpression("$.S3Bucket"))))))
        .outputDataConfig(Map.of(
                "s3OutputLocation", tasks.S3Location.fromBucket(s3.Bucket.fromBucketName(this, "Bucket", "mybucket"), "myoutputpath")))
        .resourceConfig(Map.of( // optional: default is 1 instance of EC2 `M4.XLarge` with `10GB` volume
                "instanceCount", 1,
                "instanceType", ec2.InstanceType.of(ec2.InstanceClass.getP3(), ec2.InstanceSize.getXLARGE2()),
                "volumeSize", cdk.Size.gibibytes(50)))
        .stoppingCondition(Map.of(
                "maxRuntime", cdk.Duration.hours(2)))
        .build();
```
You can call the `CreateTransformJob` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateTransformJob.Builder.create(this, "Batch Inference")
        .transformJobName("MyTransformJob")
        .modelName("MyModelName")
        .modelClientOptions(Map.of(
                "invocationsMaxRetries", 3, // default is 0
                "invocationsTimeout", cdk.Duration.minutes(5)))
        .transformInput(Map.of(
                "transformDataSource", Map.of(
                        "s3DataSource", Map.of(
                                "s3Uri", "s3://inputbucket/train",
                                "s3DataType", tasks.S3DataType.getS3_PREFIX()))))
        .transformOutput(Map.of(
                "s3OutputPath", "s3://outputbucket/TransformJobOutputPath"))
        .transformResources(Map.of(
                "instanceCount", 1,
                "instanceType", ec2.InstanceType.of(ec2.InstanceClass.getM4(), ec2.InstanceSize.getXLARGE())))
        .build();
```
You can call the `CreateEndpoint` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateEndpoint.Builder.create(this, "SagemakerEndpoint")
        .endpointName(sfn.JsonPath.stringAt("$.EndpointName"))
        .endpointConfigName(sfn.JsonPath.stringAt("$.EndpointConfigName"))
        .build();
```
You can call the `CreateEndpointConfig` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateEndpointConfig.Builder.create(this, "SagemakerEndpointConfig")
        .endpointConfigName("MyEndpointConfig")
        .productionVariants(asList(Map.of(
                "initialInstanceCount", 2,
                "instanceType", ec2.InstanceType.of(ec2.InstanceClass.getM5(), ec2.InstanceSize.getXLARGE()),
                "modelName", "MyModel",
                "variantName", "awesome-variant")))
        .build();
```
You can call the `CreateModel` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerCreateModel.Builder.create(this, "Sagemaker")
        .modelName("MyModel")
        .primaryContainer(ContainerDefinition.Builder.create()
                .image(tasks.DockerImage.fromJsonExpression(sfn.JsonPath.stringAt("$.Model.imageName")))
                .mode(tasks.Mode.getSINGLE_MODEL())
                .modelS3Location(tasks.S3Location.fromJsonExpression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts"))
                .build())
        .build();
```
You can call the `UpdateEndpoint` API from a `Task` state.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
SageMakerUpdateEndpoint.Builder.create(this, "SagemakerEndpoint")
        .endpointName(sfn.JsonPath.stringAt("$.Endpoint.Name"))
        .endpointConfigName(sfn.JsonPath.stringAt("$.Endpoint.EndpointConfig"))
        .build();
```
Step Functions supports Amazon SNS through the service integration pattern.
You can call the `Publish` API from a `Task` state to publish to an SNS topic.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Topic topic = new Topic(this, "Topic");

// Use a field from the execution data as message.
Object task1 = SnsPublish.Builder.create(this, "Publish1")
        .topic(topic)
        .integrationPattern(sfn.IntegrationPattern.getREQUEST_RESPONSE())
        .message(sfn.TaskInput.fromDataAt("$.state.message"))
        .build();

// Combine a field from the execution data with
// a literal object.
Object task2 = SnsPublish.Builder.create(this, "Publish2")
        .topic(topic)
        .message(sfn.TaskInput.fromObject(Map.of(
                "field1", "somedata",
                "field2", sfn.JsonPath.stringAt("$.field2"))))
        .build();
```
You can manage AWS Step Functions executions.
AWS Step Functions supports its own `StartExecution` API as a service integration.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
// Define a state machine with one Pass state
Object child = StateMachine.Builder.create(this, "ChildStateMachine")
        .definition(sfn.Chain.start(new Pass(this, "PassState")))
        .build();

// Include the state machine in a Task state with callback pattern
Object task = StepFunctionsStartExecution.Builder.create(this, "ChildTask")
        .stateMachine(child)
        .integrationPattern(sfn.IntegrationPattern.getWAIT_FOR_TASK_TOKEN())
        .input(sfn.TaskInput.fromObject(Map.of(
                "token", sfn.JsonPath.getTaskToken(),
                "foo", "bar")))
        .name("MyExecutionName")
        .build();

// Define a second state machine with the Task state above
StateMachine.Builder.create(this, "ParentStateMachine")
        .definition(task)
        .build();
```
You can invoke a Step Functions Activity which enables you to have a task in your state machine where the work is performed by a worker that can be hosted on Amazon EC2, Amazon ECS, AWS Lambda, basically anywhere. Activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine.
When Step Functions reaches an activity task state, the workflow waits for an activity worker to poll for a task. An activity worker polls Step Functions by using GetActivityTask, and sending the ARN for the related activity.
After the activity worker completes its work, it can provide a report of its success or failure by using `SendTaskSuccess` or `SendTaskFailure`. These two calls use the taskToken provided by GetActivityTask to associate the result with that task.
The following example creates an activity and creates a task that invokes the activity.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Object submitJobActivity = new Activity(this, "SubmitJob");

StepFunctionsInvokeActivity.Builder.create(this, "Submit Job")
        .activity(submitJobActivity)
        .build();
```
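The activity worker itself is not modelled by this module. As a rough sketch, assuming the AWS SDK for Java v2, a worker could poll for a task and report its result like this (the `doWork` helper is hypothetical):

```java
// Sketch only: an activity worker polling loop body, using the AWS SDK for Java v2.
import software.amazon.awssdk.services.sfn.SfnClient;
import software.amazon.awssdk.services.sfn.model.GetActivityTaskRequest;
import software.amazon.awssdk.services.sfn.model.GetActivityTaskResponse;
import software.amazon.awssdk.services.sfn.model.SendTaskSuccessRequest;

public class ActivityWorker {
    public static void pollOnce(String activityArn) {
        try (SfnClient sfn = SfnClient.create()) {
            // Long-polls for a scheduled activity task.
            GetActivityTaskResponse task = sfn.getActivityTask(
                    GetActivityTaskRequest.builder().activityArn(activityArn).build());
            // An empty token means no task was scheduled before the poll timed out.
            if (task.taskToken() != null && !task.taskToken().isEmpty()) {
                String resultJson = doWork(task.input()); // hypothetical helper
                // Associate the result with the task via its token.
                sfn.sendTaskSuccess(SendTaskSuccessRequest.builder()
                        .taskToken(task.taskToken())
                        .output(resultJson)
                        .build());
            }
        }
    }

    private static String doWork(String inputJson) {
        return "{\"ok\":true}"; // placeholder for real work
    }
}
```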
Step Functions supports Amazon SQS. You can call the `SendMessage` API from a `Task` state to send a message to an SQS queue.
```java
// Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Queue queue = new Queue(this, "Queue");

// Use a field from the execution data as message.
Object task1 = SqsSendMessage.Builder.create(this, "Send1")
        .queue(queue)
        .messageBody(sfn.TaskInput.fromDataAt("$.message"))
        .build();

// Combine a field from the execution data with
// a literal object.
Object task2 = SqsSendMessage.Builder.create(this, "Send2")
        .queue(queue)
        .messageBody(sfn.TaskInput.fromObject(Map.of(
                "field1", "somedata",
                "field2", sfn.JsonPath.stringAt("$.field2"))))
        .build();
```