public interface AmazonECSAsync extends AmazonECS
Overloads that accept an AsyncHandler can be used to receive notification when an asynchronous operation completes.
Note: Do not implement this interface directly; new methods are added to it regularly. Extend from AbstractAmazonECSAsync instead.
Amazon EC2 Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster of EC2 instances. Amazon ECS lets you launch and stop container-enabled applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features like security groups, Amazon EBS volumes, and IAM roles.
You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs, isolation policies, and availability requirements. Amazon EC2 Container Service eliminates the need for you to operate your own cluster management and configuration management systems or worry about scaling your management infrastructure.
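As an illustration only, the following sketch shows both calling styles offered by this interface: blocking on the returned Future, and supplying an AsyncHandler callback. It assumes the AWS SDK for Java 1.11.x client builder and the default credential and region configuration; the operation used (ListClusters) is arbitrary.
import com.amazonaws.handlers.AsyncHandler;
import com.amazonaws.services.ecs.AmazonECSAsync;
import com.amazonaws.services.ecs.AmazonECSAsyncClientBuilder;
import com.amazonaws.services.ecs.model.ListClustersRequest;
import com.amazonaws.services.ecs.model.ListClustersResult;
import java.util.concurrent.Future;

public class EcsAsyncExample {
    public static void main(String[] args) throws Exception {
        // Build the async client from the default credential and region provider chains.
        AmazonECSAsync ecs = AmazonECSAsyncClientBuilder.defaultClient();

        // Style 1: block on the returned Future.
        Future<ListClustersResult> future = ecs.listClustersAsync(new ListClustersRequest());
        System.out.println("Cluster ARNs: " + future.get().getClusterArns());

        // Style 2: receive a callback when the operation completes.
        ecs.listClustersAsync(new ListClustersRequest(),
                new AsyncHandler<ListClustersRequest, ListClustersResult>() {
                    public void onSuccess(ListClustersRequest request, ListClustersResult result) {
                        System.out.println("Found " + result.getClusterArns().size() + " clusters");
                    }
                    public void onError(Exception e) {
                        e.printStackTrace();
                    }
                });

        ecs.shutdown();
    }
}
The per-operation sketches later on this page assume an AmazonECSAsync client named ecs built as above, and use placeholder cluster, service, and task definition names.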
ENDPOINT_PREFIX
createCluster, createCluster, createService, deleteCluster, deleteService, deregisterContainerInstance, deregisterTaskDefinition, describeClusters, describeClusters, describeContainerInstances, describeServices, describeTaskDefinition, describeTasks, discoverPollEndpoint, discoverPollEndpoint, getCachedResponseMetadata, listClusters, listClusters, listContainerInstances, listContainerInstances, listServices, listServices, listTaskDefinitionFamilies, listTaskDefinitionFamilies, listTaskDefinitions, listTaskDefinitions, listTasks, listTasks, registerContainerInstance, registerTaskDefinition, runTask, setEndpoint, setRegion, shutdown, startTask, stopTask, submitContainerStateChange, submitContainerStateChange, submitTaskStateChange, updateContainerAgent, updateService, waiters
Future<CreateClusterResult> createClusterAsync(CreateClusterRequest createClusterRequest)
Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action.
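For illustration, a minimal sketch of calling this operation asynchronously (assumes the ecs client from the introductory example; the cluster name is a placeholder):
// Assumes 'ecs' is an existing AmazonECSAsync client.
Future<CreateClusterResult> pending =
        ecs.createClusterAsync(new CreateClusterRequest().withClusterName("my-cluster"));
// Blocks until the call completes and prints the new cluster's ARN.
System.out.println(pending.get().getCluster().getClusterArn());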
createClusterRequest -
Future<CreateClusterResult> createClusterAsync(CreateClusterRequest createClusterRequest, AsyncHandler<CreateClusterRequest,CreateClusterResult> asyncHandler)
Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action.
createClusterRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<CreateClusterResult> createClusterAsync()
createClusterAsync(CreateClusterRequest)
Future<CreateClusterResult> createClusterAsync(AsyncHandler<CreateClusterRequest,CreateClusterResult> asyncHandler)
Future<CreateServiceResult> createServiceAsync(CreateServiceRequest createServiceRequest)
Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below desiredCount, Amazon ECS spawns another copy of the task in the specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon EC2 Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. During a deployment (which is triggered by changing the task definition or the desired count of a service with an UpdateService operation), the service scheduler uses the minimumHealthyPercent and maximumPercent parameters to determine the deployment strategy.
The minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they are in the RUNNING state and the container instance they are hosted on is reported as healthy by the load balancer. The default value for minimumHealthyPercent is 50% in the console and 100% for the AWS CLI, the AWS SDKs, and the APIs.
The maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximumPercent is 200%.
When the service scheduler launches new tasks, it attempts to balance them across the Availability Zones in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
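Putting the parameters above together, a hedged sketch of a CreateService call with an explicit deployment configuration (assumes the ecs client from the introductory example; all names and counts are placeholders):
Future<CreateServiceResult> pending = ecs.createServiceAsync(new CreateServiceRequest()
        .withCluster("my-cluster")
        .withServiceName("web")
        .withTaskDefinition("web-app:3")          // family:revision
        .withDesiredCount(4)
        .withDeploymentConfiguration(new DeploymentConfiguration()
                .withMinimumHealthyPercent(50)    // may stop 2 of 4 tasks before starting new ones
                .withMaximumPercent(200)));       // allows up to 8 RUNNING/PENDING tasks during a deployment
System.out.println(pending.get().getService().getServiceArn());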
createServiceRequest -
Future<CreateServiceResult> createServiceAsync(CreateServiceRequest createServiceRequest, AsyncHandler<CreateServiceRequest,CreateServiceResult> asyncHandler)
Runs and maintains a desired number of tasks from a specified task definition. If the number of tasks running in a service drops below desiredCount, Amazon ECS spawns another copy of the task in the specified cluster. To update an existing service, see UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service. For more information, see Service Load Balancing in the Amazon EC2 Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. During a deployment (which is triggered by changing the task definition or the desired count of a service with an UpdateService operation), the service scheduler uses the minimumHealthyPercent and maximumPercent parameters to determine the deployment strategy.
The minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they are in the RUNNING state and the container instance they are hosted on is reported as healthy by the load balancer. The default value for minimumHealthyPercent is 50% in the console and 100% for the AWS CLI, the AWS SDKs, and the APIs.
The maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximumPercent is 200%.
When the service scheduler launches new tasks, it attempts to balance them across the Availability Zones in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
createServiceRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DeleteClusterResult> deleteClusterAsync(DeleteClusterRequest deleteClusterRequest)
Deletes the specified cluster. You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance.
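A brief sketch, assuming the ecs client from the introductory example and that every container instance has already been deregistered from the cluster (the cluster name is a placeholder):
Future<DeleteClusterResult> pending =
        ecs.deleteClusterAsync(new DeleteClusterRequest().withCluster("my-cluster"));
// get() throws if container instances are still registered to the cluster.
pending.get();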
deleteClusterRequest -
Future<DeleteClusterResult> deleteClusterAsync(DeleteClusterRequest deleteClusterRequest, AsyncHandler<DeleteClusterRequest,DeleteClusterResult> asyncHandler)
Deletes the specified cluster. You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance.
deleteClusterRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DeleteServiceResult> deleteServiceAsync(DeleteServiceRequest deleteServiceRequest)
Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you cannot delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE to DRAINING, and the service is no longer visible in the console or in ListServices API operations. After the tasks have stopped, the service status moves from DRAINING to INACTIVE. Services in the DRAINING or INACTIVE status can still be viewed with DescribeServices API operations; however, in the future, INACTIVE services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices API operations on those services will return a ServiceNotFoundException error.
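For illustration, a sketch of the scale-to-zero-then-delete sequence described above (assumes the ecs client from the introductory example; names are placeholders):
// A service must have a desired count of zero before it can be deleted.
ecs.updateServiceAsync(new UpdateServiceRequest()
        .withCluster("my-cluster")
        .withService("web")
        .withDesiredCount(0)).get();
Future<DeleteServiceResult> pending = ecs.deleteServiceAsync(new DeleteServiceRequest()
        .withCluster("my-cluster")
        .withService("web"));
System.out.println(pending.get().getService().getStatus());   // typically DRAINING while tasks are cleaned up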
deleteServiceRequest -
Future<DeleteServiceResult> deleteServiceAsync(DeleteServiceRequest deleteServiceRequest, AsyncHandler<DeleteServiceRequest,DeleteServiceResult> asyncHandler)
Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you cannot delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE to DRAINING, and the service is no longer visible in the console or in ListServices API operations. After the tasks have stopped, the service status moves from DRAINING to INACTIVE. Services in the DRAINING or INACTIVE status can still be viewed with DescribeServices API operations; however, in the future, INACTIVE services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices API operations on those services will return a ServiceNotFoundException error.
deleteServiceRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DeregisterContainerInstanceResult> deregisterContainerInstanceAsync(DeregisterContainerInstanceRequest deregisterContainerInstanceRequest)
Deregisters an Amazon ECS container instance from the specified cluster. This instance is no longer available to run tasks.
If you intend to use the container instance for some other purpose after deregistration, you should stop all of the tasks running on the container instance before deregistration to avoid any orphaned tasks from consuming resources.
Deregistering a container instance removes the instance from a cluster, but it does not terminate the EC2 instance; if you are finished using the instance, be sure to terminate it in the Amazon EC2 console to stop billing.
If you terminate a running container instance, Amazon ECS automatically deregisters the instance from your cluster (stopped container instances or instances with disconnected agents are not automatically deregistered when terminated).
deregisterContainerInstanceRequest -
Future<DeregisterContainerInstanceResult> deregisterContainerInstanceAsync(DeregisterContainerInstanceRequest deregisterContainerInstanceRequest, AsyncHandler<DeregisterContainerInstanceRequest,DeregisterContainerInstanceResult> asyncHandler)
Deregisters an Amazon ECS container instance from the specified cluster. This instance is no longer available to run tasks.
If you intend to use the container instance for some other purpose after deregistration, you should stop all of the tasks running on the container instance before deregistration to avoid any orphaned tasks from consuming resources.
Deregistering a container instance removes the instance from a cluster, but it does not terminate the EC2 instance; if you are finished using the instance, be sure to terminate it in the Amazon EC2 console to stop billing.
If you terminate a running container instance, Amazon ECS automatically deregisters the instance from your cluster (stopped container instances or instances with disconnected agents are not automatically deregistered when terminated).
deregisterContainerInstanceRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DeregisterTaskDefinitionResult> deregisterTaskDefinitionAsync(DeregisterTaskDefinitionRequest deregisterTaskDefinitionRequest)
Deregisters the specified task definition by family and revision. Upon deregistration, the task definition is marked as INACTIVE. Existing tasks and services that reference an INACTIVE task definition continue to run without disruption. Existing services that reference an INACTIVE task definition can still scale up or down by modifying the service's desired count.
You cannot use an INACTIVE task definition to run new tasks or create new services, and you cannot update an existing service to reference an INACTIVE task definition (although there may be up to a 10 minute window following deregistration where these restrictions have not yet taken effect).
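A minimal sketch (assumes the ecs client from the introductory example; the family:revision value is a placeholder):
Future<DeregisterTaskDefinitionResult> pending = ecs.deregisterTaskDefinitionAsync(
        new DeregisterTaskDefinitionRequest().withTaskDefinition("web-app:3"));
System.out.println(pending.get().getTaskDefinition().getStatus());   // INACTIVE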
deregisterTaskDefinitionRequest -
Future<DeregisterTaskDefinitionResult> deregisterTaskDefinitionAsync(DeregisterTaskDefinitionRequest deregisterTaskDefinitionRequest, AsyncHandler<DeregisterTaskDefinitionRequest,DeregisterTaskDefinitionResult> asyncHandler)
Deregisters the specified task definition by family and revision. Upon deregistration, the task definition is marked as INACTIVE. Existing tasks and services that reference an INACTIVE task definition continue to run without disruption. Existing services that reference an INACTIVE task definition can still scale up or down by modifying the service's desired count.
You cannot use an INACTIVE task definition to run new tasks or create new services, and you cannot update an existing service to reference an INACTIVE task definition (although there may be up to a 10 minute window following deregistration where these restrictions have not yet taken effect).
deregisterTaskDefinitionRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DescribeClustersResult> describeClustersAsync(DescribeClustersRequest describeClustersRequest)
Describes one or more of your clusters.
describeClustersRequest -
Future<DescribeClustersResult> describeClustersAsync(DescribeClustersRequest describeClustersRequest, AsyncHandler<DescribeClustersRequest,DescribeClustersResult> asyncHandler)
Describes one or more of your clusters.
describeClustersRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DescribeClustersResult> describeClustersAsync()
Future<DescribeClustersResult> describeClustersAsync(AsyncHandler<DescribeClustersRequest,DescribeClustersResult> asyncHandler)
Future<DescribeContainerInstancesResult> describeContainerInstancesAsync(DescribeContainerInstancesRequest describeContainerInstancesRequest)
Describes Amazon EC2 Container Service container instances. Returns metadata about registered and remaining resources on each container instance requested.
describeContainerInstancesRequest -
Future<DescribeContainerInstancesResult> describeContainerInstancesAsync(DescribeContainerInstancesRequest describeContainerInstancesRequest, AsyncHandler<DescribeContainerInstancesRequest,DescribeContainerInstancesResult> asyncHandler)
Describes Amazon EC2 Container Service container instances. Returns metadata about registered and remaining resources on each container instance requested.
describeContainerInstancesRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DescribeServicesResult> describeServicesAsync(DescribeServicesRequest describeServicesRequest)
Describes the specified services running in your cluster.
describeServicesRequest -
Future<DescribeServicesResult> describeServicesAsync(DescribeServicesRequest describeServicesRequest, AsyncHandler<DescribeServicesRequest,DescribeServicesResult> asyncHandler)
Describes the specified services running in your cluster.
describeServicesRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DescribeTaskDefinitionResult> describeTaskDefinitionAsync(DescribeTaskDefinitionRequest describeTaskDefinitionRequest)
Describes a task definition. You can specify a family and revision to find information about a specific task definition, or you can simply specify the family to find the latest ACTIVE revision in that family.
You can only describe INACTIVE task definitions while an active task or service references them.
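For illustration (assumes the ecs client from the introductory example; the family name is a placeholder):
// "web-app" alone resolves to the latest ACTIVE revision of the family;
// "web-app:3" would target revision 3 specifically.
Future<DescribeTaskDefinitionResult> pending = ecs.describeTaskDefinitionAsync(
        new DescribeTaskDefinitionRequest().withTaskDefinition("web-app"));
System.out.println(pending.get().getTaskDefinition().getRevision());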
describeTaskDefinitionRequest -
Future<DescribeTaskDefinitionResult> describeTaskDefinitionAsync(DescribeTaskDefinitionRequest describeTaskDefinitionRequest, AsyncHandler<DescribeTaskDefinitionRequest,DescribeTaskDefinitionResult> asyncHandler)
Describes a task definition. You can specify a family and revision to find information about a specific task definition, or you can simply specify the family to find the latest ACTIVE revision in that family.
You can only describe INACTIVE task definitions while an active task or service references them.
describeTaskDefinitionRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DescribeTasksResult> describeTasksAsync(DescribeTasksRequest describeTasksRequest)
Describes a specified task or tasks.
describeTasksRequest -
Future<DescribeTasksResult> describeTasksAsync(DescribeTasksRequest describeTasksRequest, AsyncHandler<DescribeTasksRequest,DescribeTasksResult> asyncHandler)
Describes a specified task or tasks.
describeTasksRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DiscoverPollEndpointResult> discoverPollEndpointAsync(DiscoverPollEndpointRequest discoverPollEndpointRequest)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Returns an endpoint for the Amazon EC2 Container Service agent to poll for updates.
discoverPollEndpointRequest -
Future<DiscoverPollEndpointResult> discoverPollEndpointAsync(DiscoverPollEndpointRequest discoverPollEndpointRequest, AsyncHandler<DiscoverPollEndpointRequest,DiscoverPollEndpointResult> asyncHandler)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Returns an endpoint for the Amazon EC2 Container Service agent to poll for updates.
discoverPollEndpointRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<DiscoverPollEndpointResult> discoverPollEndpointAsync()
Future<DiscoverPollEndpointResult> discoverPollEndpointAsync(AsyncHandler<DiscoverPollEndpointRequest,DiscoverPollEndpointResult> asyncHandler)
Future<ListClustersResult> listClustersAsync(ListClustersRequest listClustersRequest)
Returns a list of existing clusters.
listClustersRequest -
Future<ListClustersResult> listClustersAsync(ListClustersRequest listClustersRequest, AsyncHandler<ListClustersRequest,ListClustersResult> asyncHandler)
Returns a list of existing clusters.
listClustersRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListClustersResult> listClustersAsync()
listClustersAsync(ListClustersRequest)
Future<ListClustersResult> listClustersAsync(AsyncHandler<ListClustersRequest,ListClustersResult> asyncHandler)
Future<ListContainerInstancesResult> listContainerInstancesAsync(ListContainerInstancesRequest listContainerInstancesRequest)
Returns a list of container instances in a specified cluster.
listContainerInstancesRequest -
Future<ListContainerInstancesResult> listContainerInstancesAsync(ListContainerInstancesRequest listContainerInstancesRequest, AsyncHandler<ListContainerInstancesRequest,ListContainerInstancesResult> asyncHandler)
Returns a list of container instances in a specified cluster.
listContainerInstancesRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListContainerInstancesResult> listContainerInstancesAsync()
Future<ListContainerInstancesResult> listContainerInstancesAsync(AsyncHandler<ListContainerInstancesRequest,ListContainerInstancesResult> asyncHandler)
Future<ListServicesResult> listServicesAsync(ListServicesRequest listServicesRequest)
Lists the services that are running in a specified cluster.
listServicesRequest -
Future<ListServicesResult> listServicesAsync(ListServicesRequest listServicesRequest, AsyncHandler<ListServicesRequest,ListServicesResult> asyncHandler)
Lists the services that are running in a specified cluster.
listServicesRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListServicesResult> listServicesAsync()
listServicesAsync(ListServicesRequest)
Future<ListServicesResult> listServicesAsync(AsyncHandler<ListServicesRequest,ListServicesResult> asyncHandler)
Future<ListTaskDefinitionFamiliesResult> listTaskDefinitionFamiliesAsync(ListTaskDefinitionFamiliesRequest listTaskDefinitionFamiliesRequest)
Returns a list of task definition families that are registered to your account (which may include task definition families that no longer have any ACTIVE task definition revisions).
You can filter out task definition families that do not contain any ACTIVE task definition revisions by setting the status parameter to ACTIVE. You can also filter the results with the familyPrefix parameter.
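A hedged sketch of filtering by status and family prefix (assumes the ecs client from the introductory example; the prefix is a placeholder):
Future<ListTaskDefinitionFamiliesResult> pending = ecs.listTaskDefinitionFamiliesAsync(
        new ListTaskDefinitionFamiliesRequest()
                .withStatus("ACTIVE")         // hide families with no ACTIVE revisions
                .withFamilyPrefix("web"));    // only families whose name starts with "web"
System.out.println(pending.get().getFamilies());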
listTaskDefinitionFamiliesRequest -
Future<ListTaskDefinitionFamiliesResult> listTaskDefinitionFamiliesAsync(ListTaskDefinitionFamiliesRequest listTaskDefinitionFamiliesRequest, AsyncHandler<ListTaskDefinitionFamiliesRequest,ListTaskDefinitionFamiliesResult> asyncHandler)
Returns a list of task definition families that are registered to your account (which may include task definition families that no longer have any ACTIVE task definition revisions).
You can filter out task definition families that do not contain any ACTIVE task definition revisions by setting the status parameter to ACTIVE. You can also filter the results with the familyPrefix parameter.
listTaskDefinitionFamiliesRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListTaskDefinitionFamiliesResult> listTaskDefinitionFamiliesAsync()
Future<ListTaskDefinitionFamiliesResult> listTaskDefinitionFamiliesAsync(AsyncHandler<ListTaskDefinitionFamiliesRequest,ListTaskDefinitionFamiliesResult> asyncHandler)
Future<ListTaskDefinitionsResult> listTaskDefinitionsAsync(ListTaskDefinitionsRequest listTaskDefinitionsRequest)
Returns a list of task definitions that are registered to your account. You can filter the results by family name with the familyPrefix parameter or by status with the status parameter.
listTaskDefinitionsRequest -
Future<ListTaskDefinitionsResult> listTaskDefinitionsAsync(ListTaskDefinitionsRequest listTaskDefinitionsRequest, AsyncHandler<ListTaskDefinitionsRequest,ListTaskDefinitionsResult> asyncHandler)
Returns a list of task definitions that are registered to your account. You can filter the results by family name with the familyPrefix parameter or by status with the status parameter.
listTaskDefinitionsRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListTaskDefinitionsResult> listTaskDefinitionsAsync()
Future<ListTaskDefinitionsResult> listTaskDefinitionsAsync(AsyncHandler<ListTaskDefinitionsRequest,ListTaskDefinitionsResult> asyncHandler)
Future<ListTasksResult> listTasksAsync(ListTasksRequest listTasksRequest)
Returns a list of tasks for a specified cluster. You can filter the results by family name, by a particular container instance, or by the desired status of the task with the family, containerInstance, and desiredStatus parameters.
Recently stopped tasks might appear in the returned results. Currently, stopped tasks appear in the returned results for at least one hour.
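For illustration, a sketch that combines the filters described above (assumes the ecs client from the introductory example; filter values are placeholders):
Future<ListTasksResult> pending = ecs.listTasksAsync(new ListTasksRequest()
        .withCluster("my-cluster")
        .withFamily("web-app")            // tasks from this task definition family
        .withDesiredStatus("STOPPED"));   // or "RUNNING"; stopped tasks remain listed for at least an hour
System.out.println(pending.get().getTaskArns());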
listTasksRequest -
Future<ListTasksResult> listTasksAsync(ListTasksRequest listTasksRequest, AsyncHandler<ListTasksRequest,ListTasksResult> asyncHandler)
Returns a list of tasks for a specified cluster. You can filter the results by family name, by a particular container instance, or by the desired status of the task with the family, containerInstance, and desiredStatus parameters.
Recently stopped tasks might appear in the returned results. Currently, stopped tasks appear in the returned results for at least one hour.
listTasksRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<ListTasksResult> listTasksAsync()
listTasksAsync(ListTasksRequest)
Future<ListTasksResult> listTasksAsync(AsyncHandler<ListTasksRequest,ListTasksResult> asyncHandler)
Future<RegisterContainerInstanceResult> registerContainerInstanceAsync(RegisterContainerInstanceRequest registerContainerInstanceRequest)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Registers an EC2 instance into the specified cluster. This instance becomes available to place containers on.
registerContainerInstanceRequest -
Future<RegisterContainerInstanceResult> registerContainerInstanceAsync(RegisterContainerInstanceRequest registerContainerInstanceRequest, AsyncHandler<RegisterContainerInstanceRequest,RegisterContainerInstanceResult> asyncHandler)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Registers an EC2 instance into the specified cluster. This instance becomes available to place containers on.
registerContainerInstanceRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<RegisterTaskDefinitionResult> registerTaskDefinitionAsync(RegisterTaskDefinitionRequest registerTaskDefinitionRequest)
Registers a new task definition from the supplied family and containerDefinitions. Optionally, you can add data volumes to your containers with the volumes parameter. For more information about task definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon EC2 Container Service Developer Guide.
You can specify an IAM role for your task with the taskRoleArn parameter. When you specify an IAM role for a task, its containers can then use the latest versions of the AWS CLI or SDKs to make API requests to the AWS services that are specified in the IAM policy associated with the role. For more information, see IAM Roles for Tasks in the Amazon EC2 Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition with the networkMode parameter. The available network modes correspond to those described in Network settings in the Docker run reference.
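A hedged sketch of registering a single-container task definition with a task role and an explicit network mode (assumes the ecs client from the introductory example; the family, image, role ARN, and resource sizes are placeholders):
Future<RegisterTaskDefinitionResult> pending = ecs.registerTaskDefinitionAsync(
        new RegisterTaskDefinitionRequest()
                .withFamily("web-app")
                .withTaskRoleArn("arn:aws:iam::123456789012:role/web-app-task-role")
                .withNetworkMode("bridge")
                .withContainerDefinitions(new ContainerDefinition()
                        .withName("web")
                        .withImage("nginx:latest")
                        .withMemory(256)
                        .withPortMappings(new PortMapping().withContainerPort(80))));
System.out.println(pending.get().getTaskDefinition().getTaskDefinitionArn());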
registerTaskDefinitionRequest -
Future<RegisterTaskDefinitionResult> registerTaskDefinitionAsync(RegisterTaskDefinitionRequest registerTaskDefinitionRequest, AsyncHandler<RegisterTaskDefinitionRequest,RegisterTaskDefinitionResult> asyncHandler)
Registers a new task definition from the supplied family and containerDefinitions. Optionally, you can add data volumes to your containers with the volumes parameter. For more information about task definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon EC2 Container Service Developer Guide.
You can specify an IAM role for your task with the taskRoleArn parameter. When you specify an IAM role for a task, its containers can then use the latest versions of the AWS CLI or SDKs to make API requests to the AWS services that are specified in the IAM policy associated with the role. For more information, see IAM Roles for Tasks in the Amazon EC2 Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition with the networkMode parameter. The available network modes correspond to those described in Network settings in the Docker run reference.
registerTaskDefinitionRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<RunTaskResult> runTaskAsync(RunTaskRequest runTaskRequest)
Starts a task using random placement and the default Amazon ECS scheduler. To use your own scheduler or place a task on a specific container instance, use StartTask instead.
The count parameter is limited to 10 tasks per call.
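A minimal sketch (assumes the ecs client from the introductory example; names are placeholders):
Future<RunTaskResult> pending = ecs.runTaskAsync(new RunTaskRequest()
        .withCluster("my-cluster")
        .withTaskDefinition("web-app:3")
        .withCount(2));                   // at most 10 tasks per call
RunTaskResult result = pending.get();
result.getTasks().forEach(t -> System.out.println(t.getTaskArn()));
// Placement problems are reported as failures rather than exceptions.
result.getFailures().forEach(f -> System.out.println(f.getReason()));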
runTaskRequest -
Future<RunTaskResult> runTaskAsync(RunTaskRequest runTaskRequest, AsyncHandler<RunTaskRequest,RunTaskResult> asyncHandler)
Starts a task using random placement and the default Amazon ECS scheduler. To use your own scheduler or place a task on a specific container instance, use StartTask instead.
The count parameter is limited to 10 tasks per call.
runTaskRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<StartTaskResult> startTaskAsync(StartTaskRequest startTaskRequest)
Starts a new task from the specified task definition on the specified container instance or instances. To use the default Amazon ECS scheduler to place your task, use RunTask instead.
The list of container instances to start tasks on is limited to 10.
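For illustration (assumes the ecs client from the introductory example; the container instance ID is a placeholder):
Future<StartTaskResult> pending = ecs.startTaskAsync(new StartTaskRequest()
        .withCluster("my-cluster")
        .withTaskDefinition("web-app:3")
        .withContainerInstances("0b1614ad-example"));   // up to 10 container instance IDs or ARNs
pending.get().getTasks().forEach(t -> System.out.println(t.getLastStatus()));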
startTaskRequest -
Future<StartTaskResult> startTaskAsync(StartTaskRequest startTaskRequest, AsyncHandler<StartTaskRequest,StartTaskResult> asyncHandler)
Starts a new task from the specified task definition on the specified container instance or instances. To use the default Amazon ECS scheduler to place your task, use RunTask instead.
The list of container instances to start tasks on is limited to 10.
startTaskRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<StopTaskResult> stopTaskAsync(StopTaskRequest stopTaskRequest)
Stops a running task.
When StopTask is called on a task, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
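A minimal sketch (assumes the ecs client from the introductory example; the task ID and reason are placeholders):
Future<StopTaskResult> pending = ecs.stopTaskAsync(new StopTaskRequest()
        .withCluster("my-cluster")
        .withTask("1dc5c17a-example")
        .withReason("Rolling restart"));   // recorded on the stopped task
System.out.println(pending.get().getTask().getDesiredStatus());   // STOPPED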
stopTaskRequest -
Future<StopTaskResult> stopTaskAsync(StopTaskRequest stopTaskRequest, AsyncHandler<StopTaskRequest,StopTaskResult> asyncHandler)
Stops a running task.
When StopTask is called on a task, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
stopTaskRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<SubmitContainerStateChangeResult> submitContainerStateChangeAsync(SubmitContainerStateChangeRequest submitContainerStateChangeRequest)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Sent to acknowledge that a container changed states.
submitContainerStateChangeRequest -
Future<SubmitContainerStateChangeResult> submitContainerStateChangeAsync(SubmitContainerStateChangeRequest submitContainerStateChangeRequest, AsyncHandler<SubmitContainerStateChangeRequest,SubmitContainerStateChangeResult> asyncHandler)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Sent to acknowledge that a container changed states.
submitContainerStateChangeRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<SubmitContainerStateChangeResult> submitContainerStateChangeAsync()
Future<SubmitContainerStateChangeResult> submitContainerStateChangeAsync(AsyncHandler<SubmitContainerStateChangeRequest,SubmitContainerStateChangeResult> asyncHandler)
Future<SubmitTaskStateChangeResult> submitTaskStateChangeAsync(SubmitTaskStateChangeRequest submitTaskStateChangeRequest)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Sent to acknowledge that a task changed states.
submitTaskStateChangeRequest -
Future<SubmitTaskStateChangeResult> submitTaskStateChangeAsync(SubmitTaskStateChangeRequest submitTaskStateChangeRequest, AsyncHandler<SubmitTaskStateChangeRequest,SubmitTaskStateChangeResult> asyncHandler)
This action is only used by the Amazon EC2 Container Service agent, and it is not intended for use outside of the agent.
Sent to acknowledge that a task changed states.
submitTaskStateChangeRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<UpdateContainerAgentResult> updateContainerAgentAsync(UpdateContainerAgentRequest updateContainerAgentRequest)
Updates the Amazon ECS container agent on a specified container instance. Updating the Amazon ECS container agent does not interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with the Amazon ECS-optimized AMI or another operating system.
UpdateContainerAgent requires the Amazon ECS-optimized AMI or Amazon Linux with the ecs-init service installed and running. For help updating the Amazon ECS container agent on other operating systems, see Manually Updating the Amazon ECS Container Agent in the Amazon EC2 Container Service Developer Guide.
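For illustration (assumes the ecs client from the introductory example; the container instance ID is a placeholder and must be running the Amazon ECS-optimized AMI or have ecs-init installed):
Future<UpdateContainerAgentResult> pending = ecs.updateContainerAgentAsync(
        new UpdateContainerAgentRequest()
                .withCluster("my-cluster")
                .withContainerInstance("0b1614ad-example"));
System.out.println(pending.get().getContainerInstance().getAgentUpdateStatus());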
updateContainerAgentRequest -
Future<UpdateContainerAgentResult> updateContainerAgentAsync(UpdateContainerAgentRequest updateContainerAgentRequest, AsyncHandler<UpdateContainerAgentRequest,UpdateContainerAgentResult> asyncHandler)
Updates the Amazon ECS container agent on a specified container instance. Updating the Amazon ECS container agent does not interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with the Amazon ECS-optimized AMI or another operating system.
UpdateContainerAgent requires the Amazon ECS-optimized AMI or Amazon Linux with the ecs-init service installed and running. For help updating the Amazon ECS container agent on other operating systems, see Manually Updating the Amazon ECS Container Agent in the Amazon EC2 Container Service Developer Guide.
updateContainerAgentRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Future<UpdateServiceResult> updateServiceAsync(UpdateServiceRequest updateServiceRequest)
Modifies the desired count, deployment configuration, or task definition used in a service.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
You can use UpdateService to modify your task definition and deploy a new version of your service.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
If the minimumHealthyPercent is below 100%, the scheduler can ignore the desiredCount temporarily during a deployment. For example, if your service has a desiredCount of four tasks, a minimumHealthyPercent of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they are in the RUNNING state and the container instance they are hosted on is reported as healthy by the load balancer.
The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if your service has a desiredCount of four tasks, a maximumPercent value of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
When the service scheduler launches new tasks, it attempts to balance them across the Availability Zones in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
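Putting the parameters above together, a hedged sketch of deploying a new task definition revision while changing the desired count and deployment configuration (assumes the ecs client from the introductory example; names and counts are placeholders):
Future<UpdateServiceResult> pending = ecs.updateServiceAsync(new UpdateServiceRequest()
        .withCluster("my-cluster")
        .withService("web")
        .withTaskDefinition("web-app:4")    // new revision triggers a deployment
        .withDesiredCount(6)
        .withDeploymentConfiguration(new DeploymentConfiguration()
                .withMinimumHealthyPercent(100)
                .withMaximumPercent(200)));
System.out.println(pending.get().getService().getDeployments());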
updateServiceRequest -
Future<UpdateServiceResult> updateServiceAsync(UpdateServiceRequest updateServiceRequest, AsyncHandler<UpdateServiceRequest,UpdateServiceResult> asyncHandler)
Modifies the desired count, deployment configuration, or task definition used in a service.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
You can use UpdateService to modify your task definition and deploy a new version of your service.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
If the minimumHealthyPercent is below 100%, the scheduler can ignore the desiredCount temporarily during a deployment. For example, if your service has a desiredCount of four tasks, a minimumHealthyPercent of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in the RUNNING state; tasks for services that do use a load balancer are considered healthy if they are in the RUNNING state and the container instance they are hosted on is reported as healthy by the load balancer.
The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if your service has a desiredCount of four tasks, a maximumPercent value of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
When the service scheduler launches new tasks, it attempts to balance them across the Availability Zones in your cluster with the following logic:
Determine which of the container instances in your cluster can support your service's task definition (for example, they have the required CPU, memory, ports, and container instance attributes).
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
updateServiceRequest -
asyncHandler - Asynchronous callback handler for events in the lifecycle of the request. Users can provide an implementation of the callback methods in this interface to receive notification of successful or unsuccessful completion of the operation.
Copyright © 2013 Amazon Web Services, Inc. All Rights Reserved.