@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class ContainerDefinition extends Object implements Serializable, Cloneable, StructuredPojo
Describes the container, as part of a model definition.
| Constructor and Description |
|---|
| `ContainerDefinition()` |
| Modifier and Type | Method and Description |
|---|---|
| `ContainerDefinition` | `addEnvironmentEntry(String key, String value)`<br>Adds a single `Environment` entry. |
| `ContainerDefinition` | `clearEnvironmentEntries()`<br>Removes all the entries added into `Environment`. |
| `ContainerDefinition` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `String` | `getContainerHostname()`<br>This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `Map<String,String>` | `getEnvironment()`<br>The environment variables to set in the Docker container. |
| `String` | `getImage()`<br>The path where inference code is stored. |
| `ImageConfig` | `getImageConfig()`<br>Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `String` | `getMode()`<br>Whether the container hosts a single model or multiple models. |
| `String` | `getModelDataUrl()`<br>The S3 path where the model artifacts, which result from model training, are stored. |
| `String` | `getModelPackageName()`<br>The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `MultiModelConfig` | `getMultiModelConfig()`<br>Specifies additional configuration for multi-model endpoints. |
| `int` | `hashCode()` |
| `void` | `marshall(ProtocolMarshaller protocolMarshaller)`<br>Marshalls this structured data using the given `ProtocolMarshaller`. |
| `void` | `setContainerHostname(String containerHostname)`<br>This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `void` | `setEnvironment(Map<String,String> environment)`<br>The environment variables to set in the Docker container. |
| `void` | `setImage(String image)`<br>The path where inference code is stored. |
| `void` | `setImageConfig(ImageConfig imageConfig)`<br>Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `void` | `setMode(String mode)`<br>Whether the container hosts a single model or multiple models. |
| `void` | `setModelDataUrl(String modelDataUrl)`<br>The S3 path where the model artifacts, which result from model training, are stored. |
| `void` | `setModelPackageName(String modelPackageName)`<br>The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `void` | `setMultiModelConfig(MultiModelConfig multiModelConfig)`<br>Specifies additional configuration for multi-model endpoints. |
| `String` | `toString()`<br>Returns a string representation of this object. |
| `ContainerDefinition` | `withContainerHostname(String containerHostname)`<br>This parameter is ignored for models that contain only a `PrimaryContainer`. |
| `ContainerDefinition` | `withEnvironment(Map<String,String> environment)`<br>The environment variables to set in the Docker container. |
| `ContainerDefinition` | `withImage(String image)`<br>The path where inference code is stored. |
| `ContainerDefinition` | `withImageConfig(ImageConfig imageConfig)`<br>Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). |
| `ContainerDefinition` | `withMode(ContainerMode mode)`<br>Whether the container hosts a single model or multiple models. |
| `ContainerDefinition` | `withMode(String mode)`<br>Whether the container hosts a single model or multiple models. |
| `ContainerDefinition` | `withModelDataUrl(String modelDataUrl)`<br>The S3 path where the model artifacts, which result from model training, are stored. |
| `ContainerDefinition` | `withModelPackageName(String modelPackageName)`<br>The name or Amazon Resource Name (ARN) of the model package to use to create the model. |
| `ContainerDefinition` | `withMultiModelConfig(MultiModelConfig multiModelConfig)`<br>Specifies additional configuration for multi-model endpoints. |
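The `with*` and `add*` methods above return the `ContainerDefinition` itself, so configuration calls can be chained fluently. The sketch below illustrates that pattern with a minimal stand-in class, not the real SDK type; the field names mirror the methods above, and the image URI and S3 path are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in (NOT the real SDK class) illustrating the fluent pattern
// used by ContainerDefinition: each with*/add* method sets a field and
// returns this, so calls can be chained.
public class ContainerDefinitionSketch {
    private String image;
    private String modelDataUrl;
    private final Map<String, String> environment = new HashMap<>();

    public ContainerDefinitionSketch withImage(String image) {
        this.image = image;
        return this;
    }

    public ContainerDefinitionSketch withModelDataUrl(String modelDataUrl) {
        this.modelDataUrl = modelDataUrl;
        return this;
    }

    public ContainerDefinitionSketch addEnvironmentEntry(String key, String value) {
        this.environment.put(key, value);
        return this;
    }

    public String getImage() { return image; }
    public String getModelDataUrl() { return modelDataUrl; }
    public Map<String, String> getEnvironment() { return environment; }

    public static void main(String[] args) {
        // Placeholder image URI and bucket, for illustration only.
        ContainerDefinitionSketch container = new ContainerDefinitionSketch()
                .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest")
                .withModelDataUrl("s3://my-bucket/output/model.tar.gz")
                .addEnvironmentEntry("SAGEMAKER_REGION", "us-east-1");
        System.out.println(container.getImage());
    }
}
```

With the real class, the same chain ends in a model-creation request; the stand-in only demonstrates why each `with*` method returns `ContainerDefinition` rather than `void`.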
public void setContainerHostname(String containerHostname)

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` parameter of any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Parameters:
`containerHostname` - The container hostname. Ignored for models that contain only a `PrimaryContainer`; for a `ContainerDefinition` in an inference pipeline, follows the naming rules described above.
public String getContainerHostname()

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` parameter of any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Returns:
The container hostname. Ignored for models that contain only a `PrimaryContainer`; for a `ContainerDefinition` in an inference pipeline, follows the naming rules described above.
public ContainerDefinition withContainerHostname(String containerHostname)

This parameter is ignored for models that contain only a `PrimaryContainer`.

When a `ContainerDefinition` is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a `ContainerDefinition` that is part of an inference pipeline, a unique name is automatically assigned based on the position of the `ContainerDefinition` in the pipeline. If you specify a value for the `ContainerHostName` parameter of any `ContainerDefinition` that is part of an inference pipeline, you must specify a value for the `ContainerHostName` parameter of every `ContainerDefinition` in that pipeline.

Parameters:
`containerHostname` - The container hostname. Ignored for models that contain only a `PrimaryContainer`; for a `ContainerDefinition` in an inference pipeline, follows the naming rules described above.
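The all-or-none rule above (if `ContainerHostName` is set for any container in an inference pipeline, it must be set for every container in that pipeline) can be sketched as a simple pre-flight check. The helper name and the use of `null` for an unset hostname are illustrative assumptions, not part of the SDK.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class HostnameRuleSketch {
    // Returns true when a pipeline's hostnames follow the documented rule:
    // either no ContainerDefinition sets ContainerHostName (all null here),
    // or every ContainerDefinition sets it (none null).
    public static boolean hostnamesConsistent(List<String> hostnames) {
        long named = hostnames.stream().filter(Objects::nonNull).count();
        return named == 0 || named == hostnames.size();
    }

    public static void main(String[] args) {
        System.out.println(hostnamesConsistent(Arrays.asList(null, null)));        // valid: none set
        System.out.println(hostnamesConsistent(Arrays.asList("prep", "predict"))); // valid: all set
        System.out.println(hostnamesConsistent(Arrays.asList("prep", null)));      // invalid: mixed
    }
}
```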
public void setImage(String image)

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

Parameters:
`image` - The path where inference code is stored, in `registry/repository[:tag]` or `registry/repository[@digest]` format.

public String getImage()

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

Returns:
The path where inference code is stored, in `registry/repository[:tag]` or `registry/repository[@digest]` format.

public ContainerDefinition withImage(String image)

The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by Amazon SageMaker, the inference code must meet Amazon SageMaker requirements. Amazon SageMaker supports both `registry/repository[:tag]` and `registry/repository[@digest]` image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

Parameters:
`image` - The path where inference code is stored, in `registry/repository[:tag]` or `registry/repository[@digest]` format.

public void setImageConfig(ImageConfig imageConfig)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

Parameters:
`imageConfig` - Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).

public ImageConfig getImageConfig()

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

public ContainerDefinition withImageConfig(ImageConfig imageConfig)

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

Parameters:
`imageConfig` - Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).

public void setMode(String mode)

Whether the container hosts a single model or multiple models.

Parameters:
`mode` - Whether the container hosts a single model or multiple models.

See Also:
`ContainerMode`

public String getMode()

Whether the container hosts a single model or multiple models.

See Also:
`ContainerMode`

public ContainerDefinition withMode(String mode)

Whether the container hosts a single model or multiple models.

Parameters:
`mode` - Whether the container hosts a single model or multiple models.

See Also:
`ContainerMode`

public ContainerDefinition withMode(ContainerMode mode)

Whether the container hosts a single model or multiple models.

Parameters:
`mode` - Whether the container hosts a single model or multiple models.

See Also:
`ContainerMode`
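When you use the `String` overloads (`setMode`/`withMode(String)`), the string must match a `ContainerMode` value exactly. A small check like the one below can catch typos early; the value strings `"SingleModel"` and `"MultiModel"` are assumptions based on the `ContainerMode` enum, so verify them against your SDK version.

```java
import java.util.Set;

public class ModeCheckSketch {
    // Assumed ContainerMode values; confirm against the ContainerMode enum
    // in your version of the SDK before relying on this list.
    private static final Set<String> VALID_MODES = Set.of("SingleModel", "MultiModel");

    public static boolean isValidMode(String mode) {
        return VALID_MODES.contains(mode);
    }

    public static void main(String[] args) {
        System.out.println(isValidMode("MultiModel"));  // true
        System.out.println(isValidMode("multi-model")); // false: case and format matter
    }
}
```

In real code, preferring the typed `withMode(ContainerMode)` overload avoids this class of error entirely, since the compiler rejects unknown values.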
public void setModelDataUrl(String modelDataUrl)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Parameters:
`modelDataUrl` - The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix) and must satisfy the constraints described above.

public String getModelDataUrl()

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Returns:
The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix).

public ContainerDefinition withModelDataUrl(String modelDataUrl)

The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix). The S3 path is required for Amazon SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, Amazon SageMaker requires that you provide an S3 path to the model artifacts in `ModelDataUrl`.

Parameters:
`modelDataUrl` - The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (`.tar.gz` suffix) and must satisfy the constraints described above.
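The surface constraints on `ModelDataUrl` (an S3 path pointing at a single gzip-compressed tar archive) can be pre-checked before creating the model. The helper below is an illustrative sketch, not an SDK API; it does not verify that the object exists or that the bucket is in the same region as the model.

```java
public class ModelDataUrlSketch {
    // Checks only the documented surface constraints: an S3 URI that points
    // at a single gzip-compressed tar archive (.tar.gz). It does NOT check
    // bucket region, object existence, or permissions.
    public static boolean looksLikeModelDataUrl(String url) {
        return url != null && url.startsWith("s3://") && url.endsWith(".tar.gz");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeModelDataUrl("s3://my-bucket/output/model.tar.gz")); // true
        System.out.println(looksLikeModelDataUrl("s3://my-bucket/output/model.zip"));    // false
    }
}
```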
public Map<String,String> getEnvironment()

The environment variables to set in the Docker container. Each key and value in the `Environment` string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

Returns:
The environment variables to set in the Docker container.

public void setEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container. Each key and value in the `Environment` string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

Parameters:
`environment` - The environment variables to set in the Docker container.

public ContainerDefinition withEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container. Each key and value in the `Environment` string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.

Parameters:
`environment` - The environment variables to set in the Docker container.

public ContainerDefinition addEnvironmentEntry(String key, String value)

Adds a single `Environment` entry.

public ContainerDefinition clearEnvironmentEntries()

Removes all the entries added into `Environment`.
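The documented `Environment` limits (at most 16 entries; each key and each value at most 1024 characters) can be validated before calling `setEnvironment`. The helper below is a sketch; the limit constants come straight from the description above.

```java
import java.util.Map;

public class EnvironmentCheckSketch {
    // Limits taken from the Environment description: up to 16 entries,
    // each key and value up to 1024 characters.
    private static final int MAX_ENTRIES = 16;
    private static final int MAX_LENGTH = 1024;

    // Returns true when the map satisfies the documented Environment limits.
    public static boolean isValidEnvironment(Map<String, String> env) {
        if (env.size() > MAX_ENTRIES) {
            return false;
        }
        for (Map.Entry<String, String> entry : env.entrySet()) {
            if (entry.getKey().length() > MAX_LENGTH
                    || entry.getValue().length() > MAX_LENGTH) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidEnvironment(Map.of("LOG_LEVEL", "info")));      // true
        System.out.println(isValidEnvironment(Map.of("KEY", "x".repeat(2000)))); // false: value too long
    }
}
```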
public void setModelPackageName(String modelPackageName)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Parameters:
`modelPackageName` - The name or Amazon Resource Name (ARN) of the model package to use to create the model.

public String getModelPackageName()

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

public ContainerDefinition withModelPackageName(String modelPackageName)

The name or Amazon Resource Name (ARN) of the model package to use to create the model.

Parameters:
`modelPackageName` - The name or Amazon Resource Name (ARN) of the model package to use to create the model.

public void setMultiModelConfig(MultiModelConfig multiModelConfig)

Specifies additional configuration for multi-model endpoints.

Parameters:
`multiModelConfig` - Specifies additional configuration for multi-model endpoints.

public MultiModelConfig getMultiModelConfig()

Specifies additional configuration for multi-model endpoints.

public ContainerDefinition withMultiModelConfig(MultiModelConfig multiModelConfig)

Specifies additional configuration for multi-model endpoints.

Parameters:
`multiModelConfig` - Specifies additional configuration for multi-model endpoints.

public String toString()

Returns a string representation of this object.

Overrides:
`toString` in class `Object`

See Also:
`Object.toString()`
public ContainerDefinition clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given `ProtocolMarshaller`.

Specified by:
`marshall` in interface `StructuredPojo`

Parameters:
`protocolMarshaller` - Implementation of `ProtocolMarshaller` used to marshall this object's data.