Class ProductionVariant
- java.lang.Object
  - software.amazon.awssdk.services.sagemaker.model.ProductionVariant
-
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<ProductionVariant.Builder,ProductionVariant>
@Generated("software.amazon.awssdk:codegen") public final class ProductionVariant extends Object implements SdkPojo, Serializable, ToCopyableBuilder<ProductionVariant.Builder,ProductionVariant>
Identifies a model that you want to host and the resources chosen to deploy for hosting it. If you are deploying multiple models, tell SageMaker how to distribute traffic among the models by specifying variant weights. For more information on production variants, see Production Variants.
- See Also:
- Serialized Form
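As a sketch of typical usage, a ProductionVariant is assembled through its builder. The variant names, model names, weights, and instance type below are placeholder choices, and the snippet assumes the SageMaker module of the AWS SDK for Java 2.x is on the classpath:

```java
import software.amazon.awssdk.services.sagemaker.model.ProductionVariant;
import software.amazon.awssdk.services.sagemaker.model.ProductionVariantInstanceType;

public class VariantExample {
    public static void main(String[] args) {
        // Two variants splitting traffic 70/30 between two models.
        // "my-model-a" / "my-model-b" and the variant names are placeholders.
        ProductionVariant variantA = ProductionVariant.builder()
                .variantName("variant-a")
                .modelName("my-model-a")
                .instanceType(ProductionVariantInstanceType.ML_M5_LARGE)
                .initialInstanceCount(1)
                .initialVariantWeight(0.7f)
                .build();

        // toBuilder() copies the first variant; only the fields that
        // differ need to be set again.
        ProductionVariant variantB = variantA.toBuilder()
                .variantName("variant-b")
                .modelName("my-model-b")
                .initialVariantWeight(0.3f)
                .build();

        System.out.println(variantA.variantName() + " / " + variantB.variantName());
    }
}
```

Both variants would then be passed to an endpoint configuration; the toBuilder()/build() round trip is the SDK's standard copy-and-modify idiom for immutable model classes.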
-
-
Nested Class Summary
Nested Classes Modifier and Type Class Description static interface
ProductionVariant.Builder
-
Method Summary
ProductionVariantAcceleratorType acceleratorType()
    The size of the Elastic Inference (EI) instance to use for the production variant.
String acceleratorTypeAsString()
    The size of the Elastic Inference (EI) instance to use for the production variant.
static ProductionVariant.Builder builder()
Integer containerStartupHealthCheckTimeoutInSeconds()
    The timeout value, in seconds, for your inference container to pass the health check by SageMaker Hosting.
ProductionVariantCoreDumpConfig coreDumpConfig()
    Specifies configuration for a core dump from the model container when the process crashes.
Boolean enableSSMAccess()
    You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint.
boolean equals(Object obj)
boolean equalsBySdkFields(Object obj)
<T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
int hashCode()
ProductionVariantInferenceAmiVersion inferenceAmiVersion()
    Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images.
String inferenceAmiVersionAsString()
    Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images.
Integer initialInstanceCount()
    Number of instances to launch initially.
Float initialVariantWeight()
    Determines initial traffic distribution among all of the models that you specify in the endpoint configuration.
ProductionVariantInstanceType instanceType()
    The ML compute instance type.
String instanceTypeAsString()
    The ML compute instance type.
ProductionVariantManagedInstanceScaling managedInstanceScaling()
    Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
Integer modelDataDownloadTimeoutInSeconds()
    The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
String modelName()
    The name of the model that you want to host.
ProductionVariantRoutingConfig routingConfig()
    Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
List<SdkField<?>> sdkFields()
static Class<? extends ProductionVariant.Builder> serializableBuilderClass()
ProductionVariantServerlessConfig serverlessConfig()
    The serverless configuration for an endpoint.
ProductionVariant.Builder toBuilder()
String toString()
    Returns a string representation of this object.
String variantName()
    The name of the production variant.
Integer volumeSizeInGB()
    The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant.
-
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
-
Method Detail
-
variantName
public final String variantName()
The name of the production variant.
- Returns:
- The name of the production variant.
-
modelName
public final String modelName()
The name of the model that you want to host. This is the name that you specified when creating the model.
- Returns:
- The name of the model that you want to host. This is the name that you specified when creating the model.
-
initialInstanceCount
public final Integer initialInstanceCount()
Number of instances to launch initially.
- Returns:
- Number of instances to launch initially.
-
instanceType
public final ProductionVariantInstanceType instanceType()
The ML compute instance type.
If the service returns an enum value that is not available in the current SDK version, instanceType will return ProductionVariantInstanceType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from instanceTypeAsString().
- Returns:
- The ML compute instance type.
- See Also:
ProductionVariantInstanceType
-
instanceTypeAsString
public final String instanceTypeAsString()
The ML compute instance type.
If the service returns an enum value that is not available in the current SDK version, instanceType will return ProductionVariantInstanceType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from instanceTypeAsString().
- Returns:
- The ML compute instance type.
- See Also:
ProductionVariantInstanceType
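The UNKNOWN_TO_SDK_VERSION convention above can be illustrated with a small stand-alone enum. InstanceKind and its values are hypothetical stand-ins for illustration, not SDK types; the point is only the pattern of mapping unrecognized wire values to a sentinel while keeping the raw string available:

```java
import java.util.Arrays;

// Illustrates the SDK's unknown-enum convention with a stand-in enum:
// values the client does not recognize map to UNKNOWN_TO_SDK_VERSION,
// while callers can still read the raw string (the *AsString() accessor).
public class UnknownEnumDemo {
    enum InstanceKind {
        ML_M5_LARGE("ml.m5.large"),
        UNKNOWN_TO_SDK_VERSION(null);

        private final String value;
        InstanceKind(String value) { this.value = value; }

        static InstanceKind fromValue(String raw) {
            return Arrays.stream(values())
                    .filter(k -> raw.equals(k.value))
                    .findFirst()
                    .orElse(UNKNOWN_TO_SDK_VERSION);
        }
    }

    public static void main(String[] args) {
        // A value this "SDK version" knows about:
        System.out.println(InstanceKind.fromValue("ml.m5.large"));
        // A newer value it does not: the enum collapses to the sentinel,
        // so the raw string is the only way to see what the service sent.
        System.out.println(InstanceKind.fromValue("ml.future.2xl"));
    }
}
```

This is why each enum-typed field comes in pairs here (instanceType()/instanceTypeAsString(), and so on): the string accessor is the forward-compatible one.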
-
initialVariantWeight
public final Float initialVariantWeight()
Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
- Returns:
- Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
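As a concrete instance of that ratio: with weights 2.0, 1.0, and 1.0, the first variant receives 2.0 / (2.0 + 1.0 + 1.0) = 50% of traffic. A small stand-alone sketch of the arithmetic (trafficShare is a hypothetical helper, not an SDK method):

```java
public class VariantWeights {
    // Traffic fraction for one variant: its weight divided by the
    // sum of all variant weights in the endpoint configuration.
    static double trafficShare(double weight, double[] allWeights) {
        double sum = 0;
        for (double w : allWeights) sum += w;
        return weight / sum;
    }

    public static void main(String[] args) {
        double[] weights = {2.0, 1.0, 1.0};            // three variants
        System.out.println(trafficShare(2.0, weights)); // prints 0.5
    }
}
```

Because only the ratio matters, weights need not sum to 1.0; leaving every variant at the default of 1.0 yields an even split.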
-
acceleratorType
public final ProductionVariantAcceleratorType acceleratorType()
The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
If the service returns an enum value that is not available in the current SDK version, acceleratorType will return ProductionVariantAcceleratorType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from acceleratorTypeAsString().
- Returns:
- The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
- See Also:
ProductionVariantAcceleratorType
-
acceleratorTypeAsString
public final String acceleratorTypeAsString()
The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
If the service returns an enum value that is not available in the current SDK version, acceleratorType will return ProductionVariantAcceleratorType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from acceleratorTypeAsString().
- Returns:
- The size of the Elastic Inference (EI) instance to use for the production variant. EI instances provide on-demand GPU computing for inference. For more information, see Using Elastic Inference in Amazon SageMaker.
- See Also:
ProductionVariantAcceleratorType
-
coreDumpConfig
public final ProductionVariantCoreDumpConfig coreDumpConfig()
Specifies configuration for a core dump from the model container when the process crashes.
- Returns:
- Specifies configuration for a core dump from the model container when the process crashes.
-
serverlessConfig
public final ProductionVariantServerlessConfig serverlessConfig()
The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
- Returns:
- The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
-
volumeSizeInGB
public final Integer volumeSizeInGB()
The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
- Returns:
- The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
-
modelDataDownloadTimeoutInSeconds
public final Integer modelDataDownloadTimeoutInSeconds()
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
- Returns:
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
-
containerStartupHealthCheckTimeoutInSeconds
public final Integer containerStartupHealthCheckTimeoutInSeconds()
The timeout value, in seconds, for your inference container to pass the health check by SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
- Returns:
- The timeout value, in seconds, for your inference container to pass the health check by SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
-
enableSSMAccess
public final Boolean enableSSMAccess()
You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
- Returns:
- You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
-
managedInstanceScaling
public final ProductionVariantManagedInstanceScaling managedInstanceScaling()
Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
- Returns:
- Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
-
routingConfig
public final ProductionVariantRoutingConfig routingConfig()
Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
- Returns:
- Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
-
inferenceAmiVersion
public final ProductionVariantInferenceAmiVersion inferenceAmiVersion()
Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.
By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.
The AMI version names, and their configurations, are the following:
- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535.54.03
  - CUDA driver version: 12.2
  - Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*
If the service returns an enum value that is not available in the current SDK version, inferenceAmiVersion will return ProductionVariantInferenceAmiVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from inferenceAmiVersionAsString().
- Returns:
- Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.
By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.
The AMI version names, and their configurations, are the following:
- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535.54.03
  - CUDA driver version: 12.2
  - Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*
- See Also:
ProductionVariantInferenceAmiVersion
-
inferenceAmiVersionAsString
public final String inferenceAmiVersionAsString()
Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.
By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.
The AMI version names, and their configurations, are the following:
- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535.54.03
  - CUDA driver version: 12.2
  - Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*
If the service returns an enum value that is not available in the current SDK version, inferenceAmiVersion will return ProductionVariantInferenceAmiVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from inferenceAmiVersionAsString().
- Returns:
- Specifies an option from a collection of preconfigured Amazon Machine Image (AMI) images. Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.
By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.
The AMI version names, and their configurations, are the following:
- al2-ami-sagemaker-inference-gpu-2
  - Accelerator: GPU
  - NVIDIA driver version: 535.54.03
  - CUDA driver version: 12.2
  - Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*
- See Also:
ProductionVariantInferenceAmiVersion
-
toBuilder
public ProductionVariant.Builder toBuilder()
- Specified by:
toBuilder in interface ToCopyableBuilder<ProductionVariant.Builder,ProductionVariant>
-
builder
public static ProductionVariant.Builder builder()
-
serializableBuilderClass
public static Class<? extends ProductionVariant.Builder> serializableBuilderClass()
-
equalsBySdkFields
public final boolean equalsBySdkFields(Object obj)
- Specified by:
equalsBySdkFields in interface SdkPojo
-
toString
public final String toString()
Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
-
-