Class ProductionVariant

    • Method Detail

      • variantName

        public final String variantName()

        The name of the production variant.

        Returns:
        The name of the production variant.
      • modelName

        public final String modelName()

        The name of the model that you want to host. This is the name that you specified when creating the model.

        Returns:
        The name of the model that you want to host. This is the name that you specified when creating the model.
      • initialInstanceCount

        public final Integer initialInstanceCount()

        Number of instances to launch initially.

        Returns:
        Number of instances to launch initially.
      • initialVariantWeight

        public final Float initialVariantWeight()

        Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.

        Returns:
        Determines initial traffic distribution among all of the models that you specify in the endpoint configuration. The traffic to a production variant is determined by the ratio of the VariantWeight to the sum of all VariantWeight values across all ProductionVariants. If unspecified, it defaults to 1.0.
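        As a sketch of how the weight ratio plays out, consider two variants. The variant and model names are hypothetical, and instanceType (part of the full ProductionVariant model, though not shown in this excerpt) is included for completeness:

        ProductionVariant variantA = ProductionVariant.builder()
                .variantName("variant-a")                                 // hypothetical
                .modelName("my-model-a")                                  // hypothetical
                .instanceType(ProductionVariantInstanceType.ML_M5_LARGE)
                .initialInstanceCount(1)
                .initialVariantWeight(2.0f)
                .build();

        ProductionVariant variantB = ProductionVariant.builder()
                .variantName("variant-b")
                .modelName("my-model-b")
                .instanceType(ProductionVariantInstanceType.ML_M5_LARGE)
                .initialInstanceCount(1)
                .initialVariantWeight(1.0f)
                .build();

        // Traffic share = VariantWeight / sum of all VariantWeight values:
        // variant-a receives 2.0 / (2.0 + 1.0) = ~67% of traffic,
        // variant-b receives 1.0 / (2.0 + 1.0) = ~33%.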
      • coreDumpConfig

        public final ProductionVariantCoreDumpConfig coreDumpConfig()

        Specifies configuration for a core dump from the model container when the process crashes.

        Returns:
        Specifies configuration for a core dump from the model container when the process crashes.
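        A sketch of attaching a core dump configuration, assuming the destinationS3Uri and kmsKeyId fields of ProductionVariantCoreDumpConfig; the bucket and key are hypothetical:

        ProductionVariantCoreDumpConfig coreDump = ProductionVariantCoreDumpConfig.builder()
                .destinationS3Uri("s3://amzn-s3-demo-bucket/core-dumps/")  // hypothetical bucket
                .kmsKeyId("alias/my-core-dump-key")                        // optional, hypothetical
                .build();

        ProductionVariant variant = ProductionVariant.builder()
                .variantName("variant-a")      // hypothetical
                .modelName("my-model")         // hypothetical
                .coreDumpConfig(coreDump)
                .build();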
      • serverlessConfig

        public final ProductionVariantServerlessConfig serverlessConfig()

        The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.

        Returns:
        The serverless configuration for an endpoint. Specifies a serverless endpoint configuration instead of an instance-based endpoint configuration.
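        A minimal sketch of a serverless variant, assuming the memorySizeInMB and maxConcurrency fields of ProductionVariantServerlessConfig; the values are illustrative:

        ProductionVariant serverlessVariant = ProductionVariant.builder()
                .variantName("serverless-variant")   // hypothetical
                .modelName("my-model")               // hypothetical
                .serverlessConfig(ProductionVariantServerlessConfig.builder()
                        .memorySizeInMB(2048)        // memory allocated to the endpoint, in MB
                        .maxConcurrency(5)           // maximum concurrent invocations
                        .build())
                .build();
        // Note: no initialInstanceCount or instance type is set; the serverless
        // configuration takes the place of the instance-based configuration.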
      • volumeSizeInGB

        public final Integer volumeSizeInGB()

        The size, in GB, of the ML storage volume attached to each inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.

        Returns:
        The size, in GB, of the ML storage volume attached to each inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
      • modelDataDownloadTimeoutInSeconds

        public final Integer modelDataDownloadTimeoutInSeconds()

        The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.

        Returns:
        The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this production variant.
      • enableSSMAccess

        public final Boolean enableSSMAccess()

        You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.

        Returns:
        You can use this parameter to turn on native Amazon Web Services Systems Manager (SSM) access for a production variant behind an endpoint. By default, SSM access is disabled for all production variants behind an endpoint. You can turn on or turn off SSM access for a production variant behind an existing endpoint by creating a new endpoint configuration and calling UpdateEndpoint.
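        The turn-on-via-new-endpoint-configuration flow described above might look like the following sketch; the endpoint and configuration names are hypothetical:

        SageMakerClient sm = SageMakerClient.create();

        ProductionVariant variant = ProductionVariant.builder()
                .variantName("variant-a")            // hypothetical
                .modelName("my-model")               // hypothetical
                .instanceType(ProductionVariantInstanceType.ML_M5_LARGE)
                .initialInstanceCount(1)
                .enableSSMAccess(true)               // turn on SSM access for this variant
                .build();

        // Create a new endpoint configuration that includes the variant...
        sm.createEndpointConfig(CreateEndpointConfigRequest.builder()
                .endpointConfigName("my-config-v2")  // hypothetical
                .productionVariants(variant)
                .build());

        // ...then point the existing endpoint at it.
        sm.updateEndpoint(UpdateEndpointRequest.builder()
                .endpointName("my-endpoint")         // hypothetical
                .endpointConfigName("my-config-v2")
                .build());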
      • managedInstanceScaling

        public final ProductionVariantManagedInstanceScaling managedInstanceScaling()

        Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.

        Returns:
        Settings that control the range in the number of instances that the endpoint provisions as it scales up or down to accommodate traffic.
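        A sketch, assuming the status, minInstanceCount, and maxInstanceCount fields of ProductionVariantManagedInstanceScaling:

        ProductionVariantManagedInstanceScaling scaling =
                ProductionVariantManagedInstanceScaling.builder()
                        .status(ManagedInstanceScalingStatus.ENABLED)
                        .minInstanceCount(1)   // floor when scaling in
                        .maxInstanceCount(4)   // ceiling when scaling out
                        .build();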
      • routingConfig

        public final ProductionVariantRoutingConfig routingConfig()

        Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.

        Returns:
        Settings that control how the endpoint routes incoming traffic to the instances that the endpoint hosts.
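        A sketch, assuming the routingStrategy field of ProductionVariantRoutingConfig and the RoutingStrategy enum:

        ProductionVariantRoutingConfig routing = ProductionVariantRoutingConfig.builder()
                .routingStrategy(RoutingStrategy.LEAST_OUTSTANDING_REQUESTS)
                .build();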
      • inferenceAmiVersion

        public final ProductionVariantInferenceAmiVersion inferenceAmiVersion()

        Specifies an option from a collection of preconfigured Amazon Machine Images (AMIs). Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

        By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

        The AMI version names, and their configurations, are the following:

        al2-ami-sagemaker-inference-gpu-2
        • Accelerator: GPU

        • NVIDIA driver version: 535.54.03

        • CUDA driver version: 12.2

        • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

        If the service returns an enum value that is not available in the current SDK version, inferenceAmiVersion will return ProductionVariantInferenceAmiVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from inferenceAmiVersionAsString().

        Returns:
        Specifies an option from a collection of preconfigured Amazon Machine Images (AMIs). Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

        By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

        The AMI version names, and their configurations, are the following:

        al2-ami-sagemaker-inference-gpu-2
        • Accelerator: GPU

        • NVIDIA driver version: 535.54.03

        • CUDA driver version: 12.2

        • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

        See Also:
        ProductionVariantInferenceAmiVersion
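        Handling a value that the current SDK release doesn't model might look like this sketch:

        ProductionVariantInferenceAmiVersion ami = variant.inferenceAmiVersion();
        if (ami == ProductionVariantInferenceAmiVersion.UNKNOWN_TO_SDK_VERSION) {
            // The service returned a version this SDK release doesn't know about;
            // fall back to the raw string value.
            System.out.println("Unrecognized AMI version: " + variant.inferenceAmiVersionAsString());
        } else {
            System.out.println("AMI version: " + ami);
        }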
      • inferenceAmiVersionAsString

        public final String inferenceAmiVersionAsString()

        Specifies an option from a collection of preconfigured Amazon Machine Images (AMIs). Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

        By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

        The AMI version names, and their configurations, are the following:

        al2-ami-sagemaker-inference-gpu-2
        • Accelerator: GPU

        • NVIDIA driver version: 535.54.03

        • CUDA driver version: 12.2

        • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

        If the service returns an enum value that is not available in the current SDK version, inferenceAmiVersion will return ProductionVariantInferenceAmiVersion.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from inferenceAmiVersionAsString().

        Returns:
        Specifies an option from a collection of preconfigured Amazon Machine Images (AMIs). Each image is configured by Amazon Web Services with a set of software and driver versions. Amazon Web Services optimizes these configurations for different machine learning workloads.

        By selecting an AMI version, you can ensure that your inference environment is compatible with specific software requirements, such as CUDA driver versions, Linux kernel versions, or Amazon Web Services Neuron driver versions.

        The AMI version names, and their configurations, are the following:

        al2-ami-sagemaker-inference-gpu-2
        • Accelerator: GPU

        • NVIDIA driver version: 535.54.03

        • CUDA driver version: 12.2

        • Supported instance types: ml.g4dn.*, ml.g5.*, ml.g6.*, ml.p3.*, ml.p4d.*, ml.p4de.*, ml.p5.*

        See Also:
        ProductionVariantInferenceAmiVersion
      • hashCode

        public final int hashCode()
        Overrides:
        hashCode in class Object
      • equals

        public final boolean equals(Object obj)
        Overrides:
        equals in class Object
      • toString

        public final String toString()
        Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
        Overrides:
        toString in class Object
      • getValueForField

        public final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
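
        A sketch of generic field access; the field name follows the service model's member naming, assumed here to be "VariantName":

        Optional<String> name = variant.getValueForField("VariantName", String.class);
        name.ifPresent(n -> System.out.println("Variant name: " + n));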