@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class CreateTransformJobRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
| Constructor and Description |
|---|
| CreateTransformJobRequest() |
| Modifier and Type | Method and Description |
|---|---|
| CreateTransformJobRequest | addEnvironmentEntry(String key, String value) |
| CreateTransformJobRequest | clearEnvironmentEntries() Removes all the entries added into Environment. |
| CreateTransformJobRequest | clone() Creates a shallow clone of this object for all fields except the handler context. |
| boolean | equals(Object obj) |
| String | getBatchStrategy() Determines the number of records to include in a mini-batch. |
| Map<String,String> | getEnvironment() The environment variables to set in the Docker container. |
| Integer | getMaxConcurrentTransforms() The maximum number of parallel requests that can be sent to an algorithm container on an instance. |
| Integer | getMaxPayloadInMB() The maximum payload size allowed, in MB. |
| String | getModelName() The name of the model that you want to use for the transform job. |
| List<Tag> | getTags() (Optional) An array of key-value pairs. |
| TransformInput | getTransformInput() Describes the input source and the way the transform job consumes it. |
| String | getTransformJobName() The name of the transform job. |
| TransformOutput | getTransformOutput() Describes the results of the transform job. |
| TransformResources | getTransformResources() Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
| int | hashCode() |
| void | setBatchStrategy(String batchStrategy) Determines the number of records to include in a mini-batch. |
| void | setEnvironment(Map<String,String> environment) The environment variables to set in the Docker container. |
| void | setMaxConcurrentTransforms(Integer maxConcurrentTransforms) The maximum number of parallel requests that can be sent to an algorithm container on an instance. |
| void | setMaxPayloadInMB(Integer maxPayloadInMB) The maximum payload size allowed, in MB. |
| void | setModelName(String modelName) The name of the model that you want to use for the transform job. |
| void | setTags(Collection<Tag> tags) (Optional) An array of key-value pairs. |
| void | setTransformInput(TransformInput transformInput) Describes the input source and the way the transform job consumes it. |
| void | setTransformJobName(String transformJobName) The name of the transform job. |
| void | setTransformOutput(TransformOutput transformOutput) Describes the results of the transform job. |
| void | setTransformResources(TransformResources transformResources) Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
| String | toString() Returns a string representation of this object. |
| CreateTransformJobRequest | withBatchStrategy(BatchStrategy batchStrategy) Determines the number of records to include in a mini-batch. |
| CreateTransformJobRequest | withBatchStrategy(String batchStrategy) Determines the number of records to include in a mini-batch. |
| CreateTransformJobRequest | withEnvironment(Map<String,String> environment) The environment variables to set in the Docker container. |
| CreateTransformJobRequest | withMaxConcurrentTransforms(Integer maxConcurrentTransforms) The maximum number of parallel requests that can be sent to an algorithm container on an instance. |
| CreateTransformJobRequest | withMaxPayloadInMB(Integer maxPayloadInMB) The maximum payload size allowed, in MB. |
| CreateTransformJobRequest | withModelName(String modelName) The name of the model that you want to use for the transform job. |
| CreateTransformJobRequest | withTags(Collection<Tag> tags) (Optional) An array of key-value pairs. |
| CreateTransformJobRequest | withTags(Tag... tags) (Optional) An array of key-value pairs. |
| CreateTransformJobRequest | withTransformInput(TransformInput transformInput) Describes the input source and the way the transform job consumes it. |
| CreateTransformJobRequest | withTransformJobName(String transformJobName) The name of the transform job. |
| CreateTransformJobRequest | withTransformOutput(TransformOutput transformOutput) Describes the results of the transform job. |
| CreateTransformJobRequest | withTransformResources(TransformResources transformResources) Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest: addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout

public void setTransformJobName(String transformJobName)
The name of the transform job. The name must be unique within an AWS Region in an AWS account.
transformJobName - The name of the transform job. The name must be unique within an AWS Region in an AWS account.

public String getTransformJobName()
The name of the transform job. The name must be unique within an AWS Region in an AWS account.
public CreateTransformJobRequest withTransformJobName(String transformJobName)
The name of the transform job. The name must be unique within an AWS Region in an AWS account.
transformJobName - The name of the transform job. The name must be unique within an AWS Region in an AWS account.

public void setModelName(String modelName)
 The name of the model that you want to use for the transform job. ModelName must be the name of an
 existing Amazon SageMaker model within an AWS Region in an AWS account.
 
modelName - The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

public String getModelName()
 The name of the model that you want to use for the transform job. ModelName must be the name of an
 existing Amazon SageMaker model within an AWS Region in an AWS account.
 
Returns: The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

public CreateTransformJobRequest withModelName(String modelName)
 The name of the model that you want to use for the transform job. ModelName must be the name of an
 existing Amazon SageMaker model within an AWS Region in an AWS account.
 
modelName - The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.
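
As the descriptions above note, TransformJobName must be unique within an AWS Region and account, and ModelName must reference an existing Amazon SageMaker model. A minimal sketch of setting both through the fluent with* methods; both names are placeholders:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

public class TransformJobNamingSketch {
    public static void main(String[] args) {
        // Placeholder names: the job name must be unique within the Region and
        // account, and the model must already exist in Amazon SageMaker.
        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("my-transform-job-2018-07-01")
                .withModelName("my-existing-model");

        System.out.println(request.getTransformJobName() + " -> " + request.getModelName());
    }
}
```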
public void setMaxConcurrentTransforms(Integer maxConcurrentTransforms)
The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good for algorithms that implement multiple workers on larger instances. The default value is 1. To allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set the value in the API.
 
maxConcurrentTransforms - The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good for algorithms that implement multiple workers on larger instances. The default value is 1. To allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set the value in the API.

public Integer getMaxConcurrentTransforms()
 The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good
for algorithms that implement multiple workers on larger instances. The default value is 1. To
 allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set
 the value in the API.
 
Returns: The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good for algorithms that implement multiple workers on larger instances. The default value is 1. To allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set the value in the API.

public CreateTransformJobRequest withMaxConcurrentTransforms(Integer maxConcurrentTransforms)
 The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good
for algorithms that implement multiple workers on larger instances. The default value is 1. To
 allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set
 the value in the API.
 
maxConcurrentTransforms - The maximum number of parallel requests that can be sent to an algorithm container on an instance. This is good for algorithms that implement multiple workers on larger instances. The default value is 1. To allow Amazon SageMaker to determine the appropriate number for MaxConcurrentTransforms, do not set the value in the API.
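
A short sketch of the two ways to treat MaxConcurrentTransforms described above: leave it unset so Amazon SageMaker determines the value, or set it explicitly when the container runs multiple workers. The names and the value 4 are illustrative only:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

public class ConcurrencySketch {
    public static void main(String[] args) {
        // Option 1: leave MaxConcurrentTransforms unset so Amazon SageMaker
        // determines the appropriate number for the container.
        CreateTransformJobRequest autoTuned = new CreateTransformJobRequest()
                .withTransformJobName("transform-auto-concurrency")
                .withModelName("my-existing-model");

        // Option 2: the container is known to run multiple workers, so allow
        // several parallel requests per instance (the value 4 is illustrative).
        CreateTransformJobRequest explicit = new CreateTransformJobRequest()
                .withTransformJobName("transform-explicit-concurrency")
                .withModelName("my-existing-model")
                .withMaxConcurrentTransforms(4);

        System.out.println(autoTuned.getMaxConcurrentTransforms());  // null (unset)
        System.out.println(explicit.getMaxConcurrentTransforms());   // 4
    }
}
```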
public void setMaxPayloadInMB(Integer maxPayloadInMB)
The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.
 
 For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the
 value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in
 algorithms do not support this feature.
 
maxPayloadInMB - The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support this feature.
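
The sizing guidance above (approximate record size times records per mini-batch, plus a little headroom) can be turned into simple arithmetic; a hedged sketch with made-up dataset figures:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

public class PayloadSizingSketch {
    public static void main(String[] args) {
        // Made-up dataset figures, purely for illustration.
        long datasetSizeBytes = 2_000_000_000L;  // ~2 GB of input data
        long recordCount = 1_000_000L;           // 1 million records
        int recordsPerMiniBatch = 100;

        // Approximate record size, then the mini-batch size in MB.
        double approxRecordBytes = (double) datasetSizeBytes / recordCount;           // ~2,000 bytes
        double miniBatchMB = approxRecordBytes * recordsPerMiniBatch / (1024 * 1024); // ~0.19 MB

        // Round up and add a little headroom so the records fit within the payload.
        int maxPayloadInMB = (int) Math.ceil(miniBatchMB) + 1;

        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("transform-sized-payload")
                .withModelName("my-existing-model")
                .withMaxPayloadInMB(maxPayloadInMB);

        System.out.println("MaxPayloadInMB = " + request.getMaxPayloadInMB());
    }
}
```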
public Integer getMaxPayloadInMB()
The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.
 
 For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the
 value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in
 algorithms do not support this feature.
 
Returns: The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support this feature.
public CreateTransformJobRequest withMaxPayloadInMB(Integer maxPayloadInMB)
The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.
 
 For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the
 value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in
 algorithms do not support this feature.
 
maxPayloadInMB - The maximum payload size allowed, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. You can approximate the size of a record by dividing the size of your dataset by the number of records. Then multiply this value by the number of records you want in a mini-batch. We recommend entering a slightly larger value than this to ensure the records fit within the maximum payload size. The default value is 6 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature only works in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support this feature.
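
For the arbitrarily large, HTTP chunked-encoding case described above, the value is set to 0; as noted, this works only for algorithms that support it, and SageMaker built-in algorithms currently do not. A minimal sketch with placeholder names:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

public class ChunkedPayloadSketch {
    public static void main(String[] args) {
        // 0 means the payload may be arbitrarily large and is transmitted with
        // HTTP chunked encoding; the serving container must support this.
        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("transform-chunked-payload")
                .withModelName("my-streaming-capable-model")  // placeholder name
                .withMaxPayloadInMB(0);

        System.out.println(request);
    }
}
```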
public void setBatchStrategy(String batchStrategy)
Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
 
batchStrategy - Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
See Also: BatchStrategy

public String getBatchStrategy()
Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
 
Returns: Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
See Also: BatchStrategy

public CreateTransformJobRequest withBatchStrategy(String batchStrategy)
Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
 
batchStrategy - Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
See Also: BatchStrategy

public CreateTransformJobRequest withBatchStrategy(BatchStrategy batchStrategy)
Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
 
batchStrategy - Determines the number of records to include in a mini-batch. If you want to include only one record in a mini-batch, specify SingleRecord. If you want mini-batches to contain a maximum of the number of records specified in the MaxPayloadInMB parameter, specify MultiRecord.

If you set SplitType to Line and BatchStrategy to MultiRecord, a batch transform automatically splits your input data into the specified payload size. There's no need to split the dataset into smaller files or to use larger payload sizes unless the records in your dataset are very large.
See Also: BatchStrategy
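
A sketch of the two overloads above: the typed overload takes the BatchStrategy enum from the same model package, the String overload takes the raw value. The TransformInput wiring assumes the companion classes (TransformDataSource, TransformS3DataSource) expose the usual with* builders, and the S3 URI is a placeholder:

```java
import com.amazonaws.services.sagemaker.model.BatchStrategy;
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.TransformDataSource;
import com.amazonaws.services.sagemaker.model.TransformInput;
import com.amazonaws.services.sagemaker.model.TransformS3DataSource;

public class BatchStrategySketch {
    public static void main(String[] args) {
        // Line-delimited CSV input; with MultiRecord, batch transform packs as
        // many records as fit into MaxPayloadInMB per mini-batch.
        TransformInput input = new TransformInput()
                .withDataSource(new TransformDataSource()
                        .withS3DataSource(new TransformS3DataSource()
                                .withS3DataType("S3Prefix")
                                .withS3Uri("s3://my-bucket/batch-input/")))  // placeholder URI
                .withContentType("text/csv")
                .withSplitType("Line");

        CreateTransformJobRequest multiRecord = new CreateTransformJobRequest()
                .withTransformJobName("transform-multirecord")
                .withModelName("my-existing-model")
                .withTransformInput(input)
                .withBatchStrategy(BatchStrategy.MultiRecord);   // typed overload

        CreateTransformJobRequest singleRecord = new CreateTransformJobRequest()
                .withTransformJobName("transform-singlerecord")
                .withModelName("my-existing-model")
                .withTransformInput(input)
                .withBatchStrategy("SingleRecord");              // String overload

        System.out.println(multiRecord.getBatchStrategy());   // MultiRecord
        System.out.println(singleRecord.getBatchStrategy());  // SingleRecord
    }
}
```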
public Map<String,String> getEnvironment()
The environment variables to set in the Docker container. We support up to 16 key and value entries in the map.
public void setEnvironment(Map<String,String> environment)
The environment variables to set in the Docker container. We support up to 16 key and value entries in the map.
environment - The environment variables to set in the Docker container. We support up to 16 key and value entries in the map.

public CreateTransformJobRequest withEnvironment(Map<String,String> environment)
The environment variables to set in the Docker container. We support up to 16 key and value entries in the map.
environment - The environment variables to set in the Docker container. We support up to 16 key and value entries in the map.

public CreateTransformJobRequest addEnvironmentEntry(String key, String value)
public CreateTransformJobRequest clearEnvironmentEntries()
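
A minimal sketch of the environment-related methods above (withEnvironment, addEnvironmentEntry, clearEnvironmentEntries); the variable names are placeholders and the 16-entry limit from the description still applies:

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

public class EnvironmentSketch {
    public static void main(String[] args) {
        // Up to 16 key and value entries are supported in the map.
        Map<String, String> env = new HashMap<>();
        env.put("LOG_LEVEL", "INFO");  // placeholder environment variable

        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("transform-with-env")
                .withModelName("my-existing-model")
                .withEnvironment(env)
                .addEnvironmentEntry("WORKERS", "4");  // adds a single entry

        System.out.println(request.getEnvironment());  // contains LOG_LEVEL and WORKERS

        // Removes all the entries added into Environment.
        request.clearEnvironmentEntries();
        System.out.println(request.getEnvironment());
    }
}
```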
public void setTransformInput(TransformInput transformInput)
Describes the input source and the way the transform job consumes it.
transformInput - Describes the input source and the way the transform job consumes it.

public TransformInput getTransformInput()
Describes the input source and the way the transform job consumes it.
public CreateTransformJobRequest withTransformInput(TransformInput transformInput)
Describes the input source and the way the transform job consumes it.
transformInput - Describes the input source and the way the transform job consumes it.

public void setTransformOutput(TransformOutput transformOutput)
Describes the results of the transform job.
transformOutput - Describes the results of the transform job.

public TransformOutput getTransformOutput()
Describes the results of the transform job.
public CreateTransformJobRequest withTransformOutput(TransformOutput transformOutput)
Describes the results of the transform job.
transformOutput - Describes the results of the transform job.

public void setTransformResources(TransformResources transformResources)
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
transformResources - Describes the resources, including ML instance types and ML instance count, to use for the transform job.

public TransformResources getTransformResources()
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
public CreateTransformJobRequest withTransformResources(TransformResources transformResources)
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
transformResources - Describes the resources, including ML instance types and ML instance count, to use for the transform job.
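
The three nested structures above (TransformInput, TransformOutput, TransformResources) are built separately and attached to the request. A hedged sketch that assumes those companion classes expose the usual with* builders; the S3 paths and instance sizing are placeholders:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.TransformDataSource;
import com.amazonaws.services.sagemaker.model.TransformInput;
import com.amazonaws.services.sagemaker.model.TransformOutput;
import com.amazonaws.services.sagemaker.model.TransformResources;
import com.amazonaws.services.sagemaker.model.TransformS3DataSource;

public class TransformWiringSketch {
    public static void main(String[] args) {
        // Where the input lives and how the job should consume it.
        TransformInput input = new TransformInput()
                .withDataSource(new TransformDataSource()
                        .withS3DataSource(new TransformS3DataSource()
                                .withS3DataType("S3Prefix")
                                .withS3Uri("s3://my-bucket/transform-input/")))
                .withContentType("text/csv");

        // Where the results should be written.
        TransformOutput output = new TransformOutput()
                .withS3OutputPath("s3://my-bucket/transform-output/");

        // ML instance type and count used to run the job (illustrative values).
        TransformResources resources = new TransformResources()
                .withInstanceType("ml.m4.xlarge")
                .withInstanceCount(1);

        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("transform-wiring-example")
                .withModelName("my-existing-model")
                .withTransformInput(input)
                .withTransformOutput(output)
                .withTransformResources(resources);

        System.out.println(request);
    }
}
```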
public List<Tag> getTags()
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
public void setTags(Collection<Tag> tags)
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public CreateTransformJobRequest withTags(Tag... tags)
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
 NOTE: This method appends the values to the existing list (if any). Use
 setTags(java.util.Collection) or withTags(java.util.Collection) if you want to override the
 existing values.
 
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public CreateTransformJobRequest withTags(Collection<Tag> tags)
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public String toString()
Returns a string representation of this object.
Overrides: toString in class Object
See Also: Object.toString()

public CreateTransformJobRequest clone()
Creates a shallow clone of this object for all fields except the handler context.
Overrides: clone in class AmazonWebServiceRequest
See Also: Object.clone()
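
Putting the pieces together, a hedged end-to-end sketch that builds a full request and submits it. AmazonSageMakerClientBuilder, the Tag class, and the nested model builders are assumed from the same SDK; every name, S3 path, and instance size is a placeholder:

```java
import com.amazonaws.services.sagemaker.AmazonSageMaker;
import com.amazonaws.services.sagemaker.AmazonSageMakerClientBuilder;
import com.amazonaws.services.sagemaker.model.BatchStrategy;
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.CreateTransformJobResult;
import com.amazonaws.services.sagemaker.model.Tag;
import com.amazonaws.services.sagemaker.model.TransformDataSource;
import com.amazonaws.services.sagemaker.model.TransformInput;
import com.amazonaws.services.sagemaker.model.TransformOutput;
import com.amazonaws.services.sagemaker.model.TransformResources;
import com.amazonaws.services.sagemaker.model.TransformS3DataSource;

public class CreateTransformJobExample {
    public static void main(String[] args) {
        AmazonSageMaker sageMaker = AmazonSageMakerClientBuilder.defaultClient();

        CreateTransformJobRequest request = new CreateTransformJobRequest()
                .withTransformJobName("my-transform-job-2018-07-01")  // unique per Region/account
                .withModelName("my-existing-model")                   // must already exist
                .withBatchStrategy(BatchStrategy.MultiRecord)
                .withMaxPayloadInMB(6)                                // the documented default
                .withTransformInput(new TransformInput()
                        .withDataSource(new TransformDataSource()
                                .withS3DataSource(new TransformS3DataSource()
                                        .withS3DataType("S3Prefix")
                                        .withS3Uri("s3://my-bucket/transform-input/")))
                        .withContentType("text/csv")
                        .withSplitType("Line"))
                .withTransformOutput(new TransformOutput()
                        .withS3OutputPath("s3://my-bucket/transform-output/"))
                .withTransformResources(new TransformResources()
                        .withInstanceType("ml.m4.xlarge")
                        .withInstanceCount(1))
                .withTags(new Tag().withKey("project").withValue("demo"));

        CreateTransformJobResult result = sageMaker.createTransformJob(request);
        System.out.println("Started transform job: " + result.getTransformJobArn());
    }
}
```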