@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class CreateTransformJobRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
| Constructor and Description |
| --- |
| CreateTransformJobRequest() |
| Modifier and Type | Method and Description |
| --- | --- |
| CreateTransformJobRequest | addEnvironmentEntry(String key, String value): Add a single Environment entry. |
| CreateTransformJobRequest | clearEnvironmentEntries(): Removes all the entries added into Environment. |
| CreateTransformJobRequest | clone(): Creates a shallow clone of this object for all fields except the handler context. |
| boolean | equals(Object obj) |
| String | getBatchStrategy(): Specifies the number of records to include in a mini-batch for an HTTP inference request. |
| DataProcessing | getDataProcessing(): The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. |
| Map<String,String> | getEnvironment(): The environment variables to set in the Docker container. |
| ExperimentConfig | getExperimentConfig() |
| Integer | getMaxConcurrentTransforms(): The maximum number of parallel requests that can be sent to each instance in a transform job. |
| Integer | getMaxPayloadInMB(): The maximum allowed size of the payload, in MB. |
| ModelClientConfig | getModelClientConfig(): Configures the timeout and maximum number of retries for processing a transform job invocation. |
| String | getModelName(): The name of the model that you want to use for the transform job. |
| List<Tag> | getTags(): (Optional) An array of key-value pairs. |
| TransformInput | getTransformInput(): Describes the input source and the way the transform job consumes it. |
| String | getTransformJobName(): The name of the transform job. |
| TransformOutput | getTransformOutput(): Describes the results of the transform job. |
| TransformResources | getTransformResources(): Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
| int | hashCode() |
| void | setBatchStrategy(String batchStrategy): Specifies the number of records to include in a mini-batch for an HTTP inference request. |
| void | setDataProcessing(DataProcessing dataProcessing): The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. |
| void | setEnvironment(Map<String,String> environment): The environment variables to set in the Docker container. |
| void | setExperimentConfig(ExperimentConfig experimentConfig) |
| void | setMaxConcurrentTransforms(Integer maxConcurrentTransforms): The maximum number of parallel requests that can be sent to each instance in a transform job. |
| void | setMaxPayloadInMB(Integer maxPayloadInMB): The maximum allowed size of the payload, in MB. |
| void | setModelClientConfig(ModelClientConfig modelClientConfig): Configures the timeout and maximum number of retries for processing a transform job invocation. |
| void | setModelName(String modelName): The name of the model that you want to use for the transform job. |
| void | setTags(Collection<Tag> tags): (Optional) An array of key-value pairs. |
| void | setTransformInput(TransformInput transformInput): Describes the input source and the way the transform job consumes it. |
| void | setTransformJobName(String transformJobName): The name of the transform job. |
| void | setTransformOutput(TransformOutput transformOutput): Describes the results of the transform job. |
| void | setTransformResources(TransformResources transformResources): Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
| String | toString(): Returns a string representation of this object. |
| CreateTransformJobRequest | withBatchStrategy(BatchStrategy batchStrategy): Specifies the number of records to include in a mini-batch for an HTTP inference request. |
| CreateTransformJobRequest | withBatchStrategy(String batchStrategy): Specifies the number of records to include in a mini-batch for an HTTP inference request. |
| CreateTransformJobRequest | withDataProcessing(DataProcessing dataProcessing): The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. |
| CreateTransformJobRequest | withEnvironment(Map<String,String> environment): The environment variables to set in the Docker container. |
| CreateTransformJobRequest | withExperimentConfig(ExperimentConfig experimentConfig) |
| CreateTransformJobRequest | withMaxConcurrentTransforms(Integer maxConcurrentTransforms): The maximum number of parallel requests that can be sent to each instance in a transform job. |
| CreateTransformJobRequest | withMaxPayloadInMB(Integer maxPayloadInMB): The maximum allowed size of the payload, in MB. |
| CreateTransformJobRequest | withModelClientConfig(ModelClientConfig modelClientConfig): Configures the timeout and maximum number of retries for processing a transform job invocation. |
| CreateTransformJobRequest | withModelName(String modelName): The name of the model that you want to use for the transform job. |
| CreateTransformJobRequest | withTags(Collection<Tag> tags): (Optional) An array of key-value pairs. |
| CreateTransformJobRequest | withTags(Tag... tags): (Optional) An array of key-value pairs. |
| CreateTransformJobRequest | withTransformInput(TransformInput transformInput): Describes the input source and the way the transform job consumes it. |
| CreateTransformJobRequest | withTransformJobName(String transformJobName): The name of the transform job. |
| CreateTransformJobRequest | withTransformOutput(TransformOutput transformOutput): Describes the results of the transform job. |
| CreateTransformJobRequest | withTransformResources(TransformResources transformResources): Describes the resources, including ML instance types and ML instance count, to use for the transform job. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest: addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout
public void setTransformJobName(String transformJobName)

The name of the transform job. The name must be unique within an AWS Region in an AWS account.

Parameters:
transformJobName - The name of the transform job. The name must be unique within an AWS Region in an AWS account.

public String getTransformJobName()

The name of the transform job. The name must be unique within an AWS Region in an AWS account.

public CreateTransformJobRequest withTransformJobName(String transformJobName)

The name of the transform job. The name must be unique within an AWS Region in an AWS account.

Parameters:
transformJobName - The name of the transform job. The name must be unique within an AWS Region in an AWS account.
public void setModelName(String modelName)

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

Parameters:
modelName - The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

public String getModelName()

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

Returns:
The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

public CreateTransformJobRequest withModelName(String modelName)

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

Parameters:
modelName - The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.
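The with-style setters above chain, so a request can be assembled fluently. A minimal sketch, using hypothetical job and model names:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

// Minimal sketch: the job name must be unique per Region and account, and the
// model name must refer to an existing SageMaker model.
// "my-transform-job" and "my-existing-model" are hypothetical placeholders.
CreateTransformJobRequest request = new CreateTransformJobRequest()
        .withTransformJobName("my-transform-job")
        .withModelName("my-existing-model");
```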
public void setMaxConcurrentTransforms(Integer maxConcurrentTransforms)

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

Parameters:
maxConcurrentTransforms - The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

public Integer getMaxConcurrentTransforms()

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

Returns:
The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

public CreateTransformJobRequest withMaxConcurrentTransforms(Integer maxConcurrentTransforms)

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

Parameters:
maxConcurrentTransforms - The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.
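A short sketch of the two configurations described above: leaving concurrency to the container's execution-parameters endpoint versus pinning it explicitly. The value 4 is an arbitrary illustration, not a recommendation:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

// Option 1: leave MaxConcurrentTransforms unset (or set it to 0) so that
// SageMaker queries the container's optional execution-parameters endpoint.
CreateTransformJobRequest deferToContainer = new CreateTransformJobRequest()
        .withMaxConcurrentTransforms(0);

// Option 2: pin an explicit number of parallel requests per instance.
// The value 4 here is purely illustrative.
CreateTransformJobRequest explicitConcurrency = new CreateTransformJobRequest()
        .withMaxConcurrentTransforms(4);
```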
public void setModelClientConfig(ModelClientConfig modelClientConfig)

Configures the timeout and maximum number of retries for processing a transform job invocation.

Parameters:
modelClientConfig - Configures the timeout and maximum number of retries for processing a transform job invocation.

public ModelClientConfig getModelClientConfig()

Configures the timeout and maximum number of retries for processing a transform job invocation.

public CreateTransformJobRequest withModelClientConfig(ModelClientConfig modelClientConfig)

Configures the timeout and maximum number of retries for processing a transform job invocation.

Parameters:
modelClientConfig - Configures the timeout and maximum number of retries for processing a transform job invocation.
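A brief sketch of supplying an invocation timeout and retry budget, assuming ModelClientConfig exposes the InvocationsTimeoutInSeconds and InvocationsMaxRetries fields; the values shown are illustrative, not recommendations:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.ModelClientConfig;

// Sketch assuming ModelClientConfig carries InvocationsTimeoutInSeconds and
// InvocationsMaxRetries; 600 seconds and 1 retry are illustrative values.
CreateTransformJobRequest request = new CreateTransformJobRequest()
        .withModelClientConfig(new ModelClientConfig()
                .withInvocationsTimeoutInSeconds(600)
                .withInvocationsMaxRetries(1));
```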
public void setMaxPayloadInMB(Integer maxPayloadInMB)

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

Parameters:
maxPayloadInMB - The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

public Integer getMaxPayloadInMB()

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

Returns:
The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

public CreateTransformJobRequest withMaxPayloadInMB(Integer maxPayloadInMB)

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

Parameters:
maxPayloadInMB - The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.
For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.
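To make the sizing guidance concrete, the sketch below estimates a per-record size from hypothetical dataset numbers, rounds up to a payload limit, and also shows the chunked-encoding case where the limit is disabled with 0:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

// Hypothetical numbers: a 5,000 MB dataset with 1,000,000 records averages
// about 0.005 MB per record, so the 6 MB default comfortably fits many records.
double datasetSizeInMB = 5_000.0;
long recordCount = 1_000_000L;
double approxRecordSizeInMB = datasetSizeInMB / recordCount;

// Choose a payload slightly larger than a single record, as recommended,
// by rounding the estimate up to at least 1 MB.
int maxPayloadInMB = Math.max(1, (int) Math.ceil(approxRecordSizeInMB));
CreateTransformJobRequest sized = new CreateTransformJobRequest()
        .withMaxPayloadInMB(maxPayloadInMB);

// For arbitrarily large payloads sent with HTTP chunked encoding
// (not supported by built-in algorithms), disable the limit with 0.
CreateTransformJobRequest chunked = new CreateTransformJobRequest()
        .withMaxPayloadInMB(0);
```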
public void setBatchStrategy(String batchStrategy)

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

Parameters:
batchStrategy - Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

See Also:
BatchStrategy

public String getBatchStrategy()

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

Returns:
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

See Also:
BatchStrategy

public CreateTransformJobRequest withBatchStrategy(String batchStrategy)

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

Parameters:
batchStrategy - Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

See Also:
BatchStrategy

public CreateTransformJobRequest withBatchStrategy(BatchStrategy batchStrategy)

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

Parameters:
batchStrategy - Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.
To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.
To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

See Also:
BatchStrategy
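For example, to pack as many Line-split records per request as MaxPayloadInMB allows, pair MultiRecord with a Line split type on the input. This sketch shows only the two related settings; the TransformInput would also need a data source (see the TransformInput example later on this page):

```java
import com.amazonaws.services.sagemaker.model.BatchStrategy;
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.TransformInput;

// MultiRecord packs as many Line-split records as fit in MaxPayloadInMB.
// The TransformInput here is intentionally partial.
CreateTransformJobRequest request = new CreateTransformJobRequest()
        .withBatchStrategy(BatchStrategy.MultiRecord)
        .withTransformInput(new TransformInput().withSplitType("Line"));
```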
public Map<String,String> getEnvironment()

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

public void setEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

Parameters:
environment - The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

public CreateTransformJobRequest withEnvironment(Map<String,String> environment)

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

Parameters:
environment - The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

public CreateTransformJobRequest addEnvironmentEntry(String key, String value)

Add a single Environment entry.

public CreateTransformJobRequest clearEnvironmentEntries()

Removes all the entries added into Environment.
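A short sketch of the two equivalent ways to populate the environment map, keeping in mind the 16-entry limit; the variable names and values are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;

// Build the full map up front (at most 16 entries)...
Map<String, String> env = new HashMap<>();
env.put("LOG_LEVEL", "info"); // hypothetical variable
CreateTransformJobRequest viaMap = new CreateTransformJobRequest()
        .withEnvironment(env);

// ...or accumulate entries one at a time with the chainable helper.
CreateTransformJobRequest viaEntries = new CreateTransformJobRequest()
        .addEnvironmentEntry("LOG_LEVEL", "info")
        .addEnvironmentEntry("BATCH_MODE", "strict"); // hypothetical
```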
public void setTransformInput(TransformInput transformInput)

Describes the input source and the way the transform job consumes it.

Parameters:
transformInput - Describes the input source and the way the transform job consumes it.

public TransformInput getTransformInput()

Describes the input source and the way the transform job consumes it.

public CreateTransformJobRequest withTransformInput(TransformInput transformInput)

Describes the input source and the way the transform job consumes it.

Parameters:
transformInput - Describes the input source and the way the transform job consumes it.

public void setTransformOutput(TransformOutput transformOutput)

Describes the results of the transform job.

Parameters:
transformOutput - Describes the results of the transform job.

public TransformOutput getTransformOutput()

Describes the results of the transform job.

public CreateTransformJobRequest withTransformOutput(TransformOutput transformOutput)

Describes the results of the transform job.

Parameters:
transformOutput - Describes the results of the transform job.

public void setTransformResources(TransformResources transformResources)

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

Parameters:
transformResources - Describes the resources, including ML instance types and ML instance count, to use for the transform job.

public TransformResources getTransformResources()

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

public CreateTransformJobRequest withTransformResources(TransformResources transformResources)

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

Parameters:
transformResources - Describes the resources, including ML instance types and ML instance count, to use for the transform job.
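Putting the three structures together, a sketch that wires an S3 input prefix, an S3 output path, and a small instance fleet; the bucket paths, content type, and instance sizing are placeholders:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.TransformDataSource;
import com.amazonaws.services.sagemaker.model.TransformInput;
import com.amazonaws.services.sagemaker.model.TransformOutput;
import com.amazonaws.services.sagemaker.model.TransformResources;
import com.amazonaws.services.sagemaker.model.TransformS3DataSource;

// Placeholder bucket/prefix values; instance type and count are illustrative.
CreateTransformJobRequest request = new CreateTransformJobRequest()
        .withTransformInput(new TransformInput()
                .withDataSource(new TransformDataSource()
                        .withS3DataSource(new TransformS3DataSource()
                                .withS3DataType("S3Prefix")
                                .withS3Uri("s3://my-bucket/input/")))
                .withContentType("text/csv")
                .withSplitType("Line"))
        .withTransformOutput(new TransformOutput()
                .withS3OutputPath("s3://my-bucket/output/"))
        .withTransformResources(new TransformResources()
                .withInstanceType("ml.m5.xlarge")
                .withInstanceCount(1));
```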
public void setDataProcessing(DataProcessing dataProcessing)

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

Parameters:
dataProcessing - The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

public DataProcessing getDataProcessing()

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

public CreateTransformJobRequest withDataProcessing(DataProcessing dataProcessing)

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

Parameters:
dataProcessing - The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
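As an illustration of the input and output filters, a sketch that sends only a features field for inference, joins the predictions back onto the input, and keeps an ID plus the model output; the JSONPath expressions are hypothetical and depend on your data layout:

```java
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.DataProcessing;

// Hypothetical JSONPath filters: send only $.features for inference, join the
// predictions back onto the input records, and keep the id plus the output.
CreateTransformJobRequest request = new CreateTransformJobRequest()
        .withDataProcessing(new DataProcessing()
                .withInputFilter("$.features")
                .withJoinSource("Input")
                .withOutputFilter("$['id','SageMakerOutput']"));
```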
public List<Tag> getTags()

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public void setTags(Collection<Tag> tags)

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

Parameters:
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public CreateTransformJobRequest withTags(Tag... tags)

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

NOTE: This method appends the values to the existing list (if any). Use setTags(java.util.Collection) or withTags(java.util.Collection) if you want to override the existing values.

Parameters:
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

public CreateTransformJobRequest withTags(Collection<Tag> tags)

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

Parameters:
tags - (Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
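A short sketch of the append-versus-replace distinction noted above; the tag keys and values are placeholders:

```java
import java.util.Arrays;
import com.amazonaws.services.sagemaker.model.CreateTransformJobRequest;
import com.amazonaws.services.sagemaker.model.Tag;

CreateTransformJobRequest request = new CreateTransformJobRequest()
        // The varargs form appends to any tags already on the request.
        .withTags(new Tag().withKey("project").withValue("demo"));

// The Collection form (like setTags) replaces the existing list instead.
request.withTags(Arrays.asList(
        new Tag().withKey("cost-center").withValue("1234")));
```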
public void setExperimentConfig(ExperimentConfig experimentConfig)

Parameters:
experimentConfig -

public ExperimentConfig getExperimentConfig()

public CreateTransformJobRequest withExperimentConfig(ExperimentConfig experimentConfig)

Parameters:
experimentConfig -

public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object

See Also:
Object.toString()

public CreateTransformJobRequest clone()

Description copied from class: AmazonWebServiceRequest
Creates a shallow clone of this object for all fields except the handler context.

Overrides:
clone in class AmazonWebServiceRequest

See Also:
Object.clone()
Copyright © 2013 Amazon Web Services, Inc. All Rights Reserved.