@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class TransformInput extends Object implements Serializable, Cloneable, StructuredPojo
Describes the input source of a transform job and the way the transform job consumes it.
| Constructor and Description |
|---|
| TransformInput() |
| Modifier and Type | Method and Description |
|---|---|
| TransformInput | clone() |
| boolean | equals(Object obj) |
| String | getCompressionType() If your transform data is compressed, specify the compression type. |
| String | getContentType() The multipurpose internet mail extension (MIME) type of the data. |
| TransformDataSource | getDataSource() Describes the location of the channel data: the S3 location of the input data that the model can consume. |
| String | getSplitType() The method to use to split the transform job's data into smaller batches. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) Marshalls this structured data using the given ProtocolMarshaller. |
| void | setCompressionType(String compressionType) If your transform data is compressed, specify the compression type. |
| void | setContentType(String contentType) The multipurpose internet mail extension (MIME) type of the data. |
| void | setDataSource(TransformDataSource dataSource) Describes the location of the channel data: the S3 location of the input data that the model can consume. |
| void | setSplitType(String splitType) The method to use to split the transform job's data into smaller batches. |
| String | toString() Returns a string representation of this object. |
| TransformInput | withCompressionType(CompressionType compressionType) If your transform data is compressed, specify the compression type. |
| TransformInput | withCompressionType(String compressionType) If your transform data is compressed, specify the compression type. |
| TransformInput | withContentType(String contentType) The multipurpose internet mail extension (MIME) type of the data. |
| TransformInput | withDataSource(TransformDataSource dataSource) Describes the location of the channel data: the S3 location of the input data that the model can consume. |
| TransformInput | withSplitType(SplitType splitType) The method to use to split the transform job's data into smaller batches. |
| TransformInput | withSplitType(String splitType) The method to use to split the transform job's data into smaller batches. |
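A short usage sketch may help before the method details. The snippet below builds a TransformInput with the fluent with* methods; the bucket name, S3 prefix, and content type are illustrative placeholders, and the example assumes the aws-java-sdk-sagemaker artifact (package com.amazonaws.services.sagemaker.model) is on the classpath.

```java
import com.amazonaws.services.sagemaker.model.CompressionType;
import com.amazonaws.services.sagemaker.model.S3DataType;
import com.amazonaws.services.sagemaker.model.SplitType;
import com.amazonaws.services.sagemaker.model.TransformDataSource;
import com.amazonaws.services.sagemaker.model.TransformInput;
import com.amazonaws.services.sagemaker.model.TransformS3DataSource;

public class TransformInputExample {

    // Builds a TransformInput pointing at a hypothetical S3 prefix.
    static TransformInput buildInput() {
        TransformDataSource dataSource = new TransformDataSource()
                .withS3DataSource(new TransformS3DataSource()
                        .withS3DataType(S3DataType.S3Prefix)
                        .withS3Uri("s3://my-bucket/transform-input/")); // placeholder URI

        return new TransformInput()
                .withDataSource(dataSource)
                .withContentType("text/csv")                // MIME type sent with each HTTP call
                .withCompressionType(CompressionType.None)  // input is not compressed
                .withSplitType(SplitType.Line);             // split records on newline boundaries
    }

    public static void main(String[] args) {
        // toString() renders the configured fields
        System.out.println(buildInput());
    }
}
```

The resulting object would typically be passed to a CreateTransformJobRequest; the enum overloads (CompressionType, SplitType) store the same string values the String overloads accept.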
public void setDataSource(TransformDataSource dataSource)

Describes the location of the channel data: the S3 location of the input data that the model can consume.

dataSource - The location of the channel data: the S3 location of the input data that the model can consume.

public TransformDataSource getDataSource()

Describes the location of the channel data: the S3 location of the input data that the model can consume.

public TransformInput withDataSource(TransformDataSource dataSource)

Describes the location of the channel data: the S3 location of the input data that the model can consume.

dataSource - The location of the channel data: the S3 location of the input data that the model can consume.

public void setContentType(String contentType)
The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

contentType - The MIME type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

public String getContentType()

The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

public TransformInput withContentType(String contentType)

The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

contentType - The MIME type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

public void setCompressionType(String compressionType)
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
See Also: CompressionType

public String getCompressionType()

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
See Also: CompressionType

public TransformInput withCompressionType(String compressionType)

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
See Also: CompressionType

public TransformInput withCompressionType(CompressionType compressionType)

If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.

compressionType - If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
See Also: CompressionType

public void setSplitType(String splitType)
The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.

Amazon SageMaker sends the maximum number of records per batch in each request, up to the MaxPayloadInMB limit. For information about the RecordIO format, see Data Format.

splitType - The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.
See Also: SplitType

public String getSplitType()

The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.

Amazon SageMaker sends the maximum number of records per batch in each request, up to the MaxPayloadInMB limit. For information about the RecordIO format, see Data Format.
See Also: SplitType

public TransformInput withSplitType(String splitType)

The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.

Amazon SageMaker sends the maximum number of records per batch in each request, up to the MaxPayloadInMB limit. For information about the RecordIO format, see Data Format.

splitType - The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.
See Also: SplitType

public TransformInput withSplitType(SplitType splitType)

The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.

Amazon SageMaker sends the maximum number of records per batch in each request, up to the MaxPayloadInMB limit. For information about the RecordIO format, see Data Format.

splitType - The method to use to split the transform job's data into smaller batches. If you don't want to split the data, specify None. If you want to split records on a newline character boundary, specify Line. To split records according to the RecordIO format, specify RecordIO. The default value is None.
See Also: SplitType

public String toString()

Returns a string representation of this object.
Overrides: toString in class Object
See Also: Object.toString()

public TransformInput clone()

public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.
Specified by: marshall in interface StructuredPojo
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.
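The paired overloads above (withCompressionType(String) versus withCompressionType(CompressionType), and likewise for withSplitType) are interchangeable: in this generated SDK, the enum overload stores the enum's string value, so both produce equal objects. A minimal sketch, again assuming the aws-java-sdk-sagemaker dependency:

```java
import com.amazonaws.services.sagemaker.model.CompressionType;
import com.amazonaws.services.sagemaker.model.SplitType;
import com.amazonaws.services.sagemaker.model.TransformInput;

public class OverloadEquivalence {

    // Returns true when the enum-based and string-based overloads
    // yield TransformInput objects with identical field values.
    static boolean equivalent() {
        // Enum overload: type-safe, catches typos at compile time.
        TransformInput viaEnum = new TransformInput()
                .withCompressionType(CompressionType.Gzip)
                .withSplitType(SplitType.RecordIO);

        // String overload: same stored value, useful when the value
        // arrives from configuration rather than code.
        TransformInput viaString = new TransformInput()
                .withCompressionType("Gzip")
                .withSplitType("RecordIO");

        return viaEnum.equals(viaString);
    }

    public static void main(String[] args) {
        System.out.println(equivalent());
    }
}
```

Prefer the enum overloads in application code; the String overloads exist mainly for values read at runtime.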