Class TransformInput
- java.lang.Object
  - software.amazon.awssdk.services.sagemaker.model.TransformInput
-
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<TransformInput.Builder,TransformInput>
@Generated("software.amazon.awssdk:codegen") public final class TransformInput extends Object implements SdkPojo, Serializable, ToCopyableBuilder<TransformInput.Builder,TransformInput>
Describes the input source of a transform job and the way the transform job consumes it.
- See Also:
- Serialized Form
-
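As an orientation aid, the following is a minimal sketch of assembling a TransformInput through its builder. The bucket, prefix, and content type are placeholder assumptions, not values from this page; TransformS3DataSource and S3DataType are the companion types from the same model package.

    import software.amazon.awssdk.services.sagemaker.model.CompressionType;
    import software.amazon.awssdk.services.sagemaker.model.S3DataType;
    import software.amazon.awssdk.services.sagemaker.model.SplitType;
    import software.amazon.awssdk.services.sagemaker.model.TransformDataSource;
    import software.amazon.awssdk.services.sagemaker.model.TransformInput;
    import software.amazon.awssdk.services.sagemaker.model.TransformS3DataSource;

    public class TransformInputExample {
        public static void main(String[] args) {
            TransformInput input = TransformInput.builder()
                    .dataSource(TransformDataSource.builder()
                            .s3DataSource(TransformS3DataSource.builder()
                                    .s3DataType(S3DataType.S3_PREFIX)     // read every object under the prefix
                                    .s3Uri("s3://my-bucket/batch-input/") // hypothetical location
                                    .build())
                            .build())
                    .contentType("text/csv")                  // MIME type sent with each HTTP call
                    .compressionType(CompressionType.NONE)    // default: input objects are not compressed
                    .splitType(SplitType.LINE)                // split records on newline boundaries
                    .build();

            System.out.println(input);   // toString() redacts sensitive fields
        }
    }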
-
Nested Class Summary
Nested Classes
- static interface TransformInput.Builder
-
Method Summary
- static TransformInput.Builder builder()
- CompressionType compressionType()
  If your transform data is compressed, specify the compression type.
- String compressionTypeAsString()
  If your transform data is compressed, specify the compression type.
- String contentType()
  The Multipurpose Internet Mail Extensions (MIME) type of the data.
- TransformDataSource dataSource()
  Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- boolean equals(Object obj)
- boolean equalsBySdkFields(Object obj)
- <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
- int hashCode()
- Map<String,SdkField<?>> sdkFieldNameToField()
- List<SdkField<?>> sdkFields()
- static Class<? extends TransformInput.Builder> serializableBuilderClass()
- SplitType splitType()
  The method to use to split the transform job's data files into smaller batches.
- String splitTypeAsString()
  The method to use to split the transform job's data files into smaller batches.
- TransformInput.Builder toBuilder()
- String toString()
  Returns a string representation of this object.
-
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
-
Method Detail
-
dataSource
public final TransformDataSource dataSource()
Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
- Returns:
- Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.
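As a sketch of the other common variant (the manifest path below is a placeholder), the data source can also point at a manifest file that enumerates the exact S3 objects to transform, rather than a prefix:

    TransformDataSource manifestSource = TransformDataSource.builder()
            .s3DataSource(TransformS3DataSource.builder()
                    .s3DataType(S3DataType.MANIFEST_FILE)             // the URI names a manifest, not a prefix
                    .s3Uri("s3://my-bucket/manifests/input.manifest") // hypothetical manifest location
                    .build())
            .build();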
-
contentType
public final String contentType()
The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
- Returns:
- The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
-
compressionType
public final CompressionType compressionType()
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().
- Returns:
- If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- See Also:
CompressionType
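A short defensive-reading sketch of the enum fallback described above, assuming an already-retrieved TransformInput (for example, from a describe call):

    static void logCompression(TransformInput input) {
        CompressionType type = input.compressionType();
        if (type == CompressionType.UNKNOWN_TO_SDK_VERSION) {
            // The service sent a value this SDK version does not model; use the raw string.
            System.out.println("Unmodeled compression type: " + input.compressionTypeAsString());
        } else {
            System.out.println("Compression type: " + type);
        }
    }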
-
compressionTypeAsString
public final String compressionTypeAsString()
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
If the service returns an enum value that is not available in the current SDK version, compressionType will return CompressionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from compressionTypeAsString().
- Returns:
- If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
- See Also:
CompressionType
-
splitType
public final SplitType splitType()
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
If the service returns an enum value that is not available in the current SDK version, splitType will return SplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from splitTypeAsString().
- Returns:
- The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- See Also:
SplitType
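To make the interaction with BatchStrategy and MaxPayloadInMB concrete, here is a hypothetical CreateTransformJobRequest sketch; the job name, model name, S3 paths, and instance settings are placeholder assumptions:

    CreateTransformJobRequest request = CreateTransformJobRequest.builder()
            .transformJobName("nightly-scoring")           // placeholder name
            .modelName("my-model")                         // placeholder model
            .batchStrategy(BatchStrategy.MULTI_RECORD)     // pack as many records as fit per request
            .maxPayloadInMB(6)                             // upper bound on each request payload
            .transformInput(TransformInput.builder()
                    .dataSource(TransformDataSource.builder()
                            .s3DataSource(TransformS3DataSource.builder()
                                    .s3DataType(S3DataType.S3_PREFIX)
                                    .s3Uri("s3://my-bucket/batch-input/")
                                    .build())
                            .build())
                    .splitType(SplitType.LINE)             // each newline-delimited line is one record
                    .build())
            .transformOutput(TransformOutput.builder()
                    .s3OutputPath("s3://my-bucket/batch-output/")
                    .build())
            .transformResources(TransformResources.builder()
                    .instanceType(TransformInstanceType.ML_M5_XLARGE)
                    .instanceCount(1)
                    .build())
            .build();

Under these settings, SageMaker fills each request with complete lines up to the 6 MB cap; switching to BatchStrategy.SINGLE_RECORD would instead send one line per request.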
-
-
splitTypeAsString
public final String splitTypeAsString()
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
If the service returns an enum value that is not available in the current SDK version, splitType will return SplitType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from splitTypeAsString().
- Returns:
- The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
- See Also:
SplitType
-
-
toBuilder
public TransformInput.Builder toBuilder()
- Specified by:
toBuilder in interface ToCopyableBuilder<TransformInput.Builder,TransformInput>
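A small sketch of the copy-and-modify pattern this enables; existingInput is an assumed, already-built instance, and the new content type is arbitrary:

    TransformInput jsonVariant = existingInput.toBuilder()
            .contentType("application/jsonlines") // only this field changes in the copy
            .build();                             // existingInput itself is immutable and untouched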
-
builder
public static TransformInput.Builder builder()
-
serializableBuilderClass
public static Class<? extends TransformInput.Builder> serializableBuilderClass()
-
equalsBySdkFields
public final boolean equalsBySdkFields(Object obj)
- Specified by:
equalsBySdkFields in interface SdkPojo
-
toString
public final String toString()
Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
-
sdkFieldNameToField
public final Map<String,SdkField<?>> sdkFieldNameToField()
- Specified by:
sdkFieldNameToField in interface SdkPojo
-