@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class TrainingJobDefinition extends Object implements Serializable, Cloneable, StructuredPojo
Defines the input needed to run a training job using the algorithm.
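The snippet below is a minimal sketch of populating a TrainingJobDefinition with the fluent with* methods documented on this page. The Channel, DataSource, S3DataSource, OutputDataConfig, ResourceConfig, and StoppingCondition builder calls, the package name, the instance type, and the S3 URIs are illustrative assumptions drawn from the surrounding SageMaker model package rather than from this page.

```java
import com.amazonaws.services.sagemaker.model.Channel;
import com.amazonaws.services.sagemaker.model.DataSource;
import com.amazonaws.services.sagemaker.model.OutputDataConfig;
import com.amazonaws.services.sagemaker.model.ResourceConfig;
import com.amazonaws.services.sagemaker.model.S3DataSource;
import com.amazonaws.services.sagemaker.model.StoppingCondition;
import com.amazonaws.services.sagemaker.model.TrainingInputMode;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class TrainingJobDefinitionExample {
    public static void main(String[] args) {
        // Input channel pointing at training data in S3 (channel name and URI are placeholders).
        Channel training = new Channel()
                .withChannelName("train")
                .withDataSource(new DataSource()
                        .withS3DataSource(new S3DataSource()
                                .withS3DataType("S3Prefix")
                                .withS3Uri("s3://my-bucket/train/")));

        TrainingJobDefinition definition = new TrainingJobDefinition()
                .withTrainingInputMode(TrainingInputMode.File)    // or the String overload: "File"
                .withInputDataConfig(training)                    // varargs overload appends to the channel list
                .withOutputDataConfig(new OutputDataConfig()
                        .withS3OutputPath("s3://my-bucket/output/"))
                .withResourceConfig(new ResourceConfig()
                        .withInstanceType("ml.m5.xlarge")
                        .withInstanceCount(1)
                        .withVolumeSizeInGB(50))
                .withStoppingCondition(new StoppingCondition()
                        .withMaxRuntimeInSeconds(3600))
                .addHyperParametersEntry("epochs", "10");

        System.out.println(definition); // toString() prints the populated fields
    }
}
```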
| Constructor and Description |
|---|
| TrainingJobDefinition() |
| Modifier and Type | Method and Description |
|---|---|
| TrainingJobDefinition | addHyperParametersEntry(String key, String value) - Add a single HyperParameters entry. |
| TrainingJobDefinition | clearHyperParametersEntries() - Removes all the entries added into HyperParameters. |
| TrainingJobDefinition | clone() |
| boolean | equals(Object obj) |
| Map<String,String> | getHyperParameters() - The hyperparameters used for the training job. |
| List<Channel> | getInputDataConfig() - An array of Channel objects, each of which specifies an input source. |
| OutputDataConfig | getOutputDataConfig() - The path to the S3 bucket where you want to store model artifacts. |
| ResourceConfig | getResourceConfig() - The resources, including the ML compute instances and ML storage volumes, to use for model training. |
| StoppingCondition | getStoppingCondition() - Specifies a limit to how long a model training job can run. |
| String | getTrainingInputMode() - The input mode used by the algorithm for the training job. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) - Marshalls this structured data using the given ProtocolMarshaller. |
| void | setHyperParameters(Map<String,String> hyperParameters) - The hyperparameters used for the training job. |
| void | setInputDataConfig(Collection<Channel> inputDataConfig) - An array of Channel objects, each of which specifies an input source. |
| void | setOutputDataConfig(OutputDataConfig outputDataConfig) - The path to the S3 bucket where you want to store model artifacts. |
| void | setResourceConfig(ResourceConfig resourceConfig) - The resources, including the ML compute instances and ML storage volumes, to use for model training. |
| void | setStoppingCondition(StoppingCondition stoppingCondition) - Specifies a limit to how long a model training job can run. |
| void | setTrainingInputMode(String trainingInputMode) - The input mode used by the algorithm for the training job. |
| String | toString() - Returns a string representation of this object. |
| TrainingJobDefinition | withHyperParameters(Map<String,String> hyperParameters) - The hyperparameters used for the training job. |
| TrainingJobDefinition | withInputDataConfig(Channel... inputDataConfig) - An array of Channel objects, each of which specifies an input source. |
| TrainingJobDefinition | withInputDataConfig(Collection<Channel> inputDataConfig) - An array of Channel objects, each of which specifies an input source. |
| TrainingJobDefinition | withOutputDataConfig(OutputDataConfig outputDataConfig) - The path to the S3 bucket where you want to store model artifacts. |
| TrainingJobDefinition | withResourceConfig(ResourceConfig resourceConfig) - The resources, including the ML compute instances and ML storage volumes, to use for model training. |
| TrainingJobDefinition | withStoppingCondition(StoppingCondition stoppingCondition) - Specifies a limit to how long a model training job can run. |
| TrainingJobDefinition | withTrainingInputMode(String trainingInputMode) - The input mode used by the algorithm for the training job. |
| TrainingJobDefinition | withTrainingInputMode(TrainingInputMode trainingInputMode) - The input mode used by the algorithm for the training job. |
public void setTrainingInputMode(String trainingInputMode)

The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.

Parameters:
trainingInputMode - The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.
See Also:
TrainingInputMode
public String getTrainingInputMode()

The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.

Returns:
The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.
See Also:
TrainingInputMode
public TrainingJobDefinition withTrainingInputMode(String trainingInputMode)

The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.

Parameters:
trainingInputMode - The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.
See Also:
TrainingInputMode
public TrainingJobDefinition withTrainingInputMode(TrainingInputMode trainingInputMode)

The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.

Parameters:
trainingInputMode - The input mode used by the algorithm for the training job. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.
See Also:
TrainingInputMode
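A quick illustration of the two withTrainingInputMode overloads above, as a minimal sketch; the File and Pipe constants on the TrainingInputMode enum are assumptions consistent with the description, not something defined on this page.

```java
import com.amazonaws.services.sagemaker.model.TrainingInputMode;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class InputModeExample {
    public static void main(String[] args) {
        // String overload: the value is stored verbatim.
        TrainingJobDefinition fileMode = new TrainingJobDefinition()
                .withTrainingInputMode("File");

        // Enum overload: equivalent, but checked at compile time
        // (assumes TrainingInputMode defines File and Pipe constants).
        TrainingJobDefinition pipeMode = new TrainingJobDefinition()
                .withTrainingInputMode(TrainingInputMode.Pipe);

        System.out.println(fileMode.getTrainingInputMode()); // "File"
        System.out.println(pipeMode.getTrainingInputMode()); // "Pipe"
    }
}
```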
public Map<String,String> getHyperParameters()
The hyperparameters used for the training job.
public void setHyperParameters(Map<String,String> hyperParameters)

The hyperparameters used for the training job.

Parameters:
hyperParameters - The hyperparameters used for the training job.

public TrainingJobDefinition withHyperParameters(Map<String,String> hyperParameters)
The hyperparameters used for the training job.

Parameters:
hyperParameters - The hyperparameters used for the training job.

public TrainingJobDefinition addHyperParametersEntry(String key, String value)

Add a single HyperParameters entry.
public TrainingJobDefinition clearHyperParametersEntries()

Removes all the entries added into HyperParameters.
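A small sketch of the hyperparameter accessors above; the parameter names and values are placeholders.

```java
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class HyperParametersExample {
    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("epochs", "10");

        TrainingJobDefinition def = new TrainingJobDefinition()
                .withHyperParameters(params)                   // sets the whole map
                .addHyperParametersEntry("batch_size", "32")   // adds a single entry
                .addHyperParametersEntry("learning_rate", "0.01");

        // Contains epochs=10, batch_size=32, learning_rate=0.01 (map ordering not guaranteed).
        System.out.println(def.getHyperParameters());

        def.clearHyperParametersEntries();   // removes every entry
        System.out.println(def.getHyperParameters()); // typically null after clearing
    }
}
```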
public List<Channel> getInputDataConfig()

An array of Channel objects, each of which specifies an input source.

Returns:
An array of Channel objects, each of which specifies an input source.

public void setInputDataConfig(Collection<Channel> inputDataConfig)
An array of Channel objects, each of which specifies an input source.

Parameters:
inputDataConfig - An array of Channel objects, each of which specifies an input source.

public TrainingJobDefinition withInputDataConfig(Channel... inputDataConfig)
An array of Channel objects, each of which specifies an input source.

NOTE: This method appends the values to the existing list (if any). Use setInputDataConfig(java.util.Collection) or withInputDataConfig(java.util.Collection) if you want to override the existing values.

Parameters:
inputDataConfig - An array of Channel objects, each of which specifies an input source.

public TrainingJobDefinition withInputDataConfig(Collection<Channel> inputDataConfig)
An array of Channel objects, each of which specifies an input source.

Parameters:
inputDataConfig - An array of Channel objects, each of which specifies an input source.
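The difference between the appending varargs overload and the overriding Collection setters, sketched below with placeholder channel names; Channel and its withChannelName builder are assumptions from the same model package.

```java
import java.util.Arrays;
import com.amazonaws.services.sagemaker.model.Channel;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class InputDataConfigExample {
    public static void main(String[] args) {
        Channel train = new Channel().withChannelName("train");
        Channel validation = new Channel().withChannelName("validation");

        TrainingJobDefinition def = new TrainingJobDefinition();

        // The varargs overload appends to whatever list is already present.
        def.withInputDataConfig(train);
        def.withInputDataConfig(validation);
        System.out.println(def.getInputDataConfig().size()); // 2

        // The Collection overloads replace the existing list instead.
        def.setInputDataConfig(Arrays.asList(train));
        System.out.println(def.getInputDataConfig().size()); // 1
    }
}
```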
public void setOutputDataConfig(OutputDataConfig outputDataConfig)
The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.

Parameters:
outputDataConfig - The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.

public OutputDataConfig getOutputDataConfig()
The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.
public TrainingJobDefinition withOutputDataConfig(OutputDataConfig outputDataConfig)
The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.

Parameters:
outputDataConfig - The path to the S3 bucket where you want to store model artifacts. Amazon SageMaker creates subfolders for the artifacts.
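A minimal sketch of the output configuration described above; the withS3OutputPath and getS3OutputPath builder methods and the bucket URI are assumptions from the same model package.

```java
import com.amazonaws.services.sagemaker.model.OutputDataConfig;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class OutputDataConfigExample {
    public static void main(String[] args) {
        // SageMaker stores model artifacts under this path, in subfolders it creates.
        OutputDataConfig output = new OutputDataConfig()
                .withS3OutputPath("s3://my-bucket/model-artifacts/");

        TrainingJobDefinition def = new TrainingJobDefinition()
                .withOutputDataConfig(output);

        System.out.println(def.getOutputDataConfig().getS3OutputPath());
    }
}
```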
public void setResourceConfig(ResourceConfig resourceConfig)
The resources, including the ML compute instances and ML storage volumes, to use for model training.

Parameters:
resourceConfig - The resources, including the ML compute instances and ML storage volumes, to use for model training.

public ResourceConfig getResourceConfig()
The resources, including the ML compute instances and ML storage volumes, to use for model training.
public TrainingJobDefinition withResourceConfig(ResourceConfig resourceConfig)
The resources, including the ML compute instances and ML storage volumes, to use for model training.

Parameters:
resourceConfig - The resources, including the ML compute instances and ML storage volumes, to use for model training.
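A small sketch of the resource configuration above; ResourceConfig's builder methods and the instance type value are assumptions from the same model package.

```java
import com.amazonaws.services.sagemaker.model.ResourceConfig;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class ResourceConfigExample {
    public static void main(String[] args) {
        // One ML compute instance with a 50 GB ML storage volume (illustrative values).
        ResourceConfig resources = new ResourceConfig()
                .withInstanceType("ml.m5.xlarge")
                .withInstanceCount(1)
                .withVolumeSizeInGB(50);

        TrainingJobDefinition def = new TrainingJobDefinition()
                .withResourceConfig(resources);

        System.out.println(def.getResourceConfig());
    }
}
```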
public void setStoppingCondition(StoppingCondition stoppingCondition)
Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
Parameters:
stoppingCondition - Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
public StoppingCondition getStoppingCondition()
Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
Returns:
Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
public TrainingJobDefinition withStoppingCondition(StoppingCondition stoppingCondition)
Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs.
To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
Parameters:
stoppingCondition - Specifies a limit to how long a model training job can run. When the job reaches the time limit, Amazon SageMaker ends the training job. Use this API to cap model training costs. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts.
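A minimal sketch of capping a job's runtime with the stopping condition described above; withMaxRuntimeInSeconds and getMaxRuntimeInSeconds are assumed from the StoppingCondition class in the same package.

```java
import com.amazonaws.services.sagemaker.model.StoppingCondition;
import com.amazonaws.services.sagemaker.model.TrainingJobDefinition;

public class StoppingConditionExample {
    public static void main(String[] args) {
        // Cap the training job at one hour; when the limit is reached, SageMaker
        // sends SIGTERM and allows 120 seconds for the algorithm to save artifacts.
        StoppingCondition limit = new StoppingCondition()
                .withMaxRuntimeInSeconds(3600);

        TrainingJobDefinition def = new TrainingJobDefinition()
                .withStoppingCondition(limit);

        System.out.println(def.getStoppingCondition().getMaxRuntimeInSeconds()); // 3600
    }
}
```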
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public TrainingJobDefinition clone()
public void marshall(ProtocolMarshaller protocolMarshaller)
Description copied from interface: StructuredPojo
Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller
used to marshall this object's data.