public class AmazonMachineLearningClient extends AmazonWebServiceClient implements AmazonMachineLearning
Definition of the public APIs exposed by Amazon Machine Learning.
Modifier and Type | Field and Description |
---|---|
protected List<com.amazonaws.transform.JsonErrorUnmarshallerV2> | jsonErrorUnmarshallers: List of exception unmarshallers for all Amazon Machine Learning exceptions. |
Fields inherited from class com.amazonaws.AmazonWebServiceClient: client, clientConfiguration, endpoint, LOGGING_AWS_REQUEST_METRIC, requestHandler2s, timeOffset
Constructor and Description |
---|
AmazonMachineLearningClient(): Constructs a new client to invoke service methods on Amazon Machine Learning. |
AmazonMachineLearningClient(AWSCredentials awsCredentials): Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials. |
AmazonMachineLearningClient(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration): Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials and client configuration options. |
AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider): Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider. |
AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration): Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider and client configuration options. |
AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration, RequestMetricCollector requestMetricCollector): Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider, client configuration options, and request metric collector. |
AmazonMachineLearningClient(ClientConfiguration clientConfiguration): Constructs a new client to invoke service methods on Amazon Machine Learning. |
Modifier and Type | Method and Description |
---|---|
CreateBatchPredictionResult | createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest): Generates predictions for a group of observations. |
CreateDataSourceFromRDSResult | createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest): Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). |
CreateDataSourceFromRedshiftResult | createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest): Creates a DataSource from Amazon Redshift. |
CreateDataSourceFromS3Result | createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request): Creates a DataSource object. |
CreateEvaluationResult | createEvaluation(CreateEvaluationRequest createEvaluationRequest): Creates a new Evaluation of an MLModel. |
CreateMLModelResult | createMLModel(CreateMLModelRequest createMLModelRequest): Creates a new MLModel using the data files and the recipe as information sources. |
CreateRealtimeEndpointResult | createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest): Creates a real-time endpoint for the MLModel. |
DeleteBatchPredictionResult | deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest): Assigns the DELETED status to a BatchPrediction, rendering it unusable. |
DeleteDataSourceResult | deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest): Assigns the DELETED status to a DataSource, rendering it unusable. |
DeleteEvaluationResult | deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest): Assigns the DELETED status to an Evaluation, rendering it unusable. |
DeleteMLModelResult | deleteMLModel(DeleteMLModelRequest deleteMLModelRequest): Assigns the DELETED status to an MLModel, rendering it unusable. |
DeleteRealtimeEndpointResult | deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest): Deletes a real-time endpoint of an MLModel. |
DescribeBatchPredictionsResult | describeBatchPredictions(): Simplified method form for invoking the DescribeBatchPredictions operation. |
DescribeBatchPredictionsResult | describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest): Returns a list of BatchPrediction operations that match the search criteria in the request. |
DescribeDataSourcesResult | describeDataSources(): Simplified method form for invoking the DescribeDataSources operation. |
DescribeDataSourcesResult | describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest): Returns a list of DataSource that match the search criteria in the request. |
DescribeEvaluationsResult | describeEvaluations(): Simplified method form for invoking the DescribeEvaluations operation. |
DescribeEvaluationsResult | describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest): Returns a list of DescribeEvaluations that match the search criteria in the request. |
DescribeMLModelsResult | describeMLModels(): Simplified method form for invoking the DescribeMLModels operation. |
DescribeMLModelsResult | describeMLModels(DescribeMLModelsRequest describeMLModelsRequest): Returns a list of MLModel that match the search criteria in the request. |
GetBatchPredictionResult | getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest): Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request. |
ResponseMetadata | getCachedResponseMetadata(AmazonWebServiceRequest request): Returns additional metadata for a previously executed successful request, typically used for debugging issues where a service isn't acting as expected. |
GetDataSourceResult | getDataSource(GetDataSourceRequest getDataSourceRequest): Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource. |
GetEvaluationResult | getEvaluation(GetEvaluationRequest getEvaluationRequest): Returns an Evaluation that includes metadata as well as the current status of the Evaluation. |
GetMLModelResult | getMLModel(GetMLModelRequest getMLModelRequest): Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel. |
PredictResult | predict(PredictRequest predictRequest): Generates a prediction for the observation using the specified MLModel. |
UpdateBatchPredictionResult | updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest): Updates the BatchPredictionName of a BatchPrediction. |
UpdateDataSourceResult | updateDataSource(UpdateDataSourceRequest updateDataSourceRequest): Updates the DataSourceName of a DataSource. |
UpdateEvaluationResult | updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest): Updates the EvaluationName of an Evaluation. |
UpdateMLModelResult | updateMLModel(UpdateMLModelRequest updateMLModelRequest): Updates the MLModelName and the ScoreThreshold of an MLModel. |
Methods inherited from class com.amazonaws.AmazonWebServiceClient: addRequestHandler, addRequestHandler, beforeMarshalling, configSigner, configSigner, configureRegion, createExecutionContext, createExecutionContext, createExecutionContext, endClientExecution, endClientExecution, findRequestMetricCollector, getRequestMetricsCollector, getServiceAbbreviation, getServiceName, getServiceNameIntern, getSigner, getSignerByURI, getSignerRegionOverride, getTimeOffset, isProfilingEnabled, isRequestMetricsEnabled, removeRequestHandler, removeRequestHandler, requestMetricCollector, setEndpoint, setEndpoint, setEndpointPrefix, setRegion, setServiceNameIntern, setSignerRegionOverride, setTimeOffset, shutdown, withEndpoint, withRegion, withRegion, withTimeOffset
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.amazonaws.services.machinelearning.AmazonMachineLearning: setEndpoint, setRegion, shutdown
protected List<com.amazonaws.transform.JsonErrorUnmarshallerV2> jsonErrorUnmarshallers
public AmazonMachineLearningClient()
Constructs a new client to invoke service methods on Amazon Machine Learning. All service calls made using this new client object are blocking, and will not return until the service call completes.
See Also: DefaultAWSCredentialsProviderChain

public AmazonMachineLearningClient(ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).
See Also: DefaultAWSCredentialsProviderChain
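For illustration, a minimal sketch (not part of the original Javadoc) of constructing the client with the default credentials provider chain and with explicit credentials plus a custom ClientConfiguration; the region and credential values are placeholders:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;

public class ClientSetupExample {
    public static void main(String[] args) {
        // Default constructor: credentials are resolved through the
        // DefaultAWSCredentialsProviderChain (environment, system properties,
        // profile file, instance profile).
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();
        client.setRegion(Region.getRegion(Regions.US_EAST_1)); // placeholder region

        // Explicit credentials plus configuration (proxy, retry counts, timeouts).
        ClientConfiguration config = new ClientConfiguration().withMaxErrorRetry(3);
        AmazonMachineLearningClient configuredClient = new AmazonMachineLearningClient(
                new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_KEY"), config);

        // Release resources when the clients are no longer needed.
        client.shutdown();
        configuredClient.shutdown();
    }
}
```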
public AmazonMachineLearningClient(AWSCredentials awsCredentials)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.

public AmazonMachineLearningClient(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials and client configuration options. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters:
awsCredentials - The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters: awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider and client configuration options. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters:
awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).

public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration, RequestMetricCollector requestMetricCollector)
Constructs a new client to invoke service methods on Amazon Machine Learning using the specified AWS account credentials provider, client configuration options, and request metric collector. All service calls made using this new client object are blocking, and will not return until the service call completes.
Parameters:
awsCredentialsProvider - The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration - The client configuration options controlling how this client connects to Amazon Machine Learning (ex: proxy settings, retry counts, etc.).
requestMetricCollector - optional request metric collector

public CreateBatchPredictionResult createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest)
Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
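A hedged sketch of the asynchronous pattern described above: submit the request, then poll GetBatchPrediction until the status leaves PENDING/INPROGRESS. All IDs and the S3 output URI are placeholders, not values from this documentation:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateBatchPredictionRequest;
import com.amazonaws.services.machinelearning.model.GetBatchPredictionRequest;
import com.amazonaws.services.machinelearning.model.GetBatchPredictionResult;

public class BatchPredictionExample {
    public static void main(String[] args) throws InterruptedException {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // Submit the batch prediction; the call returns immediately with status PENDING.
        client.createBatchPrediction(new CreateBatchPredictionRequest()
                .withBatchPredictionId("bp-example-id")             // placeholder ID
                .withBatchPredictionName("example batch prediction")
                .withMLModelId("ml-example-model-id")               // placeholder ID
                .withBatchPredictionDataSourceId("ds-example-id")   // placeholder ID
                .withOutputUri("s3://example-bucket/batch-output/")); // placeholder URI

        // Poll GetBatchPrediction until Amazon ML reports COMPLETED or FAILED.
        String status;
        do {
            Thread.sleep(30_000);
            GetBatchPredictionResult result = client.getBatchPrediction(
                    new GetBatchPredictionRequest().withBatchPredictionId("bp-example-id"));
            status = result.getStatus();
        } while ("PENDING".equals(status) || "INPROGRESS".equals(status));

        System.out.println("Final status: " + status);
        client.shutdown();
    }
}
```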
Specified by: createBatchPrediction in interface AmazonMachineLearning
Parameters: createBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromRDSResult createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest)
Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

Specified by: createDataSourceFromRDS in interface AmazonMachineLearning
Parameters: createDataSourceFromRDSRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromRedshiftResult createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest)
Creates a DataSource from Amazon Redshift. A DataSource references data that can be used to perform either CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should exist in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of the SelectSqlQuery to S3StagingLocation.

After the DataSource is created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel. A recipe describes how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.

Specified by: createDataSourceFromRedshift in interface AmazonMachineLearning
Parameters: createDataSourceFromRedshiftRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateDataSourceFromS3Result createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request)
Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more CSV files in an Amazon Simple Storage Service (Amazon S3) bucket, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource.

After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel. A recipe describes how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable, or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.
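A hedged sketch of creating an S3-backed DataSource; the bucket paths and IDs are placeholders, and the withXxx setters are assumed to follow the usual SDK naming for the S3DataSpec and request fields:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateDataSourceFromS3Request;
import com.amazonaws.services.machinelearning.model.S3DataSpec;

public class S3DataSourceExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // Points at CSV observation data in S3 plus the schema that describes it.
        S3DataSpec dataSpec = new S3DataSpec()
                .withDataLocationS3("s3://example-bucket/training/banking.csv")              // placeholder
                .withDataSchemaLocationS3("s3://example-bucket/training/banking.csv.schema"); // placeholder

        // ComputeStatistics must be true if the DataSource will be used to train an MLModel.
        client.createDataSourceFromS3(new CreateDataSourceFromS3Request()
                .withDataSourceId("ds-s3-example-id")        // placeholder ID
                .withDataSourceName("example S3 data source")
                .withDataSpec(dataSpec)
                .withComputeStatistics(true));

        client.shutdown();
    }
}
```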
Specified by: createDataSourceFromS3 in interface AmazonMachineLearning
Parameters: createDataSourceFromS3Request
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateEvaluationResult createEvaluation(CreateEvaluationRequest createEvaluationRequest)
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effective the MLModel functions on the test data. Evaluation generates a relevant performance metric such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.
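A minimal sketch of creating an Evaluation against a held-out DataSource; all IDs are placeholders:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateEvaluationRequest;

public class EvaluationExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // The evaluation DataSource must contain values for the target variable.
        client.createEvaluation(new CreateEvaluationRequest()
                .withEvaluationId("ev-example-id")                  // placeholder ID
                .withEvaluationName("example evaluation")
                .withMLModelId("ml-example-model-id")               // model being evaluated
                .withEvaluationDataSourceId("ds-holdout-example")); // held-out observations

        // The call returns immediately (status PENDING); use getEvaluation to check progress.
        client.shutdown();
    }
}
```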
Specified by: createEvaluation in interface AmazonMachineLearning
Parameters: createEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateMLModelResult createMLModel(CreateMLModelRequest createMLModelRequest)
Creates a new MLModel using the data files and the recipe as information sources.

An MLModel is nearly immutable. Users can only update the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel.

CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING. After the MLModel is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetMLModel operation to check progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
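A hedged sketch of training a binary MLModel from a DataSource that was created with ComputeStatistics set to true; IDs and the recipe URI are placeholders:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateMLModelRequest;
import com.amazonaws.services.machinelearning.model.MLModelType;

public class CreateModelExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        client.createMLModel(new CreateMLModelRequest()
                .withMLModelId("ml-example-model-id")              // placeholder ID
                .withMLModelName("example binary model")
                .withMLModelType(MLModelType.BINARY)               // BINARY, REGRESSION, or MULTICLASS
                .withTrainingDataSourceId("ds-s3-example-id")      // DataSource with computed statistics
                .withRecipeUri("s3://example-bucket/recipes/banking.recipe")); // optional custom recipe

        // Status starts at PENDING; poll getMLModel until it reaches COMPLETED.
        client.shutdown();
    }
}
```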
Specified by: createMLModel in interface AmazonMachineLearning
Parameters: createMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.
IdempotentParameterMismatchException - A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

public CreateRealtimeEndpointResult createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest)
Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the MLModel; that is, the location to send real-time prediction requests for the specified MLModel.
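A brief sketch of requesting a real-time endpoint and reading back the endpoint information; the getters on RealtimeEndpointInfo are assumed from the model class names, and the ID is a placeholder:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.CreateRealtimeEndpointRequest;
import com.amazonaws.services.machinelearning.model.CreateRealtimeEndpointResult;

public class RealtimeEndpointExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        CreateRealtimeEndpointResult result = client.createRealtimeEndpoint(
                new CreateRealtimeEndpointRequest().withMLModelId("ml-example-model-id"));

        // The endpoint info carries the URI to which Predict requests are sent,
        // plus the endpoint status (it may take a few minutes to become usable).
        System.out.println("Endpoint URL:    " + result.getRealtimeEndpointInfo().getEndpointUrl());
        System.out.println("Endpoint status: " + result.getRealtimeEndpointInfo().getEndpointStatus());

        client.shutdown();
    }
}
```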
Specified by: createRealtimeEndpoint in interface AmazonMachineLearning
Parameters: createRealtimeEndpointRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteBatchPredictionResult deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest)
Assigns the DELETED status to a BatchPrediction, rendering it unusable.

After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

Caution: The result of the DeleteBatchPrediction operation is irreversible.
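A short sketch of the delete-then-verify flow described above (the ID is a placeholder); the same pattern applies to deleteDataSource, deleteEvaluation, and deleteMLModel:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.DeleteBatchPredictionRequest;
import com.amazonaws.services.machinelearning.model.GetBatchPredictionRequest;

public class DeleteBatchPredictionExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // Irreversible: marks the BatchPrediction as DELETED.
        client.deleteBatchPrediction(
                new DeleteBatchPredictionRequest().withBatchPredictionId("bp-example-id"));

        // Verify the status transition via GetBatchPrediction.
        String status = client.getBatchPrediction(
                new GetBatchPredictionRequest().withBatchPredictionId("bp-example-id")).getStatus();
        System.out.println("Status after delete: " + status); // expected: DELETED

        client.shutdown();
    }
}
```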
Specified by: deleteBatchPrediction in interface AmazonMachineLearning
Parameters: deleteBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteDataSourceResult deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest)
Assigns the DELETED status to a DataSource, rendering it unusable.

After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

Caution: The results of the DeleteDataSource operation are irreversible.

Specified by: deleteDataSource in interface AmazonMachineLearning
Parameters: deleteDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteEvaluationResult deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest)
Assigns the DELETED status to an Evaluation, rendering it unusable.

After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

Caution: The results of the DeleteEvaluation operation are irreversible.

Specified by: deleteEvaluation in interface AmazonMachineLearning
Parameters: deleteEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteMLModelResult deleteMLModel(DeleteMLModelRequest deleteMLModelRequest)
Assigns the DELETED status to an MLModel, rendering it unusable.

After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

Caution: The result of the DeleteMLModel operation is irreversible.

Specified by: deleteMLModel in interface AmazonMachineLearning
Parameters: deleteMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DeleteRealtimeEndpointResult deleteRealtimeEndpoint(DeleteRealtimeEndpointRequest deleteRealtimeEndpointRequest)
Deletes a real-time endpoint of an MLModel.

Specified by: deleteRealtimeEndpoint in interface AmazonMachineLearning
Parameters: deleteRealtimeEndpointRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeBatchPredictionsResult describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest)
Returns a list of BatchPrediction operations that match the search criteria in the request.
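A hedged sketch of filtering and paging through batch predictions; the filter variable name and result getters shown are assumptions based on the request and result model classes:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.BatchPrediction;
import com.amazonaws.services.machinelearning.model.DescribeBatchPredictionsRequest;
import com.amazonaws.services.machinelearning.model.DescribeBatchPredictionsResult;

public class DescribeBatchPredictionsExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // Page through all COMPLETED batch predictions, 100 per call.
        String nextToken = null;
        do {
            DescribeBatchPredictionsResult page = client.describeBatchPredictions(
                    new DescribeBatchPredictionsRequest()
                            .withFilterVariable("Status")   // assumed filter variable name
                            .withEQ("COMPLETED")
                            .withLimit(100)
                            .withNextToken(nextToken));
            for (BatchPrediction bp : page.getResults()) {
                System.out.println(bp.getBatchPredictionId() + " : " + bp.getName());
            }
            nextToken = page.getNextToken();
        } while (nextToken != null);

        client.shutdown();
    }
}
```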
Specified by: describeBatchPredictions in interface AmazonMachineLearning
Parameters: describeBatchPredictionsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeBatchPredictionsResult describeBatchPredictions()
Simplified method form for invoking the DescribeBatchPredictions operation.
Specified by: describeBatchPredictions in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeBatchPredictions(DescribeBatchPredictionsRequest)
public DescribeDataSourcesResult describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest)
Returns a list of DataSource that match the search criteria in the request.

Specified by: describeDataSources in interface AmazonMachineLearning
Parameters: describeDataSourcesRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeDataSourcesResult describeDataSources()
Simplified method form for invoking the DescribeDataSources operation.
Specified by: describeDataSources in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeDataSources(DescribeDataSourcesRequest)
public DescribeEvaluationsResult describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest)
Returns a list of DescribeEvaluations that match the search criteria in the request.

Specified by: describeEvaluations in interface AmazonMachineLearning
Parameters: describeEvaluationsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeEvaluationsResult describeEvaluations()
Simplified method form for invoking the DescribeEvaluations operation.
Specified by: describeEvaluations in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeEvaluations(DescribeEvaluationsRequest)
public DescribeMLModelsResult describeMLModels(DescribeMLModelsRequest describeMLModelsRequest)
Returns a list of MLModel that match the search criteria in the request.

Specified by: describeMLModels in interface AmazonMachineLearning
Parameters: describeMLModelsRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
InternalServerException - An error on the server occurred when trying to process a request.

public DescribeMLModelsResult describeMLModels()
Simplified method form for invoking the DescribeMLModels operation.
Specified by: describeMLModels in interface AmazonMachineLearning
See Also: AmazonMachineLearning.describeMLModels(DescribeMLModelsRequest)
public GetBatchPredictionResult getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest)
Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.

Specified by: getBatchPrediction in interface AmazonMachineLearning
Parameters: getBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetDataSourceResult getDataSource(GetDataSourceRequest getDataSourceRequest)
Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

Specified by: getDataSource in interface AmazonMachineLearning
Parameters: getDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetEvaluationResult getEvaluation(GetEvaluationRequest getEvaluationRequest)
Returns an Evaluation that includes metadata as well as the current status of the Evaluation.

Specified by: getEvaluation in interface AmazonMachineLearning
Parameters: getEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public GetMLModelResult getMLModel(GetMLModelRequest getMLModelRequest)
Returns an MLModel that includes detailed metadata, data source information, and the current status of the MLModel.

GetMLModel provides results in normal or verbose format.

Specified by: getMLModel in interface AmazonMachineLearning
Parameters: getMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public PredictResult predict(PredictRequest predictRequest)
Generates a prediction for the observation using the specified MLModel.

Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.
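A hedged sketch of a real-time Predict call; the endpoint URL, model ID, and record attributes are placeholders, and which prediction fields are populated depends on the MLModelType:

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.PredictRequest;
import com.amazonaws.services.machinelearning.model.PredictResult;

public class PredictExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // One observation, keyed by the attribute names from the DataSource schema.
        Map<String, String> record = new HashMap<String, String>();
        record.put("age", "42");            // placeholder attributes
        record.put("jobTitle", "engineer");

        PredictResult result = client.predict(new PredictRequest()
                .withMLModelId("ml-example-model-id")
                .withRecord(record)
                .withPredictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")); // placeholder endpoint

        // For a BINARY model the label and scores are populated; REGRESSION models
        // populate the predicted value instead.
        System.out.println("Predicted label: " + result.getPrediction().getPredictedLabel());
        System.out.println("Scores:          " + result.getPrediction().getPredictedScores());

        client.shutdown();
    }
}
```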
Specified by: predict in interface AmazonMachineLearning
Parameters: predictRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
LimitExceededException - The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.
InternalServerException - An error on the server occurred when trying to process a request.
PredictorNotMountedException - The exception is thrown when a predict request is made to an unmounted MLModel.

public UpdateBatchPredictionResult updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest)
Updates the BatchPredictionName of a BatchPrediction.

You can use the GetBatchPrediction operation to view the contents of the updated data element.

Specified by: updateBatchPrediction in interface AmazonMachineLearning
Parameters: updateBatchPredictionRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateDataSourceResult updateDataSource(UpdateDataSourceRequest updateDataSourceRequest)
Updates the DataSourceName of a DataSource.

You can use the GetDataSource operation to view the contents of the updated data element.

Specified by: updateDataSource in interface AmazonMachineLearning
Parameters: updateDataSourceRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateEvaluationResult updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest)
Updates the EvaluationName of an Evaluation.

You can use the GetEvaluation operation to view the contents of the updated data element.

Specified by: updateEvaluation in interface AmazonMachineLearning
Parameters: updateEvaluationRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public UpdateMLModelResult updateMLModel(UpdateMLModelRequest updateMLModelRequest)
Updates the MLModelName and the ScoreThreshold of an MLModel.

You can use the GetMLModel operation to view the contents of the updated data element.
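A short sketch of renaming an MLModel and adjusting its ScoreThreshold, then reading the result back with GetMLModel; the values are illustrative:

```java
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.GetMLModelRequest;
import com.amazonaws.services.machinelearning.model.GetMLModelResult;
import com.amazonaws.services.machinelearning.model.UpdateMLModelRequest;

public class UpdateModelExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        // Rename the model and move the binary classification cutoff to 0.75.
        client.updateMLModel(new UpdateMLModelRequest()
                .withMLModelId("ml-example-model-id")
                .withMLModelName("example binary model v2")
                .withScoreThreshold(0.75f));

        // Confirm the update.
        GetMLModelResult model = client.getMLModel(
                new GetMLModelRequest().withMLModelId("ml-example-model-id"));
        System.out.println(model.getName() + " threshold=" + model.getScoreThreshold());

        client.shutdown();
    }
}
```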
Specified by: updateMLModel in interface AmazonMachineLearning
Parameters: updateMLModelRequest
Throws:
InvalidInputException - An error on the client occurred. Typically, the cause is an invalid input value.
ResourceNotFoundException - A specified resource cannot be located.
InternalServerException - An error on the server occurred when trying to process a request.

public ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request)
Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic information for an executed request, you should use this method to retrieve it as soon as possible after executing the request.
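A brief sketch (illustrative only): retrieve the cached metadata, such as the AWS request ID, immediately after the call being debugged:

```java
import com.amazonaws.ResponseMetadata;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClient;
import com.amazonaws.services.machinelearning.model.GetMLModelRequest;

public class ResponseMetadataExample {
    public static void main(String[] args) {
        AmazonMachineLearningClient client = new AmazonMachineLearningClient();

        GetMLModelRequest request = new GetMLModelRequest().withMLModelId("ml-example-model-id");
        client.getMLModel(request);

        // Must be called soon after execution; the cache only holds metadata briefly.
        ResponseMetadata metadata = client.getCachedResponseMetadata(request);
        if (metadata != null) {
            System.out.println("AWS request ID: " + metadata.getRequestId());
        }

        client.shutdown();
    }
}
```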
Specified by: getCachedResponseMetadata in interface AmazonMachineLearning
Parameters: request - The originally executed request