org.platanios.tensorflow.api.learn.estimators
Evaluates the model managed by this estimator given the provided evaluation data, data.

The evaluation process is iterative. In each step, a data batch is obtained from data and the internal metric value accumulators are updated. The number of steps to perform is controlled through the maxSteps argument. If it is set to -1, then all batches from data will be processed.
Parameters:
- data: Evaluation dataset. Each element is a tuple over the input and the training inputs (i.e., supervision labels).
- metrics: Evaluation metrics to use.
- maxSteps: Maximum number of evaluation steps to perform. If set to -1, the evaluation process will run until data is exhausted.
- saveSummaries: Boolean indicator specifying whether to save the evaluation results as summaries in the working directory of this estimator.
- name: Name for this evaluation. If provided, it will be used to generate an appropriate directory name for the resulting summaries. If saveSummaries is false, this argument has no effect. This is useful if the user needs to run multiple evaluations on different datasets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard.

Returns: Evaluation metric values at the end of the evaluation process. The returned sequence matches the ordering of metrics.

Throws: InvalidArgumentException, if saveSummaries is true but the estimator has no working directory specified.
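As a sketch of how these arguments fit together, an evaluation call might look as follows. The estimator, the evaluation dataset evalData, and the accuracy metric used here are hypothetical placeholders; exact signatures may differ from the library's.

```scala
// Hypothetical sketch: `estimator` and `evalData` are assumed to exist, and
// argument names follow the parameter descriptions above.
val metricValues = estimator.evaluate(
  data = () => evalData,                        // dataset of (input, label) tuples
  metrics = Seq(tf.metrics.Accuracy("Accuracy")),
  maxSteps = -1L,                               // -1 processes all batches in the dataset
  saveSummaries = true,                         // requires a working directory to be set
  name = "eval_test")                           // summaries land in a separate "eval_test" folder
// `metricValues` matches the ordering of `metrics`, so here it holds the accuracy.
```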
Infers output (i.e., computes predictions) for input using the model managed by this estimator.
input can be of one of the following types:
- A dataset, in which case this method returns an iterator over (input, output) tuples corresponding to each element in the dataset. Note that the predictions are computed lazily in this case, whenever an element is requested from the returned iterator.
- A single input of type IT, in which case this method returns a prediction of type I.

Note that ModelInferenceOutput refers to the tensor type that corresponds to the symbolic type I. For example, if I is (Output, Output), then ModelInferenceOutput will be (Tensor, Tensor).

Parameters:
- input: Input for the predictions.

Returns: Either an iterator over (IT, ModelInferenceOutput) tuples, or a single element of type I, depending on the type of input.
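The two call styles above can be sketched as follows. The names testData and image are hypothetical, and the calls assume the overloads described above; exact signatures may differ.

```scala
// Hypothetical sketch: `estimator`, `testData`, and `image` are assumed to exist.

// Inferring over a dataset returns a lazy iterator of (input, prediction) pairs;
// each prediction is computed only when the element is requested.
val predictions = estimator.infer(() => testData)
predictions.foreach { case (input, output) =>
  println(output)
}

// Inferring for a single input returns a single prediction directly.
val prediction = estimator.infer(image)
```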
Trains the model managed by this estimator.

Parameters:
- data: Training dataset. Each element is a tuple over the input and the training inputs (i.e., supervision labels).
- stopCriteria: Stop criteria to use for stopping the training iteration. For the default criteria, please refer to the documentation of StopCriteria.
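A minimal training call might look like the sketch below. The dataset trainData is a placeholder, and the StopCriteria construction shown is an assumption about its API (a step-count-based criterion); consult the StopCriteria documentation for the actual options.

```scala
// Hypothetical sketch: train until 10000 steps have been performed, assuming
// `StopCriteria` accepts an optional maximum step count.
estimator.train(
  data = () => trainData,
  stopCriteria = StopCriteria(maxSteps = Some(10000L)))
```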
Checkpoint configuration used by this estimator.
Run configuration used for this estimator.
Configuration base for this estimator. This allows for setting up distributed training environments, for example. Note that this is a *base* for a configuration because the estimator might modify it and set some missing fields to appropriate default values, in order to obtain its final configuration, which can be obtained through its configuration field.
Device function used by this estimator for managing replica device placement when using distributed training.
Gets an existing saver from the current graph, or creates a new one if none exists.
Model-generating function. It can optionally take a Configuration argument, which is used to pass the estimator's configuration to the model and allows customizing the model based on the execution environment.
Random seed value to be used by the TensorFlow initializers in this estimator.
Session configuration used by this estimator.
Working directory used by this estimator, used to save model parameters, graph, etc. It can also be used to load checkpoints for a previously saved model.
Abstract class for estimators, which are used to train, use, and evaluate TensorFlow models.

The Estimator class wraps a model which is specified by a modelFunction, which, given inputs and a number of other parameters, creates the ops necessary to perform training, evaluation, or predictions, and provides an interface for doing so.

All outputs (checkpoints, event files, etc.) are written to a working directory, provided by configurationBase, or a subdirectory thereof. If a working directory is not set in configurationBase, a temporary directory is used.

The configurationBase argument can be passed a Configuration object containing information about the execution environment. It is passed on to the modelFunction, if the modelFunction has an argument with Configuration type (and input functions in the same manner). If the configurationBase argument is not passed, it is instantiated by the Estimator. Not passing a configuration means that defaults useful for local execution are used. The Estimator class makes the configuration available to the model (for instance, to allow specialization based on the number of workers available), and also uses some of its fields to control internals, especially regarding saving checkpoints while training.

For models that have hyper-parameters, it is recommended to incorporate them in modelFunction before instantiating an estimator. This is in contrast to the TensorFlow Python API, but the reason behind the divergence is that the estimator class never uses the provided hyper-parameters. The recommended way to deal with hyper-parameters in the Scala API is to create a model function with two parameter lists, the first one being the hyper-parameters and the second one being those supported by the model-generating function (i.e., optionally a Mode and a Configuration).