org.platanios.tensorflow.api.learn.estimators
Checkpoint configuration used by this estimator.
Run configuration used for this estimator.
Configuration base for this estimator. This allows for setting up distributed training environments, for example. Note that this is a *base* for a configuration, because the estimator might modify it and set some missing fields to appropriate default values, in order to obtain its final configuration, which can be obtained through its `configuration` field.
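For illustration, a configuration base might be constructed and passed to the estimator roughly as follows (a hedged sketch: the exact fields of `tf.learn.Configuration` and the estimator constructor vary across library versions, and `model` is assumed to be defined elsewhere):

```scala
import java.nio.file.Paths
import org.platanios.tensorflow.api._

// Only some fields are set here; the estimator fills in the remaining
// defaults to derive its final `configuration`.
val configurationBase = tf.learn.Configuration(
  workingDir = Some(Paths.get("/tmp/my_model")), // checkpoints and summaries
  randomSeed = Some(42))                         // seed for TF initializers

val estimator = tf.learn.FileBasedEstimator(model, configurationBase)
```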
Device function used by this estimator for managing replica device placement when using distributed training.
Evaluates the model managed by this estimator given the provided evaluation data, `data`.
The evaluation process is iterative. In each step, a data batch is obtained from `data` and internal metric value accumulators are updated. The number of steps to perform is controlled through the `maxSteps` argument. If set to `-1`, then all batches from `data` will be processed.

If `metrics` is provided, it overrides the value provided in the constructor of this estimator.
Evaluation dataset. Each element is a tuple over input and training inputs (i.e., supervision labels).
Evaluation metrics to use.
Maximum number of evaluation steps to perform. If `-1`, the evaluation process will run until `data` is exhausted.
Boolean indicator specifying whether to save the evaluation results as summaries in the working directory of this estimator.
Name for this evaluation. If provided, it will be used to generate an appropriate directory name for the resulting summaries. If `saveSummaries` is `false`, this argument has no effect. This is useful if the user needs to run multiple evaluations on different data sets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Evaluation metric values at the end of the evaluation process. The return sequence matches the ordering of `metrics`.
InvalidArgumentException
If `saveSummaries` is `true`, but the estimator has no working directory specified.
Hooks to use while evaluating.
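As a hedged usage sketch for this overload (dataset and metric construction are elided; `evalDataset` and `accuracy` are placeholder names, and the exact parameter list may differ across library versions):

```scala
// Run at most 1000 evaluation steps; pass -1L to process all of `evalDataset`.
val metricValues = estimator.evaluate(
  data = () => evalDataset,
  metrics = Seq(accuracy),
  maxSteps = 1000L,
  saveSummaries = true,
  name = "validation") // summaries go to a "validation"-specific folder

// The returned sequence follows the ordering of `metrics`, so
// metricValues(0) corresponds to `accuracy`.
```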
Evaluates the model managed by this estimator given the provided evaluation data, `data`.
This method requires that a checkpoint can be found in either `checkpointPath`, if provided, or in this estimator's working directory. It first loads the trained parameter values from the checkpoint specified by `checkpointPath`, or from the latest checkpoint found in the working directory, and it then computes the metric values over `data`.
The evaluation process is iterative. In each step, a data batch is obtained from `data` and internal metric value accumulators are updated. The number of steps to perform is controlled through the `maxSteps` argument. If set to `-1`, then all batches from `data` will be processed.

If `hooks` or `metrics` are provided, they override the values provided in the constructor of this estimator.
Evaluation dataset. Each element is a tuple over input and training inputs (i.e., supervision labels).
Evaluation metrics to use.
Maximum number of evaluation steps to perform. If `-1`, the evaluation process will run until `data` is exhausted.
Hooks to use while evaluating.
Path to a checkpoint file to use. If `null`, then the latest checkpoint found in this estimator's working directory will be used.
Boolean indicator specifying whether to save the evaluation results as summaries in the working directory of this estimator.
Name for this evaluation. If provided, it will be used to generate an appropriate directory name for the resulting summaries. If `saveSummaries` is `false`, this argument has no effect. This is useful if the user needs to run multiple evaluations on different data sets, such as on training data vs. test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Evaluation metric values at the end of the evaluation process. The return sequence matches the ordering of `metrics`.
CheckpointNotFoundException
If no checkpoint could be found. This can happen if `checkpointPath` is `null` and no checkpoint could be found in this estimator's working directory.
InvalidArgumentException
If `saveSummaries` is `true`, but the estimator has no working directory specified.
Evaluation metrics to use.
Gets an existing saver from the current graph, or creates a new one if none exists.
Infers output (i.e., computes predictions) for `input` using the model managed by this estimator.
`input` can be of one of the following types:
- A dataset, in which case this method returns an iterator over `(input, output)` tuples corresponding to each element in the dataset. Note that the predictions are computed lazily in this case, whenever an element is requested from the returned iterator.
- `IT`, in which case this method returns a prediction of type `I`.

Note that `ModelInferenceOutput` refers to the tensor type that corresponds to the symbolic type `I`. For example, if `I` is `(Output, Output)`, then `ModelInferenceOutput` will be `(Tensor, Tensor)`.
Input for the predictions.
Either an iterator over `(IT, ModelInferenceOutput)` tuples, or a single element of type `I`, depending on the type of `input`.
Hooks to use while inferring.
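The two input forms can be sketched as follows (hedged: `testDataset` and `imageTensor` are placeholders, and signatures may differ slightly across library versions):

```scala
// A dataset input yields a lazy iterator over (input, prediction) tuples;
// each prediction is computed only when the element is requested.
val predictions = estimator.infer(() => testDataset)
predictions.take(10).foreach { case (input, prediction) =>
  // process each (IT, ModelInferenceOutput) pair
}

// A single-element input of type IT yields a single prediction of type I.
val singlePrediction = estimator.infer(imageTensor)
```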
Infers output (i.e., computes predictions) for `input` using the model managed by this estimator.
This method requires that a checkpoint can be found in either `checkpointPath`, if provided, or in this estimator's working directory. It first loads the trained parameter values from the checkpoint specified by `checkpointPath`, or from the latest checkpoint found in the working directory, and it then computes predictions for `input`.
`input` can be of one of the following types:
- A dataset, in which case this method returns an iterator over `(input, output)` tuples corresponding to each element in the dataset. Note that the predictions are computed lazily in this case, whenever an element is requested from the returned iterator.
- `IT`, in which case this method returns a prediction of type `I`.

Note that `ModelInferenceOutput` refers to the tensor type that corresponds to the symbolic type `I`. For example, if `I` is `(Output, Output)`, then `ModelInferenceOutput` will be `(Tensor, Tensor)`.
If `hooks` is provided, it overrides the value provided in the constructor of this estimator.
Input for the predictions.
Hooks to use while making predictions.
Path to a checkpoint file to use. If `null`, then the latest checkpoint found in this estimator's working directory will be used.
Either an iterator over `(IT, ModelInferenceOutput)` tuples, or a single element of type `I`, depending on the type of `input`.
CheckpointNotFoundException
If no checkpoint could be found. This can happen if `checkpointPath` is `null` and no checkpoint could be found in this estimator's working directory.
Model-generating function that can optionally take a `Configuration` argument, which will be used to pass the estimator's configuration to the model and thus allows customizing the model based on the execution environment.
Random seed value to be used by the TensorFlow initializers in this estimator.
Session configuration used by this estimator.
TensorBoard configuration to use while training. If provided, a TensorBoard server is launched while training, using the provided configuration. In that case, it is required that TensorBoard is installed for the default Python environment in the system. If training in a distributed setting, the TensorBoard server is launched on the chief node.
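As a hedged sketch, a TensorBoard configuration might be passed to `train` roughly as follows (field names such as `port` and `reloadInterval` are assumptions; check the library version you use):

```scala
// Launch a TensorBoard server on port 6006 while training, reloading
// summaries every 5 seconds. Requires TensorBoard to be installed for the
// system's default Python environment.
estimator.train(
  data = () => trainDataset,
  stopCriteria = tf.learn.StopCriteria(maxSteps = Some(10000L)),
  tensorBoardConfig = tf.learn.TensorBoardConfig(
    logDir = summariesDir, port = 6006, reloadInterval = 5))
```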
Trains the model managed by this estimator.
Training dataset. Each element is a tuple over input and training inputs (i.e., supervision labels).
Stop criteria to use for stopping the training iteration. For the default criteria, please refer to the documentation of `StopCriteria`.
Hooks to use while training for the chief node only. This argument is only useful for a distributed training setting.
Hooks to use while training (e.g., logging for the loss function value, etc.).
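A minimal training call might look as follows (hedged sketch; `trainDataset` is a placeholder and the default `StopCriteria` fields may differ across library versions):

```scala
// Train until the global step reaches 10000. Other criteria (e.g.,
// loss-change-based stopping) are available through StopCriteria.
estimator.train(
  data = () => trainDataset,
  stopCriteria = tf.learn.StopCriteria(maxSteps = Some(10000L)))
```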
Trains the model managed by this estimator.
NOTE: If you provide any summary saver or checkpoint saver hooks in `hooks` or `chiefOnlyHooks`, then the checkpoint configuration in this estimator's `configuration` will be ignored for the chief, and those hooks will be used instead.

If any of `hooks`, `chiefOnlyHooks`, or `tensorBoardConfig` are provided, they override the values provided in the constructor of this estimator.
Training dataset. Each element is a tuple over input and training inputs (i.e., supervision labels).
Stop criteria to use for stopping the training iteration. For the default criteria, please refer to the documentation of `StopCriteria`.
Hooks to use while training (e.g., logging for the loss function value, etc.).
Hooks to use while training for the chief node only. This argument is only useful for a distributed training setting.
If provided, a TensorBoard server is launched using the provided configuration. In that case, it is required that TensorBoard is installed for the default Python environment in the system. If training in a distributed setting, the TensorBoard server is launched on the chief node.
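With hooks, a training call might be sketched as follows (hedged: hook names such as `StepRateLogger`, `SummarySaver`, and `CheckpointSaver` follow the library's published examples but may differ across versions). Note that providing `SummarySaver` or `CheckpointSaver` here causes the estimator's own checkpoint configuration to be ignored for the chief:

```scala
estimator.train(
  data = () => trainDataset,
  stopCriteria = tf.learn.StopCriteria(maxSteps = Some(10000L)),
  hooks = Seq(
    // Measure the step rate and save it as a summary every 100 steps.
    tf.learn.StepRateLogger(
      log = false, summaryDir = summariesDir,
      trigger = tf.learn.StepHookTrigger(100)),
    // Save summaries every 100 steps and a checkpoint every 1000 steps;
    // these hooks take precedence over the estimator's checkpoint config.
    tf.learn.SummarySaver(summariesDir, tf.learn.StepHookTrigger(100)),
    tf.learn.CheckpointSaver(workingDir, tf.learn.StepHookTrigger(1000))))
```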
Working directory used by this estimator, used to save model parameters, graph, etc. It can also be used to load checkpoints for a previously saved model.
File-based estimator which is used to train, use, and evaluate TensorFlow models, and which uses checkpoint files for storing and retrieving its state. This means that checkpoint files are written after every call to `train()` and are loaded on every call to `infer()` or `evaluate()`.
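Putting the pieces together, an end-to-end sketch (loosely following the library's published MNIST example; layer and dataset names are placeholders, and exact type signatures vary across library versions):

```scala
import java.nio.file.Paths
import org.platanios.tensorflow.api._

// Symbolic inputs: flattened 28x28 images and integer labels.
val input      = tf.learn.Input(FLOAT32, Shape(-1, 784))
val trainInput = tf.learn.Input(INT64, Shape(-1))

// A single linear layer producing 10 logits.
val layer = tf.learn.Flatten[Float]("Flatten") >>
  tf.learn.Linear[Float]("Linear", 10)

val loss = tf.learn.SparseSoftmaxCrossEntropy[Float, Long, Float]("Loss") >>
  tf.learn.Mean("Mean")
val optimizer = tf.learn.GradientDescent(0.1f)

val model = tf.learn.Model.simpleSupervised(
  input, trainInput, layer, loss, optimizer)

// Checkpoints are written under the working directory after train(), and
// loaded again on every infer()/evaluate() call.
val estimator = tf.learn.FileBasedEstimator(
  model, tf.learn.Configuration(Some(Paths.get("/tmp/mnist"))))
estimator.train(
  () => trainDataset,
  tf.learn.StopCriteria(maxSteps = Some(10000L)))
```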