Modifier and Type | Method and Description |
---|---|
Map<String,List<IEvaluation>> |
ListenerEvaluations.trainEvaluations()
Get the requested training evaluations
|
Map<String,List<IEvaluation>> |
ListenerEvaluations.validationEvaluations()
Get the requested validation evaluations
|
Modifier and Type | Method and Description |
---|---|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.addEvaluations(boolean validation,
String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested evaluations for a param/variable, for either training or validation
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.trainEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested training evaluations for a param/variable
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.trainEvaluation(String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested training evaluations for a param/variable
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.validationEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested validation evaluations for a param/variable
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.validationEvaluation(String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested validation evaluations for a param/variable
|
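The builder methods listed above can be chained to request evaluations for training and validation. The sketch below assumes the package paths, a no-arg `Builder` constructor, and a `build()` method, none of which are shown in the table; the `trainEvaluation`/`validationEvaluation` signatures are taken from the listing.

```java
// Sketch only: package paths, the no-arg Builder constructor, and build()
// are assumptions; the builder methods come from the table above.
import org.nd4j.autodiff.listeners.ListenerEvaluations;
import org.nd4j.evaluation.classification.Evaluation;

class ListenerEvaluationsSketch {
    static ListenerEvaluations build() {
        return new ListenerEvaluations.Builder()
                // evaluate the "softmax" variable against label index 0 during training
                .trainEvaluation("softmax", 0, new Evaluation())
                // and the same variable during validation
                .validationEvaluation("softmax", 0, new Evaluation())
                .build();
    }
}
```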
Constructor and Description |
---|
ListenerEvaluations(Map<String,List<IEvaluation>> trainEvaluations,
Map<String,Integer> trainEvaluationLabels,
Map<String,List<IEvaluation>> validationEvaluations,
Map<String,Integer> validationEvaluationLabels) |
Modifier and Type | Method and Description |
---|---|
<T extends IEvaluation<T>> |
EvaluationRecord.evaluation(Class<T> evalClass)
Get the evaluation of a given type
|
<T extends IEvaluation> |
EvaluationRecord.evaluation(SDVariable param)
Get the evaluation for a given param/variable
|
<T extends IEvaluation<T>> |
EvaluationRecord.evaluation(SDVariable param,
Class<T> evalClass)
Get the evaluation of a given type, for a given param/variable
|
<T extends IEvaluation> |
EvaluationRecord.evaluation(String param)
Get the evaluation for a given param/variable
|
<T extends IEvaluation<T>> |
EvaluationRecord.evaluation(String param,
Class<T> evalClass)
Get the evaluation of a given type, for a given param/variable
|
Modifier and Type | Method and Description |
---|---|
IEvaluation |
EvaluationRecord.evaluation(SDVariable param,
int index)
Get the evaluation for param at the specified index
|
IEvaluation |
EvaluationRecord.evaluation(String param,
int index)
Get the evaluation for param at the specified index
|
Modifier and Type | Method and Description |
---|---|
Map<String,List<IEvaluation>> |
EvaluationRecord.evaluations()
Get all evaluations
|
List<IEvaluation> |
EvaluationRecord.evaluations(SDVariable param)
Get evaluations for a given param/variable
|
List<IEvaluation> |
EvaluationRecord.evaluations(String param)
Get evaluations for a given param/variable
|
List<IEvaluation> |
History.trainingEval(SDVariable param)
Get the results of a training evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<IEvaluation> |
History.trainingEval(SDVariable param,
int index)
Get the results of a training evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<IEvaluation> |
History.trainingEval(String param)
Get the results of a training evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<IEvaluation> |
History.trainingEval(String param,
int index)
Get the results of a training evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<IEvaluation> |
History.validationEval(SDVariable param)
Get the results of a validation evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<IEvaluation> |
History.validationEval(SDVariable param,
int index)
Get the results of a validation evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<IEvaluation> |
History.validationEval(String param)
Get the results of a validation evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<IEvaluation> |
History.validationEval(String param,
int index)
Get the results of a validation evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
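The `History` accessors above retrieve recorded evaluation results after training. A minimal sketch, assuming the package paths and a variable named "softmax" (the method signatures are from the table; `accuracy()` is a standard `Evaluation` metric method):

```java
// Sketch only: package paths and the "softmax" variable name are assumptions.
import java.util.List;
import org.nd4j.autodiff.listeners.records.History;
import org.nd4j.evaluation.IEvaluation;
import org.nd4j.evaluation.classification.Evaluation;

class HistorySketch {
    static void report(History history) {
        // single-evaluation case: only one evaluation was registered for "softmax"
        List<IEvaluation> train = history.trainingEval("softmax");
        // multi-evaluation case: select the evaluation at index 0 explicitly
        List<IEvaluation> validation = history.validationEval("softmax", 0);
        // each list entry corresponds to one recorded epoch
        Evaluation last = (Evaluation) train.get(train.size() - 1);
        System.out.println(last.accuracy());
    }
}
```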
Constructor and Description |
---|
EvaluationRecord(Map<String,List<IEvaluation>> evaluations) |
Modifier and Type | Method and Description |
---|---|
TrainingConfig.Builder |
TrainingConfig.Builder.addEvaluations(boolean validation,
String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested evaluations for a param/variable, for either training or validation.
|
void |
SameDiff.evaluate(DataSetIterator iterator,
String outputVariable,
IEvaluation... evaluations)
|
void |
SameDiff.evaluate(DataSetIterator iterator,
String outputVariable,
List<Listener> listeners,
IEvaluation... evaluations)
Evaluate the performance of a single variable's prediction.
For example, if the variable to evaluate was called "softmax" you would use: |
void |
SameDiff.evaluate(MultiDataSetIterator iterator,
String outputVariable,
int labelIndex,
IEvaluation... evaluations)
|
void |
SameDiff.evaluate(MultiDataSetIterator iterator,
String outputVariable,
int labelIndex,
List<Listener> listeners,
IEvaluation... evaluations)
Evaluate the performance of a single variable's prediction.
For example, if the variable to evaluate was called "softmax" you would use: |
TrainingConfig.Builder |
TrainingConfig.Builder.trainEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History training evaluations for a param/variable.
|
TrainingConfig.Builder |
TrainingConfig.Builder.trainEvaluation(String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested History training evaluations for a param/variable.
|
TrainingConfig.Builder |
TrainingConfig.Builder.validationEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History validation evaluations for a param/variable.
|
TrainingConfig.Builder |
TrainingConfig.Builder.validationEvaluation(String variableName,
int labelIndex,
IEvaluation... evaluations)
Add requested History validation evaluations for a param/variable.
|
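The `TrainingConfig.Builder` evaluation methods above plug into the usual training configuration. A sketch, assuming the package paths and the `updater`/`dataSetFeatureMapping`/`dataSetLabelMapping` builder methods (standard `TrainingConfig` setup, but not shown in this table), with the evaluation calls taken from the listing:

```java
// Sketch only: package paths and the non-evaluation builder methods are
// assumptions; trainEvaluation/validationEvaluation come from the table above.
import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.learning.config.Adam;

class TrainingConfigSketch {
    static TrainingConfig build() {
        return new TrainingConfig.Builder()
                .updater(new Adam(1e-3))
                .dataSetFeatureMapping("input")
                .dataSetLabelMapping("label")
                // record an Evaluation of "softmax" against label index 0 while training
                .trainEvaluation("softmax", 0, new Evaluation())
                // and against the validation set
                .validationEvaluation("softmax", 0, new Evaluation())
                .build();
    }
}
```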
Modifier and Type | Method and Description |
---|---|
void |
SameDiff.evaluate(DataSetIterator iterator,
Map<String,IEvaluation> variableEvals,
Listener... listeners)
Evaluation for multiple-output networks.
See SameDiff.evaluate(MultiDataSetIterator, Map, Map, Listener[]). |
void |
SameDiff.evaluate(MultiDataSetIterator iterator,
Map<String,List<IEvaluation>> variableEvals,
Map<String,Integer> predictionLabelMapping,
Listener... listeners)
Perform evaluation using classes such as Evaluation for classifier outputs and RegressionEvaluation for regression outputs.
Example: classifier evaluation. Predictions variable name: "softmaxOutput". Evaluations to perform: Evaluation. Data: single-input, single-output MultiDataSets. |
void |
SameDiff.evaluateMultiple(DataSetIterator iterator,
Map<String,List<IEvaluation>> variableEvals,
Listener... listeners)
Evaluation for networks with one or more outputs.
|
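The single-variable `SameDiff.evaluate` overload above can be used as follows; the package paths and the "softmax" variable name are assumptions, while the `evaluate(DataSetIterator, String, IEvaluation...)` signature is taken from the table:

```java
// Sketch only: package paths and the "softmax" variable name are assumptions.
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

class EvaluateSketch {
    static Evaluation evaluate(SameDiff sd, DataSetIterator testData) {
        Evaluation eval = new Evaluation();
        // evaluate the "softmax" variable's predictions against the iterator's labels;
        // results are accumulated into the passed-in Evaluation instance
        sd.evaluate(testData, "softmax", eval);
        return eval;
    }
}
```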
Constructor and Description |
---|
TrainingConfig(IUpdater updater,
List<Regularization> regularization,
boolean minimize,
List<String> dataSetFeatureMapping,
List<String> dataSetLabelMapping,
List<String> dataSetFeatureMaskMapping,
List<String> dataSetLabelMaskMapping,
List<String> lossVariables,
Map<String,List<IEvaluation>> trainEvaluations,
Map<String,Integer> trainEvaluationLabels,
Map<String,List<IEvaluation>> validationEvaluations,
Map<String,Integer> validationEvaluationLabels) |
Modifier and Type | Method and Description |
---|---|
EvaluationConfig |
EvaluationConfig.evaluate(SDVariable variable,
IEvaluation... evaluations)
|
EvaluationConfig |
EvaluationConfig.evaluate(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
|
EvaluationConfig |
EvaluationConfig.evaluate(String param,
IEvaluation... evaluations)
Add evaluations to be performed on a specified variable, without setting a label index.
|
EvaluationConfig |
EvaluationConfig.evaluate(String param,
int labelIndex,
IEvaluation... evaluations)
Add evaluations to be performed on a specified variable, and set that variable's label index.
|
Modifier and Type | Interface and Description |
---|---|
interface |
IEvaluation<T extends IEvaluation>
A general-purpose interface for evaluating neural networks - methods are shared by implementations such as
Evaluation , RegressionEvaluation , ROC , ROCMultiClass |
Modifier and Type | Class and Description |
---|---|
class |
BaseEvaluation<T extends BaseEvaluation>
BaseEvaluation implements common evaluation functionality (for time series, etc.) for
Evaluation ,
RegressionEvaluation , ROC , ROCMultiClass etc. |
Modifier and Type | Method and Description |
---|---|
protected static <T extends IEvaluation> |
BaseEvaluation.attempFromLegacyFromJson(String json,
org.nd4j.shade.jackson.databind.exc.InvalidTypeIdException originalException)
Attempt to load DL4J IEvaluation JSON from 1.0.0-beta2 or earlier.
|
static <T extends IEvaluation> |
BaseEvaluation.fromJson(String json,
Class<T> clazz) |
static <T extends IEvaluation> |
BaseEvaluation.fromYaml(String yaml,
Class<T> clazz) |
Modifier and Type | Method and Description |
---|---|
Class<? extends IEvaluation> |
IMetric.getEvaluationClass()
The
IEvaluation class this metric is for |
Modifier and Type | Class and Description |
---|---|
class |
Evaluation
Evaluation metrics:
- Precision, recall, F1, fBeta, accuracy, Matthews correlation coefficient, gMeasure
- Top-N accuracy (if using constructor Evaluation.Evaluation(List, int))
- Custom binary evaluation decision threshold (use constructor Evaluation.Evaluation(double); default if not set is argmax / 0.5)
- Custom cost array, using Evaluation.Evaluation(INDArray) or Evaluation.Evaluation(List, INDArray) for multi-class
Note: Care should be taken when using the Evaluation class for binary classification metrics such as F1, precision, recall, etc. |
class |
EvaluationBinary
EvaluationBinary: used for evaluating networks with binary classification outputs.
|
class |
EvaluationCalibration
EvaluationCalibration is an evaluation class designed to analyze the calibration of a classifier.
It provides a number of tools for this purpose:
- Counts of the number of labels and predictions for each class
- Reliability diagram (or reliability curve)
- Residual plot (histogram)
- Histograms of probabilities, including probabilities for each class separately
References:
- Reliability diagram: see for example Niculescu-Mizil and Caruana 2005, Predicting Good Probabilities With Supervised Learning
- Residual plot: see Wallace and Dahabreh 2012, Class Probability Estimates are Unreliable for Imbalanced Data (and How to Fix Them) |
class |
ROC
ROC (Receiver Operating Characteristic) for binary classifiers.
ROC has 2 modes of operation: (a) Thresholded (less memory) and (b) Exact (default; use numSteps == 0 to set). |
class |
ROCBinary
ROC (Receiver Operating Characteristic) for multi-task binary classifiers.
|
class |
ROCMultiClass
ROC (Receiver Operating Characteristic) for multi-class classifiers.
|
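Outside of SameDiff, the evaluation classes listed above can also be used directly by feeding them label and prediction arrays. A sketch using `Evaluation`; the package paths are assumptions, while `eval(INDArray, INDArray)` and `stats()` are standard methods on this class:

```java
// Sketch only: package paths are assumptions.
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

class EvaluationClassesSketch {
    public static void main(String[] args) {
        Evaluation eval = new Evaluation();
        // one-hot labels and class-probability predictions: 2 examples, 3 classes
        INDArray labels = Nd4j.create(new double[][]{{0, 1, 0}, {1, 0, 0}});
        INDArray predictions = Nd4j.create(new double[][]{{0.1, 0.8, 0.1}, {0.7, 0.2, 0.1}});
        eval.eval(labels, predictions);
        // prints the summary statistics (precision, recall, F1, accuracy, ...)
        System.out.println(eval.stats());
    }
}
```

The other classes in the table (ROC, EvaluationBinary, EvaluationCalibration, etc.) follow the same accumulate-then-report pattern.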
Modifier and Type | Method and Description |
---|---|
Class<? extends IEvaluation> |
ROC.Metric.getEvaluationClass() |
Class<? extends IEvaluation> |
ROCBinary.Metric.getEvaluationClass() |
Class<? extends IEvaluation> |
EvaluationBinary.Metric.getEvaluationClass() |
Class<? extends IEvaluation> |
ROCMultiClass.Metric.getEvaluationClass() |
Class<? extends IEvaluation> |
Evaluation.Metric.getEvaluationClass() |
Modifier and Type | Class and Description |
---|---|
class |
CustomEvaluation<T>
An evaluation using lambdas to calculate the score.
|
Modifier and Type | Method and Description |
---|---|
Class<? extends IEvaluation> |
CustomEvaluation.Metric.getEvaluationClass() |
Modifier and Type | Class and Description |
---|---|
class |
RegressionEvaluation
Evaluation method for the evaluation of regression algorithms.
Provides the following metrics, for each column:
- MSE: mean squared error
- MAE: mean absolute error
- RMSE: root mean squared error
- RSE: relative squared error
- PC: Pearson correlation coefficient
- R^2: coefficient of determination
See for example: http://www.saedsayad.com/model_evaluation_r.htm
For classification, see Evaluation |
Modifier and Type | Method and Description |
---|---|
Class<? extends IEvaluation> |
RegressionEvaluation.Metric.getEvaluationClass() |
Copyright © 2019. All rights reserved.