Package | Description
---|---
org.nd4j.autodiff.samediff |
org.nd4j.linalg.learning |
org.nd4j.linalg.learning.config |
Modifier and Type | Method and Description
---|---
TrainingConfig.Builder | TrainingConfig.Builder.updater(IUpdater updater)
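A minimal sketch of passing an IUpdater to SameDiff training through this builder method. Only Builder.updater(IUpdater) is documented on this page; the dataSetFeatureMapping/dataSetLabelMapping calls and the placeholder names "input" and "label" are assumptions based on typical SameDiff usage, and the learning rate is an illustrative value.

```java
import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Adam;

public class UpdaterBuilderExample {
    public static void main(String[] args) {
        TrainingConfig config = new TrainingConfig.Builder()
                .updater(new Adam(1e-3))         // Builder.updater(IUpdater), documented above
                .dataSetFeatureMapping("input")  // assumed builder method and placeholder name
                .dataSetLabelMapping("label")    // assumed builder method and placeholder name
                .build();
    }
}
```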
Constructor and Description |
---|
TrainingConfig(IUpdater updater, List<Regularization> regularization, boolean minimize, List<String> dataSetFeatureMapping, List<String> dataSetLabelMapping, List<String> dataSetFeatureMaskMapping, List<String> dataSetLabelMaskMapping, List<String> lossVariables) Create a training configuration suitable for training both single input/output and multi input/output networks. See also the TrainingConfig.Builder for creating a TrainingConfig. |
TrainingConfig(IUpdater updater, List<Regularization> regularization, boolean minimize, List<String> dataSetFeatureMapping, List<String> dataSetLabelMapping, List<String> dataSetFeatureMaskMapping, List<String> dataSetLabelMaskMapping, List<String> lossVariables, Map<String,List<IEvaluation>> trainEvaluations, Map<String,Integer> trainEvaluationLabels, Map<String,List<IEvaluation>> validationEvaluations, Map<String,Integer> validationEvaluationLabels) |
TrainingConfig(IUpdater updater, List<Regularization> regularization, String dataSetFeatureMapping, String dataSetLabelMapping) Create a training configuration suitable for training a single input, single output network. See also the TrainingConfig.Builder for creating a TrainingConfig. |
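A hedged sketch of the single input, single output constructor from the last table row. Only the constructor signature comes from this page; the L2Regularization class, its constructor argument, and the placeholder names "input" and "label" are assumptions, with illustrative values.

```java
import java.util.Collections;

import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.learning.regularization.L2Regularization;

public class SimpleTrainingConfigExample {
    public static void main(String[] args) {
        TrainingConfig config = new TrainingConfig(
                new Sgd(0.01),                                          // IUpdater
                Collections.singletonList(new L2Regularization(1e-4)),  // List<Regularization>; class assumed
                "input",                                                // feature placeholder name (assumed)
                "label");                                               // label placeholder name (assumed)
    }
}
```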
Modifier and Type | Interface and Description
---|---
interface | GradientUpdater<T extends IUpdater> Gradient modifications: calculates an update and tracks related information about gradient changes over time, for handling updates.
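A minimal sketch of the config/updater split this interface implies: an IUpdater such as Adam is a stateless configuration, and instantiating it yields the stateful GradientUpdater that applies updates. The stateSize, instantiate, and applyUpdater calls are assumptions drawn from the wider ND4J learning API rather than from this page, and all values are illustrative.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.learning.GradientUpdater;
import org.nd4j.linalg.learning.config.Adam;

public class GradientUpdaterExample {
    public static void main(String[] args) {
        Adam adamConfig = new Adam(1e-3);
        long numParams = 10;

        // stateSize(...) reports how much state the updater keeps (Adam: m and v per parameter).
        INDArray state = Nd4j.create(1, adamConfig.stateSize(numParams));

        // instantiate(...) turns the stateless config into a stateful GradientUpdater.
        GradientUpdater<Adam> updater = adamConfig.instantiate(state, true);

        // applyUpdater modifies the gradient in place, turning it into the actual update.
        INDArray gradient = Nd4j.rand(1, (int) numParams);
        updater.applyUpdater(gradient, /* iteration */ 0, /* epoch */ 0);
    }
}
```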
Modifier and Type | Class and Description
---|---
class | AdaDelta The AdaDelta updater. See https://www.matthewzeiler.com/mattzeiler/adadelta.pdf and https://arxiv.org/pdf/1212.5701v1.pdf
class | AdaGrad Vectorized learning rate, maintained per connection weight. Adapted from: http://xcorr.net/2014/01/23/adagrad-eliminating-learning-rates-in-stochastic-gradient-descent/ See also http://cs231n.github.io/neural-networks-3/#ada
class | Adam The Adam updater.
class | AdaMax The AdaMax updater, a variant of Adam.
class | AMSGrad The AMSGrad updater. Reference: On the Convergence of Adam and Beyond - https://openreview.net/forum?id=ryQu7f-RZ
class | Nadam The Nadam updater: Adam with Nesterov momentum.
class | Nesterovs Nesterov's momentum updater.
class | NoOp NoOp updater: a gradient updater that makes no changes to the gradient.
class | RmsProp The RMSProp updater.
class | Sgd SGD updater: applies a learning rate only.
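All of the classes above are interchangeable configuration objects wherever an IUpdater is accepted. A short sketch; the constructor arguments shown are illustrative values, not library defaults.

```java
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.IUpdater;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.learning.config.NoOp;
import org.nd4j.linalg.learning.config.Sgd;

public class UpdaterChoices {
    public static void main(String[] args) {
        IUpdater adam = new Adam(1e-3);                // adaptive moment estimation
        IUpdater sgd = new Sgd(0.01);                  // plain SGD, learning rate only
        IUpdater nesterovs = new Nesterovs(0.01, 0.9); // learning rate, momentum
        IUpdater noOp = new NoOp();                    // leaves gradients unchanged
    }
}
```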
Modifier and Type | Method and Description
---|---
IUpdater | IUpdater.clone() Clone the updater.
IUpdater | AdaMax.clone()
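A brief usage sketch for clone(): because IUpdater implementations are mutable configuration objects, clone() returns an independent copy, so a shared template can be reused without sharing later changes. The value below is illustrative.

```java
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.IUpdater;

public class CloneExample {
    public static void main(String[] args) {
        IUpdater template = new Adam(1e-3);
        IUpdater copy = template.clone(); // independent copy; mutating one does not affect the other
    }
}
```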