Packages that use Regularization

Package | Description
---|---
org.nd4j.autodiff.samediff |
org.nd4j.linalg.learning.regularization |
Uses of Regularization in org.nd4j.autodiff.samediff

Methods in org.nd4j.autodiff.samediff with parameters of type Regularization

Modifier and Type | Method and Description
---|---
TrainingConfig.Builder | TrainingConfig.Builder.addRegularization(Regularization... regularizations) - Add regularization to all trainable parameters in the network.
TrainingConfig.Builder | TrainingConfig.Builder.regularization(Regularization... regularization) - Set the regularization for all trainable parameters in the network.
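A minimal sketch of how the two builder methods from the table above might be combined; the Adam updater, the coefficient values, and the placeholder names "input" and "label" are illustrative assumptions, not part of this reference:

```java
import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.regularization.L1Regularization;
import org.nd4j.linalg.learning.regularization.L2Regularization;

public class RegularizationBuilderExample {
    public static void main(String[] args) {
        // regularization(...) sets the regularization list; addRegularization(...)
        // appends to whatever has been set so far
        TrainingConfig config = new TrainingConfig.Builder()
                .updater(new Adam(1e-3))                        // coefficient values are assumptions
                .regularization(new L2Regularization(1e-4))     // L2 on all trainable parameters
                .addRegularization(new L1Regularization(1e-5))  // additionally apply L1
                .dataSetFeatureMapping("input")                 // placeholder names are assumptions
                .dataSetLabelMapping("label")
                .build();
    }
}
```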
Method parameters in org.nd4j.autodiff.samediff with type arguments of type Regularization

Modifier and Type | Method and Description
---|---
TrainingConfig.Builder | TrainingConfig.Builder.regularization(List<Regularization> regularization) - Set the regularization for all trainable parameters in the network.
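The List-valued overload accepts a prebuilt list, which is convenient when the regularizers come from external configuration; a sketch under the same assumptions as above:

```java
import java.util.Arrays;
import java.util.List;

import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.regularization.L2Regularization;
import org.nd4j.linalg.learning.regularization.Regularization;
import org.nd4j.linalg.learning.regularization.WeightDecay;

public class ListRegularizationExample {
    public static void main(String[] args) {
        // Build the regularization list up front, e.g. from parsed configuration
        List<Regularization> regs = Arrays.asList(
                new L2Regularization(1e-4),
                new WeightDecay(1e-5, true)); // true: also scale the decay by the learning rate
        TrainingConfig config = new TrainingConfig.Builder()
                .updater(new Adam(1e-3))
                .regularization(regs)
                .dataSetFeatureMapping("input")  // placeholder names are assumptions
                .dataSetLabelMapping("label")
                .build();
    }
}
```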
Constructor parameters in org.nd4j.autodiff.samediff with type arguments of type Regularization

Constructor and Description |
---|
TrainingConfig(IUpdater updater, List<Regularization> regularization, boolean minimize, List<String> dataSetFeatureMapping, List<String> dataSetLabelMapping, List<String> dataSetFeatureMaskMapping, List<String> dataSetLabelMaskMapping, List<String> lossVariables) - Create a training configuration suitable for training both single input/output and multi input/output networks. See also the TrainingConfig.Builder for creating a TrainingConfig. |
TrainingConfig(IUpdater updater, List<Regularization> regularization, boolean minimize, List<String> dataSetFeatureMapping, List<String> dataSetLabelMapping, List<String> dataSetFeatureMaskMapping, List<String> dataSetLabelMaskMapping, List<String> lossVariables, Map<String,List<IEvaluation>> trainEvaluations, Map<String,Integer> trainEvaluationLabels, Map<String,List<IEvaluation>> validationEvaluations, Map<String,Integer> validationEvaluationLabels) |
TrainingConfig(IUpdater updater, List<Regularization> regularization, String dataSetFeatureMapping, String dataSetLabelMapping) - Create a training configuration suitable for training a single input, single output network. See also the TrainingConfig.Builder for creating a TrainingConfig. |
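For comparison with the builder, a sketch of the four-argument constructor for a single input, single output network; the updater choice, the coefficient, and the placeholder names are again assumed for illustration:

```java
import java.util.Collections;
import java.util.List;

import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.regularization.L2Regularization;
import org.nd4j.linalg.learning.regularization.Regularization;

public class TrainingConfigConstructorExample {
    public static void main(String[] args) {
        List<Regularization> regs =
                Collections.<Regularization>singletonList(new L2Regularization(1e-4));
        TrainingConfig config = new TrainingConfig(
                new Adam(1e-3), // updater
                regs,           // regularization for all trainable parameters
                "input",        // dataSetFeatureMapping: feature placeholder name (assumed)
                "label");       // dataSetLabelMapping: label placeholder name (assumed)
    }
}
```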
Uses of Regularization in org.nd4j.linalg.learning.regularization

Classes in org.nd4j.linalg.learning.regularization that implement Regularization

Modifier and Type | Class and Description
---|---
class | L1Regularization - L1 regularization. Implements updating as follows: L = loss + l1 * sum_i abs(w[i]), so w[i] -= updater(gradient[i] + l1 * sign(w[i])), where sign(w[i]) is +/- 1. Note that the L1 regularization term is applied before the updater (Adam/Nesterov/etc.) is applied.
class | L2Regularization - L2 regularization: very similar to WeightDecay, but applied before the updater is applied, not after.
class | WeightDecay - WeightDecay regularization: the updater is not applied to the regularization term gradients, and the learning rate is (optionally) applied to them.
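To make the before/after-updater distinction concrete, a small sketch constructing each implementation, with arbitrary coefficients; the comments restate the semantics from the descriptions above:

```java
import org.nd4j.linalg.learning.regularization.L1Regularization;
import org.nd4j.linalg.learning.regularization.L2Regularization;
import org.nd4j.linalg.learning.regularization.Regularization;
import org.nd4j.linalg.learning.regularization.WeightDecay;

public class RegularizerKinds {
    public static void main(String[] args) {
        // L1: its term is added to the gradient BEFORE the updater (Adam, Nesterov, ...) runs:
        //   w[i] -= updater(gradient[i] + l1 * sign(w[i]))
        Regularization l1 = new L1Regularization(1e-5);

        // L2: like WeightDecay, but its term also passes through the updater
        Regularization l2 = new L2Regularization(1e-4);

        // WeightDecay: its term bypasses the updater entirely; the boolean controls
        // whether the decay is additionally scaled by the learning rate
        Regularization wd = new WeightDecay(1e-4, true);
    }
}
```

The distinction matters most with adaptive updaters such as Adam: an L2 term that passes through the updater is rescaled by the updater's per-parameter statistics, whereas WeightDecay shrinks the weights directly.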
Methods in org.nd4j.linalg.learning.regularization that return Regularization

Modifier and Type | Method and Description
---|---
Regularization | L2Regularization.clone()
Regularization | L1Regularization.clone()
Regularization | WeightDecay.clone()
Regularization | Regularization.clone()
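Since every implementation overrides clone() to return a Regularization, a configuration can be copied defensively; a trivial sketch:

```java
import org.nd4j.linalg.learning.regularization.L2Regularization;
import org.nd4j.linalg.learning.regularization.Regularization;

public class CloneExample {
    public static void main(String[] args) {
        Regularization original = new L2Regularization(1e-4);
        // clone() returns an independent copy, so mutating a shared training
        // configuration elsewhere cannot affect this instance
        Regularization copy = original.clone();
        System.out.println(copy.getClass().getSimpleName()); // prints: L2Regularization
    }
}
```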