public interface Regularization extends Serializable
All Known Implementing Classes: L1Regularization, L2Regularization, WeightDecay
Modifier and Type | Interface and Description |
---|---|
static class | Regularization.ApplyStep: ApplyStep determines how the regularization interacts with the optimization process, i.e., when it is applied relative to updaters such as Adam, Nesterov momentum, and SGD. |
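To make the ApplyStep contract concrete, here is a minimal sketch of how a training loop might filter regularizations by step: it would call the helper once with the pre-updater step, run the updater, then call it again with the post-updater step. The class and helper names below are illustrative, not part of the API.

```java
import java.util.List;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.learning.regularization.Regularization;

public class RegularizationStepSketch {

    // Apply only the regularizations whose applyStep() matches the given step.
    // A training loop would invoke this once before the updater runs and once
    // after, so each regularization fires at the step it declared.
    static void applyAll(List<Regularization> regs, Regularization.ApplyStep step,
                         INDArray param, INDArray gradView,
                         double lr, int iteration, int epoch) {
        for (Regularization r : regs) {
            if (r.applyStep() == step) {
                r.apply(param, gradView, lr, iteration, epoch);
            }
        }
    }
}
```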
Modifier and Type | Method and Description |
---|---|
void | apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch): Apply the regularization by modifying the gradient array in-place. |
Regularization.ApplyStep | applyStep() |
Regularization | clone() |
double | score(INDArray param, int iteration, int epoch): Calculate the loss function score component for the regularization. For example, in L2 regularization this would return L = 0.5 * sum_i param[i]^2. For regularization types that don't have a score component, this method can return 0. |
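As a worked example of the interface as a whole, the following is a minimal sketch of a custom implementation: a fixed-coefficient L2-style penalty. The class name SimpleL2 is hypothetical, and the ApplyStep constant used (BEFORE_UPDATER) is an assumption about the enum's values.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.learning.regularization.Regularization;

public class SimpleL2 implements Regularization {
    private final double coeff;

    public SimpleL2(double coeff) {
        this.coeff = coeff;
    }

    @Override
    public ApplyStep applyStep() {
        // Assumed constant name: L2-style penalties are typically combined with
        // the raw gradient before the updater (Adam, SGD, etc.) transforms it
        return ApplyStep.BEFORE_UPDATER;
    }

    @Override
    public void apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch) {
        // L = 0.5 * coeff * sum_i param[i]^2  =>  dL/dparam[i] = coeff * param[i]
        // Modify the gradient view in place, as the contract requires
        gradView.addi(param.mul(coeff));
    }

    @Override
    public double score(INDArray param, int iteration, int epoch) {
        // 0.5 * coeff * sum_i param[i]^2
        return 0.5 * coeff * param.mul(param).sumNumber().doubleValue();
    }

    @Override
    public Regularization clone() {
        return new SimpleL2(coeff);
    }
}
```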
Regularization.ApplyStep applyStep()

Returns: the Regularization.ApplyStep that determines when this regularization is applied relative to the updater.
void apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch)

Apply the regularization by modifying the gradient array in-place.

Parameters:
param - Input array (usually parameters)
gradView - Gradient view array (should be modified/updated). Same shape and type as the input array.
lr - Current learning rate
iteration - Current network training iteration
epoch - Current network training epoch
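A quick usage sketch of the in-place contract, reusing the hypothetical SimpleL2 class from above: the gradient view is mutated, the parameter array is not.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ApplyDemo {
    public static void main(String[] args) {
        INDArray param = Nd4j.create(new double[]{1.0, -2.0, 3.0});
        INDArray grad  = Nd4j.create(new double[]{0.1, 0.1, 0.1});

        // coeff = 0.01; lr/iteration/epoch are unused by this particular sketch
        new SimpleL2(0.01).apply(param, grad, 0.001, 0, 0);

        // grad was modified in place: grad[i] += 0.01 * param[i]
        System.out.println(grad);   // [0.11, 0.08, 0.13]
    }
}
```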
double score(INDArray param, int iteration, int epoch)

Calculate the loss function score component for the regularization. For example, in L2 regularization this would return L = 0.5 * sum_i param[i]^2. For regularization types that don't have a score component, this method can return 0.

Parameters:
param - Input array (usually parameters)
iteration - Current network training iteration
epoch - Current network training epoch

Regularization clone()
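And a corresponding check of score() against the formula above, again using the hypothetical SimpleL2 sketch with coeff = 1.0 so the result is exactly 0.5 * sum_i param[i]^2.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ScoreDemo {
    public static void main(String[] args) {
        INDArray param = Nd4j.create(new double[]{1.0, -2.0, 3.0});

        // 0.5 * (1^2 + (-2)^2 + 3^2) = 0.5 * 14 = 7.0
        double s = new SimpleL2(1.0).score(param, 0, 0);
        System.out.println(s);   // 7.0
    }
}
```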