public class L2Regularization extends Object implements Regularization

L2 regularization: very similar to WeightDecay, but the penalty is applied to the gradient before the updater is applied, not after.

The regularized loss is:

L = loss + l2 * 0.5 * sum_i w[i]^2

so the gradient of each weight is increased by l2 * w[i] before the updater runs.

Nested classes/interfaces inherited from interface Regularization: Regularization.ApplyStep
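For orientation, a minimal construction sketch using both constructor overloads. It assumes the usual ND4J package locations and the ExponentialSchedule(ScheduleType, initialValue, gamma) schedule type; check these against your version.

```java
import org.nd4j.linalg.learning.regularization.L2Regularization;
import org.nd4j.linalg.learning.regularization.Regularization;
import org.nd4j.linalg.schedule.ExponentialSchedule;
import org.nd4j.linalg.schedule.ScheduleType;

public class L2Setup {
    public static void main(String[] args) {
        // Fixed coefficient: L = loss + 1e-4 * 0.5 * sum_i w[i]^2
        Regularization fixed = new L2Regularization(1e-4);

        // Scheduled coefficient: starts at 1e-4, decays by a factor of 0.99 per epoch
        Regularization scheduled = new L2Regularization(
                new ExponentialSchedule(ScheduleType.EPOCH, 1e-4, 0.99));
    }
}
```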
Constructor Summary

| Constructor and Description |
|---|
| L2Regularization(double l2) |
| L2Regularization(ISchedule l2) |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| void | apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch) Apply the regularization by modifying the gradient array in-place. |
| Regularization.ApplyStep | applyStep() |
| Regularization | clone() |
| double | score(INDArray param, int iteration, int epoch) Calculate the loss function score component for the regularization. For example, in L2 regularization this would return L = 0.5 * sum_i param[i]^2. For regularization types that don't have a score component, this method can return 0. |
Field Detail

protected final ISchedule l2
Constructor Detail

public L2Regularization(double l2)

Parameters:
l2 - L2 regularization coefficient

public L2Regularization(@NonNull ISchedule l2)

Parameters:
l2 - L2 regularization coefficient (schedule)
Method Detail

public Regularization.ApplyStep applyStep()

Specified by:
applyStep in interface Regularization

Returns:
Regularization.ApplyStep - the step at which this regularization should be applied; for L2 regularization this is before the updater, as described above.
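To illustrate how a caller consumes this method, here is a hypothetical training-step fragment. The loop shape is illustrative, not DL4J's actual updater code, and the ApplyStep constants BEFORE_UPDATER/POST_UPDATER are assumed from the interface.

```java
import java.util.List;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.learning.regularization.Regularization;

class ApplyStepSketch {
    // Hypothetical driver; assumes ApplyStep exposes BEFORE_UPDATER and POST_UPDATER.
    static void step(List<Regularization> regs, INDArray params, INDArray gradView,
                     double lr, int iter, int epoch) {
        for (Regularization r : regs) {
            if (r.applyStep() == Regularization.ApplyStep.BEFORE_UPDATER) {
                r.apply(params, gradView, lr, iter, epoch); // L2Regularization runs here
            }
        }
        // ... updater (SGD, Adam, etc.) modifies gradView here ...
        for (Regularization r : regs) {
            if (r.applyStep() == Regularization.ApplyStep.POST_UPDATER) {
                r.apply(params, gradView, lr, iter, epoch); // e.g., WeightDecay runs here
            }
        }
        params.subi(gradView); // hypothetical final parameter update
    }
}
```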
public void apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch)

Description copied from interface: Regularization
Apply the regularization by modifying the gradient array in-place.

Specified by:
apply in interface Regularization

Parameters:
param - Input array (usually parameters)
gradView - Gradient view array (should be modified/updated). Same shape and type as the input array.
lr - Current learning rate
iteration - Current network training iteration
epoch - Current network training epoch
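A minimal sketch of the in-place update this method performs, assuming the coefficient is read from the l2 schedule; the shipped implementation may use a fused op instead of these two calls.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.schedule.ISchedule;

class L2ApplySketch {
    // Sketch only: grad <- grad + coeff * param, applied in place on gradView.
    static void apply(ISchedule l2, INDArray param, INDArray gradView,
                      int iteration, int epoch) {
        double coeff = l2.valueAt(iteration, epoch); // current L2 coefficient
        // dL/dw[i] = l2 * w[i], so add coeff * param to the gradient:
        gradView.addi(param.mul(coeff));
    }
}
```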
public double score(INDArray param, int iteration, int epoch)

Description copied from interface: Regularization
Calculate the loss function score component for the regularization. For L2 regularization:

L = 0.5 * sum_i param[i]^2

Specified by:
score in interface Regularization

Parameters:
param - Input array (usually parameters)
iteration - Current network training iteration
epoch - Current network training epoch
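A hedged sketch of the score computation, matching the class-level formula L = l2 * 0.5 * sum_i w[i]^2; whether the library folds in the coefficient and the 0.5 factor exactly this way is an assumption.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.schedule.ISchedule;

class L2ScoreSketch {
    // Sketch of the score component: 0.5 * coeff * sum_i param[i]^2.
    static double score(ISchedule l2, INDArray param, int iteration, int epoch) {
        double coeff = l2.valueAt(iteration, epoch);
        double sumSq = param.mul(param).sumNumber().doubleValue(); // sum of squares
        return 0.5 * coeff * sumSq;
    }
}
```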
public Regularization clone()

Specified by:
clone in interface Regularization

Overrides:
clone in class Object