public class L1Regularization extends Object implements Regularization
Implements L1 regularization, which adds the sum of absolute weight values to the loss: L = loss + l1 * sum_i abs(w[i])
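A minimal sketch of the penalty term above, using plain Java arrays in place of ND4J's INDArray (the class and method names below are illustrative, not part of the library):

```java
// Sketch of L = loss + l1 * sum_i abs(w[i]), with double[] standing
// in for INDArray. Names here are illustrative only.
public class L1PenaltySketch {
    static double l1Penalty(double[] w, double l1) {
        double sum = 0.0;
        for (double wi : w) {
            sum += Math.abs(wi);   // sum_i abs(w[i])
        }
        return l1 * sum;           // l1 * sum_i abs(w[i])
    }

    public static void main(String[] args) {
        double[] w = {0.5, -1.5, 2.0};
        double baseLoss = 0.25;
        double l1 = 0.01;
        // L = loss + l1 * sum_i abs(w[i]) = 0.25 + 0.01 * (0.5 + 1.5 + 2.0) ≈ 0.29
        System.out.println(baseLoss + l1Penalty(w, l1));
    }
}
```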
Nested classes/interfaces inherited from interface Regularization: Regularization.ApplyStep
| Constructor and Description |
|---|
| L1Regularization(double l1) |
| L1Regularization(@NonNull ISchedule l1) |
| Modifier and Type | Method and Description |
|---|---|
| void | apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch) Apply the regularization by modifying the gradient array in-place. |
| Regularization.ApplyStep | applyStep() |
| Regularization | clone() |
| double | score(INDArray param, int iteration, int epoch) Calculate the loss function score component for the regularization. For example, in L2 regularization this would return L = 0.5 * sum_i param[i]^2. For regularization types that don't have a score component, this method can return 0. |
protected final ISchedule l1
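The coefficient is stored as an ISchedule even when a fixed double is passed to the constructor, so the penalty strength can vary by iteration or epoch. A sketch of the idea, where the local Coef interface is a stand-in for ND4J's ISchedule (the decay policy shown is purely illustrative):

```java
// Stand-in sketch for a schedule-valued L1 coefficient. "Coef" below
// mimics ISchedule's valueAt(iteration, epoch); it is NOT the ND4J type.
public class ScheduleSketch {
    interface Coef {
        double valueAt(int iteration, int epoch);
    }

    // Fixed coefficient, like the one built by L1Regularization(double l1):
    static final Coef FIXED = (iter, epoch) -> 0.01;

    // Illustrative decaying coefficient: halves every 10 epochs:
    static final Coef DECAY = (iter, epoch) -> 0.01 * Math.pow(0.5, epoch / 10);

    public static void main(String[] args) {
        System.out.println(FIXED.valueAt(0, 0));   // always 0.01
        System.out.println(DECAY.valueAt(0, 20));  // 0.01 * 0.5^2 = 0.0025
    }
}
```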
public L1Regularization(double l1)
Parameters:
l1 - L1 regularization coefficient

public L1Regularization(@NonNull ISchedule l1)
Parameters:
l1 - L1 regularization coefficient (schedule)

public Regularization.ApplyStep applyStep()
Specified by:
applyStep in interface Regularization
Returns:
Regularization.ApplyStep
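Conceptually, applying L1 means adding the subgradient l1 * sign(w[i]) to each gradient entry in place. A sketch with plain arrays standing in for INDArray (the real implementation operates on ND4J arrays, and the learning rate is handled separately by the updater):

```java
// Conceptual sketch of apply(): gradView is modified in place by
// adding the L1 subgradient l1 * sign(param[i]). double[] stands in
// for INDArray; this is not the library implementation.
public class L1ApplySketch {
    static void apply(double[] param, double[] gradView, double l1) {
        for (int i = 0; i < param.length; i++) {
            // signum is +1, -1, or 0, so zero-valued weights are untouched
            gradView[i] += l1 * Math.signum(param[i]);
        }
    }

    public static void main(String[] args) {
        double[] w = {0.5, -1.5, 0.0};
        double[] grad = {0.1, 0.1, 0.1};
        apply(w, grad, 0.01);
        // grad is now approximately {0.11, 0.09, 0.1}: +l1 where the
        // weight is positive, -l1 where negative, unchanged at zero.
    }
}
```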
public void apply(INDArray param, INDArray gradView, double lr, int iteration, int epoch)
Apply the regularization by modifying the gradient array in-place.
Specified by:
apply in interface Regularization
Parameters:
param - Input array (usually parameters)
gradView - Gradient view array (should be modified/updated). Same shape and type as the input array.
lr - Current learning rate
iteration - Current network training iteration
epoch - Current network training epoch

public double score(INDArray param, int iteration, int epoch)
Calculate the loss function score component for the regularization. For L1 regularization this is l1 * sum_i abs(param[i]).
Specified by:
score in interface Regularization
Parameters:
param - Input array (usually parameters)
iteration - Current network training iteration
epoch - Current network training epoch

public Regularization clone()
Specified by:
clone in interface Regularization
Overrides:
clone in class Object
Copyright © 2020. All rights reserved.