public class ActivationLReLU extends BaseActivationFunction
| Modifier and Type | Field and Description |
|---|---|
| static double | DEFAULT_ALPHA |
| Constructor and Description |
|---|
| ActivationLReLU() |
| ActivationLReLU(double alpha) |
| Modifier and Type | Method and Description |
|---|---|
| Pair&lt;INDArray,INDArray&gt; | backprop(INDArray in, INDArray epsilon): Backpropagate the errors through the activation function, given input z and epsilon dL/da. Returns 2 INDArrays: (a) the gradient dL/dz, calculated from dL/da, and (b) the parameter gradients dL/dW, where W denotes the weights in the activation function. |
| INDArray | getActivation(INDArray in, boolean training): Carry out the activation function on the input array (usually known as 'preOut' or 'z'). Implementations must overwrite "in", transform it in place, and return "in". Can support separate behaviour during test. |
| String | toString() |
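The element-wise transform that getActivation describes can be sketched in plain Java. This is a minimal illustration of leaky ReLU, not the library's implementation; the alpha value 0.01 is an assumption here (the actual default is the DEFAULT_ALPHA field above), and the class and method names are invented for the sketch:

```java
import java.util.Arrays;

// Minimal sketch of the leaky ReLU forward pass (hypothetical names;
// ActivationLReLU itself operates on INDArray, not double[]).
public class LReLUSketch {

    // f(z) = z for z > 0, alpha * z otherwise.
    // Mirrors the documented contract: transform "in" in place and return it.
    static double[] activate(double[] in, double alpha) {
        for (int i = 0; i < in.length; i++) {
            if (in[i] < 0) {
                in[i] = alpha * in[i];
            }
        }
        return in; // same array reference, transformed in place
    }

    public static void main(String[] args) {
        double[] z = {-2.0, 0.0, 3.0};
        double[] a = activate(z, 0.01);
        System.out.println(Arrays.toString(a)); // [-0.02, 0.0, 3.0]
    }
}
```

Because the array is modified in place, callers that need the pre-activation values (e.g. for backprop) must copy the input first.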
Methods inherited from class BaseActivationFunction: assertShape, numParams
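The backprop contract in the method summary can likewise be sketched with the chain rule applied element-wise: dL/dz = dL/da * f'(z), where f'(z) = 1 for z > 0 and alpha otherwise. Leaky ReLU has no trainable weights, so the dL/dW part of the returned pair has nothing to compute; the names below are hypothetical and the alpha value is passed explicitly:

```java
import java.util.Arrays;

// Sketch of the gradient computation backprop performs (hypothetical names;
// the real method takes INDArrays and returns a Pair of gradients).
public class LReLUBackpropSketch {

    // dL/dz = epsilon * f'(z), with f'(z) = 1 for z > 0, alpha otherwise.
    static double[] backprop(double[] z, double[] epsilon, double alpha) {
        double[] dLdz = new double[z.length];
        for (int i = 0; i < z.length; i++) {
            double fprime = z[i] > 0 ? 1.0 : alpha; // derivative of leaky ReLU
            dLdz[i] = epsilon[i] * fprime;          // chain rule: dL/da * da/dz
        }
        return dLdz;
    }

    public static void main(String[] args) {
        double[] z = {-1.0, 2.0};      // pre-activation input ('preOut')
        double[] eps = {0.5, 0.5};     // dL/da from the layer above
        double[] g = backprop(z, eps, 0.01);
        System.out.println(Arrays.toString(g)); // [0.005, 0.5]
    }
}
```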
public static final double DEFAULT_ALPHA
public ActivationLReLU()
public ActivationLReLU(double alpha)
public INDArray getActivation(INDArray in, boolean training)

Specified by: getActivation in interface IActivation

Parameters:
in - input array
training - true when training

public Pair<INDArray,INDArray> backprop(INDArray in, INDArray epsilon)

Specified by: backprop in interface IActivation

Parameters:
in - Input, before applying the activation function (z, or 'preOut')
epsilon - Gradient to be backpropagated: dL/da, where L is the loss function

Copyright © 2020. All rights reserved.