public class ActivationRectifiedTanh extends BaseActivationFunction
| Constructor and Description |
|---|
| `ActivationRectifiedTanh()` |
| Modifier and Type | Method and Description |
|---|---|
| `org.nd4j.linalg.primitives.Pair<INDArray,INDArray>` | `backprop(INDArray in, INDArray epsilon)` Backpropagate the errors through the activation function, given the input z and epsilon dL/da. Returns two INDArrays: (a) the gradient dL/dz, calculated from dL/da, and (b) the parameter gradients dL/dW, where W denotes the weights of the activation function. |
| `INDArray` | `getActivation(INDArray in, boolean training)` Apply the activation function to the input array (usually known as 'preOut' or 'z'). Implementations must overwrite `in`, transform it in place, and return `in`. Can support separate behaviour during testing. |
| `String` | `toString()` |
Methods inherited from class `BaseActivationFunction`: `assertShape`, `getGradientViewArray`, `getParametersViewArray`, `numParams`, `setGradientViewArray`, `setParametersViewArray`
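Rectified tanh applies f(x) = max(0, tanh(x)) element-wise. The class name and signature below are illustrative, not the ND4J implementation: a minimal plain-Java sketch of the forward pass that `getActivation` computes, using `double[]` in place of `INDArray`.

```java
import java.util.Arrays;

// Illustrative sketch of the rectified tanh forward pass:
// f(x) = max(0, tanh(x)), applied element-wise.
public class RectifiedTanhSketch {

    // Element-wise max(0, tanh(z)); a stand-in for getActivation on an INDArray.
    static double[] activate(double[] z) {
        double[] a = new double[z.length];
        for (int i = 0; i < z.length; i++) {
            a[i] = Math.max(0.0, Math.tanh(z[i]));
        }
        return a;
    }

    public static void main(String[] args) {
        // Negative inputs are clipped to 0; positive inputs pass through tanh.
        double[] a = activate(new double[]{-2.0, 0.0, 2.0});
        System.out.println(Arrays.toString(a));
    }
}
```

Note that the real `getActivation` contract additionally requires transforming `in` in place and returning it, which the copy-based sketch above does not model.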
public INDArray getActivation(INDArray in, boolean training)

Specified by: `getActivation` in interface `IActivation`

public org.nd4j.linalg.primitives.Pair<INDArray,INDArray> backprop(INDArray in, INDArray epsilon)

Specified by: `backprop` in interface `IActivation`

Parameters:
- `in` - Input, before applying the activation function (z, or 'preOut')
- `epsilon` - Gradient to be backpropagated: dL/da, where L is the loss function

Copyright © 2019. All rights reserved.
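The gradient contract documented for `backprop` can be illustrated with the chain rule: dL/dz = dL/da · f'(z), where f'(z) = 1 − tanh²(z) when tanh(z) > 0 and 0 otherwise. The sketch below is a hedged plain-Java illustration of that math, not the ND4J code; it covers only the dL/dz half of the returned pair, on the assumption that rectified tanh itself has no trainable parameters.

```java
import java.util.Arrays;

// Illustrative sketch of the backprop math for rectified tanh:
// dL/dz[i] = epsilon[i] * f'(z[i]), with f'(z) = 1 - tanh(z)^2 if tanh(z) > 0, else 0.
public class RectifiedTanhBackprop {

    // epsilon holds dL/da; the result is dL/dz.
    static double[] backprop(double[] z, double[] epsilon) {
        double[] dLdz = new double[z.length];
        for (int i = 0; i < z.length; i++) {
            double t = Math.tanh(z[i]);
            double grad = t > 0 ? 1.0 - t * t : 0.0; // derivative of max(0, tanh(z))
            dLdz[i] = epsilon[i] * grad;             // chain rule: dL/dz = dL/da * da/dz
        }
        return dLdz;
    }

    public static void main(String[] args) {
        // Gradient is blocked where the activation was clipped to 0.
        double[] dLdz = backprop(new double[]{-1.0, 1.0}, new double[]{1.0, 1.0});
        System.out.println(Arrays.toString(dLdz));
    }
}
```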