Package | Description
---|---
org.nd4j.linalg.activations.impl |
Modifier and Type | Class and Description
---|---
class | ActivationCube<br>f(x) = x^3
class | ActivationELU<br>f(x) = alpha * (exp(x) - 1.0) if x < 0; f(x) = x if x >= 0. alpha defaults to 1.0 if not specified.
class | ActivationHardSigmoid<br>f(x) = min(1, max(0, 0.2*x + 0.5))
class | ActivationHardTanH<br>f(x) = 1 if x > 1; f(x) = -1 if x < -1; f(x) = x otherwise
class | ActivationIdentity<br>f(x) = x
class | ActivationLReLU<br>Leaky ReLU: f(x) = max(0, x) + alpha * min(0, x). alpha defaults to 0.01.
class | ActivationRationalTanh<br>Rational tanh approximation from https://arxiv.org/pdf/1508.01292v3: f(x) = 1.7159 * tanh(2x/3), where tanh(y) ≈ sgn(y) * (1 - 1/(1 + |y| + y^2 + 1.41645*y^4)). The underlying implementation is in native code.
class | ActivationRectifiedTanh<br>Rectified tanh: essentially max(0, tanh(x)). The underlying implementation is in native code.
class | ActivationReLU<br>f(x) = max(0, x)
class | ActivationRReLU<br>f(x) = max(0, x) + alpha * min(0, x), where alpha is drawn from uniform(l, u) during training and set to (l + u)/2 at test time. l and u default to 1/8 and 1/3, respectively. See "Empirical Evaluation of Rectified Activations in Convolutional Network".
class | ActivationSELU<br>Scaled exponential linear unit (SELU): https://arxiv.org/pdf/1706.02515.pdf
class | ActivationSigmoid<br>f(x) = 1 / (1 + exp(-x))
class | ActivationSoftmax<br>f_i(x) = exp(x_i - shift) / sum_j exp(x_j - shift), where shift = max_i(x_i)
class | ActivationSoftPlus<br>f(x) = log(1 + e^x)
class | ActivationSoftSign<br>f_i(x) = x_i / (1 + |x_i|)
class | ActivationTanH<br>f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
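
Each of these classes implements ND4J's IActivation interface. As a usage illustration, here is a minimal sketch applying a few of the activations above to a small vector. It assumes the IActivation.getActivation(INDArray, boolean) method and the constructors shown (verify both against your ND4J version), and that getActivation may modify its input in place, hence the dup() calls.

```java
import org.nd4j.linalg.activations.IActivation;
import org.nd4j.linalg.activations.impl.ActivationELU;
import org.nd4j.linalg.activations.impl.ActivationReLU;
import org.nd4j.linalg.activations.impl.ActivationSoftmax;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ActivationDemo {
    public static void main(String[] args) {
        INDArray x = Nd4j.create(new double[] {-2.0, -0.5, 0.0, 0.5, 2.0});

        // ReLU: f(x) = max(0, x)
        IActivation relu = new ActivationReLU();
        System.out.println("ReLU:    " + relu.getActivation(x.dup(), true));

        // ELU with alpha = 1.0: f(x) = alpha * (exp(x) - 1) for x < 0, x otherwise
        IActivation elu = new ActivationELU(1.0);
        System.out.println("ELU:     " + elu.getActivation(x.dup(), true));

        // Softmax: exponentiates (with a max shift for numerical stability)
        // and normalizes, so the outputs sum to 1
        IActivation softmax = new ActivationSoftmax();
        System.out.println("Softmax: " + softmax.getActivation(x.dup(), true));
    }
}
```

The boolean training flag matters for stochastic activations such as ActivationRReLU, which samples alpha from uniform(l, u) on each training pass but uses the fixed value (l + u)/2 at test time.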