Modifier and Type | Class and Description |
---|---|
class | BaseGradientOp: A gradient op always makes the following assumptions: there is always a y (because of backpropagation / the chain rule), and that it is special exec (for now). This op type is meant to be used to build derivative operations |
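The "there is always a y" assumption of BaseGradientOp can be illustrated with a minimal plain-Java sketch (a hypothetical standalone example, not ND4J code): a gradient op receives the incoming gradient from backpropagation and multiplies it by the local derivative.

```java
public class ChainRuleSketch {
    public static void main(String[] args) {
        // A gradient op assumes an incoming gradient "y" (dL/dy from backprop)
        // and combines it with the local derivative via the chain rule.
        double x = 2.0;
        double dLdy = 0.1;         // gradient flowing back from the next op
        double dydx = 2 * x;       // local derivative of y = x^2
        double dLdx = dLdy * dydx; // chain rule: dL/dx = dL/dy * dy/dx
        System.out.println(dLdx);  // 0.4
    }
}
```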
Modifier and Type | Class and Description |
---|---|
class | LegacyPooling2D: Deprecated. Note: This operation will be removed in a future release |
Modifier and Type | Class and Description |
---|---|
class | Abs: Abs elementwise function |
class | ACos: Arccosine elementwise function |
class | ACosh: ACosh elementwise function |
class | All: Boolean AND over all elements |
class | And: Boolean AND pairwise transform |
class | ASin: Arcsine elementwise function |
class | ASinh: Arcsinh (inverse hyperbolic sine) elementwise function |
class | ATan: Arc tangent elementwise function |
class | ATanh: Arctanh (inverse hyperbolic tangent) elementwise function |
class | Ceil: Ceiling elementwise function |
class | Constant |
class | Cos: Cosine elementwise function |
class | Cosh: Hyperbolic cosine elementwise function |
class | Cube: Cube (x^3) elementwise function |
class | ELU: Exponential Linear Unit (alpha=1.0). Introduced in: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015), http://arxiv.org/abs/1511.07289 |
class | Erf: Gaussian error function (erf), defined as erf(x) = 2/sqrt(pi) * integral from 0 to x of exp(-t^2) dt |
class | Erfc: Complementary Gaussian error function (erfc), defined as erfc(x) = 1 - erf(x) |
class | Exp: Element-wise exponential function |
class | Expm1: Element-wise exponential function minus 1, i.e. exp(x) - 1 |
class | Floor: Floor elementwise function |
class | HardSigmoid: HardSigmoid function |
class | HardTanh: Hard tanh elementwise function |
class | Histogram |
class | IsFinite: IsFinite function |
class | IsInf: IsInf function |
class | IsMax: Marks the position of the maximum, e.g. [1, 2, 3, 1] -> [0, 0, 1, 0] |
class | IsNaN: IsNaN function |
class | LeakyReLU: Leaky Rectified linear unit |
class | LegacyDropOut: DropOut implementation as Op. PLEASE NOTE: This is the legacy DropOut implementation; please consider using the op with the same opName from randomOps |
class | LegacyDropOutInverted: Inverted DropOut implementation as Op. PLEASE NOTE: This is the legacy DropOutInverted implementation; please consider using the op with the same opName from randomOps |
class | Log: Log elementwise function |
class | Log1p: Log1p function |
class | LogSigmoid: LogSigmoid function |
class | LogSigmoidDerivative: LogSigmoid derivative |
class | LogSoftMax: Log(softmax(X)) |
class | LogX: Log with arbitrary base |
class | MatchConditionTransform: Element-wise mask marking the components that match a given condition |
class | MaxOut: Max out activation: http://arxiv.org/pdf/1302.4389.pdf |
class | Negative: Negative function |
class | Not: Boolean NOT pairwise transform |
class | OldAtan2Op: atan2 operation |
class | OldIdentity: Identity function |
class | OldReverse: OldReverse op |
class | OldSoftMax: Softmax function. row_maxes is a row vector (max for each row): row_maxes = rowmaxes(input); diff = exp(input - row_maxes); diff /= diff.rowSums(). Outputs a probability distribution |
class | OneMinus: 1 - input |
class | Or: Boolean OR pairwise transform |
class | Pow: Pow function |
class | PowDerivative: Pow derivative, z = n * x^(n-1) |
class | RationalTanh: Rational tanh approximation elementwise function, as described at https://github.com/deeplearning4j/libnd4j/issues/351 |
class | Reciprocal: Reciprocal (1/x) elementwise function |
class | RectifedLinear: Rectified linear units |
class | RectifiedTanh: RectifiedTanh, essentially max(0, tanh(x)) |
class | Relu6: Rectified linear unit 6, i.e. min(max(x, 0), 6) |
class | ReplaceNans: Element-wise "Replace NaN" implementation as Op |
class | Rint: Rint function |
class | Round: Rounding function |
class | RSqrt: RSqrt (reciprocal square root) function |
class | SELU: SELU activation function |
class | Set: Set |
class | SetRange: Set range to a particular set of values |
class | Sigmoid: Sigmoid function |
class | SigmoidDerivative: Sigmoid derivative |
class | Sign: Signum function |
class | Sin: Sine elementwise function |
class | Sinh: Sinh function |
class | SoftPlus |
class | SoftSign: Softsign element-wise activation function |
class | Sqrt: Sqrt function |
class | Stabilize: Stabilization function, forces values to be within a range |
class | Step: Unit step function |
class | Swish: Swish function |
class | SwishDerivative: Swish derivative |
class | Tan: Tangent elementwise function |
class | TanDerivative: Tan derivative elementwise function |
class | Tanh: Tanh elementwise function |
class | TanhDerivative: Tanh derivative |
class | TimesOneMinus: If x is input, output is x*(1-x) |
class | Xor: Boolean XOR pairwise transform |
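The OldSoftMax entry above outlines the numerically stabilized softmax (subtract the row max, exponentiate, normalize by the row sum). A minimal plain-Java sketch of that computation for a single row, assuming a simple `double[]` rather than ND4J's INDArray:

```java
public class SoftMaxSketch {
    // Numerically stable softmax: subtract the max before exponentiating,
    // then normalize by the sum, as the OldSoftMax description outlines.
    static double[] softmax(double[] input) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : input) max = Math.max(max, v);
        double sum = 0.0;
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = Math.exp(input[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        double[] p = softmax(new double[]{1.0, 2.0, 3.0});
        double total = 0;
        for (double v : p) total += v;
        // Outputs a probability distribution: entries sum to 1.
        System.out.println(Math.abs(total - 1.0) < 1e-9); // true
    }
}
```

Subtracting the row max leaves the result unchanged mathematically but prevents `exp` from overflowing for large inputs.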
Modifier and Type | Class and Description |
---|---|
class | Axpy: Level 1 BLAS op Axpy as libnd4j native op |
class | CopyOp: Copy operation |
class | FloorModOp: Floor mod |
class | FModOp: Floating-point mod |
class | OldAddOp: Add operation for two operands |
class | OldDivOp: Division operation |
class | OldFloorDivOp: Floor division operation |
class | OldFModOp: Floating-point remainder |
class | OldMulOp: Multiplication operation |
class | OldRDivOp: Reverse division operation |
class | OldSubOp: Subtraction operation |
class | RemainderOp: Floating-point remainder operation |
class | TruncateDivOp: Truncated division operation |
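The distinction between the floor-style ops (FloorModOp, OldFloorDivOp) and the truncating ops (TruncateDivOp, FModOp) only shows up for negative operands. A small JDK-only sketch of the two conventions, using `java.lang.Math` rather than the ops listed above:

```java
public class DivModSketch {
    public static void main(String[] args) {
        // Truncated division rounds toward zero;
        // floor division rounds toward negative infinity.
        System.out.println(-7 / 3);               // -2 (truncated)
        System.out.println(Math.floorDiv(-7, 3)); // -3 (floored)

        // The matching remainders differ in sign for negative operands:
        System.out.println(-7 % 3);               // -1 (fmod-style, sign of dividend)
        System.out.println(Math.floorMod(-7, 3)); //  2 (floor-mod, sign of divisor)
    }
}
```

In both conventions `q * d + r == n` holds; they differ only in how the quotient `q` is rounded.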
Modifier and Type | Class and Description |
---|---|
class | CompareAndReplace: Element-wise compare-and-replace implementation as Op. Basically this op does the same as CompareAndSet, but op.X is checked against the Condition instead |
class | CompareAndSet: Element-wise compare-and-set implementation as Op. Please check the javadoc of the specific constructors for detailed information |
class | Eps: Bit mask over the ndarrays as to whether the components are equal (within epsilon) or not |
class | OldEqualTo: Bit mask over the ndarrays as to whether the components are equal or not |
class | OldGreaterThan: Bit mask over the ndarrays as to whether the components are greater than or not |
class | OldGreaterThanOrEqual: Bit mask over the ndarrays as to whether the components are greater than or equal or not |
class | OldLessThan: Bit mask over the ndarrays as to whether the components are less than or not |
class | OldLessThanOrEqual: Bit mask over the ndarrays as to whether the components are less than or equal or not |
class | OldMax: Max function |
class | OldMin: Min function |
class | OldNotEqualTo: Not equal to function: bit mask over whether 2 elements are not equal |
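The "bit mask" phrasing in the comparison ops above means the result array holds 1 where the predicate is true and 0 where it is false. A plain-Java sketch of that idea (hypothetical helper names, not the ND4J API), covering a strict comparison and an epsilon-tolerant equality in the spirit of OldGreaterThan and Eps:

```java
import java.util.Arrays;

public class MaskSketch {
    // 1.0 where x[i] > y[i], 0.0 otherwise (OldGreaterThan-style mask).
    static double[] greaterThan(double[] x, double[] y) {
        double[] mask = new double[x.length];
        for (int i = 0; i < x.length; i++) mask[i] = x[i] > y[i] ? 1.0 : 0.0;
        return mask;
    }

    // 1.0 where |x[i] - y[i]| < eps, 0.0 otherwise (Eps-style mask).
    static double[] epsEquals(double[] x, double[] y, double eps) {
        double[] mask = new double[x.length];
        for (int i = 0; i < x.length; i++)
            mask[i] = Math.abs(x[i] - y[i]) < eps ? 1.0 : 0.0;
        return mask;
    }

    public static void main(String[] args) {
        double[] m = greaterThan(new double[]{1, 5, 3}, new double[]{2, 4, 3});
        System.out.println(Arrays.toString(m)); // [0.0, 1.0, 0.0]
    }
}
```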
Modifier and Type | Class and Description |
---|---|
class | CubeDerivative: Cube derivative, i.e. 3*x^2 |
class | ELUDerivative: Derivative of ELU: Exponential Linear Unit (alpha=1.0). Introduced in: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015), http://arxiv.org/abs/1511.07289 |
class | GradientBackwardsMarker |
class | HardSigmoidDerivative: HardSigmoid derivative |
class | HardTanhDerivative: Hard tanh elementwise derivative function |
class | LeakyReLUDerivative: Leaky ReLU derivative |
class | LogSoftMaxDerivative |
class | RationalTanhDerivative: Rational tanh derivative, as described at https://github.com/deeplearning4j/libnd4j/issues/351 |
class | RectifiedTanhDerivative: Rectified tanh derivative |
class | SELUDerivative: SELU derivative elementwise function, https://arxiv.org/pdf/1706.02515.pdf |
class | SoftMaxDerivative |
class | SoftSignDerivative: SoftSign derivative |
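Several of the derivative ops listed across these tables reduce to simple closed forms: SigmoidDerivative is s*(1-s) (the TimesOneMinus pattern applied to the sigmoid output), and PowDerivative n*x^(n-1) specializes to CubeDerivative's 3*x^2 at n=3. A plain-Java sketch checking the sigmoid identity against a central-difference numerical derivative:

```java
public class DerivativeSketch {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double x = 0.5, h = 1e-6;
        // SigmoidDerivative via TimesOneMinus: s * (1 - s).
        double s = sigmoid(x);
        double analytic = s * (1 - s);
        // Central-difference approximation of the same derivative.
        double numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h);
        System.out.println(Math.abs(analytic - numeric) < 1e-6); // true

        // PowDerivative: d/dx x^n = n * x^(n-1); CubeDerivative is the n = 3 case.
        double n = 3;
        System.out.println(n * Math.pow(x, n - 1)); // 0.75 = 3 * 0.5^2
    }
}
```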
Copyright © 2018. All rights reserved.