Modifier and Type | Method and Description |
---|---|
Op |
SameDiffOpExecutioner.exec(Op op)
Execute the operation
|
Op |
SameDiffOpExecutioner.exec(Op op,
int... dimension)
Execute the operation along 1 or more dimensions
|
INDArray |
SameDiffOpExecutioner.execAndReturn(Op op)
Execute and return a result
ndarray from the given op
|
void |
SameDiffOpExecutioner.invoke(Op op) |
SDVariable |
SameDiff.invoke(Op op,
SDVariable x)
Invoke an op by opName
|
SDVariable |
SameDiff.invoke(Op op,
SDVariable x,
SDVariable y)
Invoke an op by opName
|
void |
SameDiffOpExecutioner.iterateOverAllColumns(Op op)
Iterate over every column of every slice
|
void |
SameDiffOpExecutioner.iterateOverAllRows(Op op)
Iterate over every row of every slice
|
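The `exec(Op op, int... dimension)` overloads above apply a reduction independently along the chosen dimensions. A minimal plain-Java sketch of that semantics (the helper names are illustrative, not part of the ND4J API):

```java
// Conceptual sketch of exec(op, dimension) semantics: reducing a 2D array
// along one dimension. Names here are illustrative, not ND4J's.
public class DimReduce {
    // Sum along dimension 1 (reduce each row to a scalar).
    static double[] sumAlongRows(double[][] x) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++)
            for (double v : x[i]) out[i] += v;
        return out;
    }
    // Sum along dimension 0 (reduce each column to a scalar).
    static double[] sumAlongColumns(double[][] x) {
        double[] out = new double[x[0].length];
        for (double[] row : x)
            for (int j = 0; j < row.length; j++) out[j] += row[j];
        return out;
    }
    public static void main(String[] args) {
        double[][] x = {{1, 2, 3}, {4, 5, 6}};
        // reduce over dimension 1 -> one value per row
        assert java.util.Arrays.equals(sumAlongRows(x), new double[]{6, 15});
        // reduce over dimension 0 -> one value per column
        assert java.util.Arrays.equals(sumAlongColumns(x), new double[]{5, 7, 9});
    }
}
```

Reducing over dimension 0 yields one value per column, over dimension 1 one value per row: the executioner collapses the reduced dimension and keeps the rest.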
Modifier and Type | Method and Description |
---|---|
Op |
Activation.asTransformDerivative(INDArray in,
boolean dup)
Get the Activation function derivative (i.e., dOut/dIn) as an ND4J Transform, applied on either the input
or a copy of the input
|
Modifier and Type | Interface and Description |
---|---|
interface |
Accumulation
An accumulation is an op that, given:
x -> the origin ndarray
y -> the pairwise ndarray
n -> the number of times to accumulate
Of note here are the extra arguments. |
interface |
BroadcastOp
A broadcast op is one where a scalar
or lower-rank array
is broadcast to fill
a bigger array.
|
interface |
GradientOp
A gradient op represents
a Jacobian operation
|
interface |
GridOp
A GridOp is a special op that contains multiple ops
|
interface |
IndexAccumulation
An index accumulation is an operation that returns an index within
a NDArray.
Examples of IndexAccumulation operations include finding the index of the minimum value, the index of the maximum value, the index of the first element equal to value y, and the index of the maximum pairwise difference between two NDArrays X and Y. Index accumulation is similar to Accumulation in that both are
accumulation/reduction operations; however, index accumulation returns
an integer corresponding to an index, rather than a real (or complex)
value. Index accumulation operations generally have 3 inputs:
x -> the origin ndarray
y -> the pairwise ndarray (frequently null/not applicable)
n -> the number of times to accumulate
Note that IndexAccumulation op implementations should be stateless (other than the final result and x/y/n arguments) and hence threadsafe, such that they may be parallelized using the update, combineSubResults and set/getFinalResults methods. |
interface |
LossFunction
A loss function for computing
the delta between two arrays
|
interface |
MetaOp
A MetaOp is a special op that contains multiple ops
|
interface |
RandomOp |
interface |
ScalarOp
Applies a scalar
along a bigger input array.
|
interface |
TransformOp
Transform operation:
stores the result in an ndarray
|
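To make the Accumulation vs. IndexAccumulation distinction above concrete, here is a hedged plain-Java sketch (not the ND4J implementation): both reduce an array, but one returns a real value while the other returns an integer index.

```java
// Illustrative reductions matching the interface descriptions above.
public class Accumulations {
    // Accumulation: reduce x to a real value (here, a simple sum).
    static double sum(double[] x) {
        double acc = 0;
        for (double v : x) acc += v;
        return acc;
    }
    // IndexAccumulation: reduce x to an integer index (here, argmax).
    static int argMax(double[] x) {
        int idx = 0;
        for (int i = 1; i < x.length; i++)
            if (x[i] > x[idx]) idx = i;
        return idx;
    }
    public static void main(String[] args) {
        double[] x = {3.0, -1.0, 7.0, 2.0};
        assert sum(x) == 11.0;
        assert argMax(x) == 2; // the index of the maximum, not its value
    }
}
```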
Modifier and Type | Class and Description |
---|---|
class |
BaseAccumulation
Base class for accumulation; initializes the initial entry
with respect to the child class.
|
class |
BaseBroadcastOp |
class |
BaseGradientOp
A gradient op always makes the following assumptions:
there is always a y (because of backpropagation
or use of the chain rule)
and that it is special exec (for now)
This op type is meant to be used
to build derivative operations.
|
class |
BaseIndexAccumulation
Index-based reduction algorithm
|
class |
BaseOp
Base op.
|
class |
BaseScalarOp
Base scalar operation
|
class |
BaseTransformOp
A base op for basic getters and setters
|
class |
DefaultOpConverter |
class |
ShapeOp
Shape manipulation ops
|
Modifier and Type | Method and Description |
---|---|
Op |
MetaOp.getFirstOp() |
Op |
MetaOp.getSecondOp() |
Modifier and Type | Method and Description |
---|---|
static Op.Type |
BaseOp.getOpType(Op op) |
Constructor and Description |
---|
BlasOpErrorMessage(Op op) |
Modifier and Type | Method and Description |
---|---|
Op |
DefaultOpExecutioner.exec(Op op) |
Op |
OpExecutioner.exec(Op op)
Execute the operation
|
Op |
DefaultOpExecutioner.exec(Op op,
int... dimension) |
Op |
OpExecutioner.exec(Op op,
int... dimension)
Execute the operation along 1 or more dimensions
|
Modifier and Type | Method and Description |
---|---|
protected void |
DefaultOpExecutioner.checkForCompression(Op op) |
static void |
OpExecutionerUtil.checkForInf(Op op) |
static void |
OpExecutionerUtil.checkForNaN(Op op) |
protected void |
DefaultOpExecutioner.checkForWorkspaces(Op op) |
Op |
DefaultOpExecutioner.exec(Op op) |
Op |
OpExecutioner.exec(Op op)
Execute the operation
|
Op |
DefaultOpExecutioner.exec(Op op,
int... dimension) |
Op |
OpExecutioner.exec(Op op,
int... dimension)
Execute the operation along 1 or more dimensions
|
INDArray |
DefaultOpExecutioner.execAndReturn(Op op) |
INDArray |
OpExecutioner.execAndReturn(Op op)
Execute and return a result
ndarray from the given op
|
protected void |
DefaultOpExecutioner.interceptIntDataType(Op op)
This method checks if any Op operand has a data type of INT, and throws an exception if so.
|
void |
DefaultOpExecutioner.iterateOverAllColumns(Op op) |
void |
OpExecutioner.iterateOverAllColumns(Op op)
Iterate over every column of every slice
|
void |
DefaultOpExecutioner.iterateOverAllRows(Op op) |
void |
OpExecutioner.iterateOverAllRows(Op op)
Iterate over every row of every slice
|
long |
DefaultOpExecutioner.profilingHookIn(Op op) |
long |
DefaultOpExecutioner.profilingHookIn(Op op,
DataBuffer... tadBuffers) |
void |
DefaultOpExecutioner.profilingHookOut(Op op,
long timeStart) |
static void |
DefaultOpExecutioner.validateDataType(DataBuffer.Type expectedType,
Op op)
Validate the data types
for the given operation
|
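The `checkForNaN`/`checkForInf` utilities above validate op operands before execution. A rough sketch of what such a check does, assuming a simple double buffer (illustrative only, not the `OpExecutionerUtil` implementation):

```java
// Illustrative fail-fast validation over an op's input buffer.
public class SanityChecks {
    static void checkForNaN(double[] buffer) {
        for (double v : buffer)
            if (Double.isNaN(v))
                throw new IllegalStateException("NaN value encountered");
    }
    static void checkForInf(double[] buffer) {
        for (double v : buffer)
            if (Double.isInfinite(v))
                throw new IllegalStateException("Infinite value encountered");
    }
    public static void main(String[] args) {
        checkForNaN(new double[]{1, 2, 3}); // clean buffer passes silently
        boolean caught = false;
        try {
            checkForNaN(new double[]{1, Double.NaN});
        } catch (IllegalStateException e) {
            caught = true;
        }
        assert caught; // a NaN operand is rejected before execution
    }
}
```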
Modifier and Type | Method and Description |
---|---|
Op |
OpFactory.createShape(String name,
INDArray x,
INDArray z,
Object[] extraArgs) |
Op |
DefaultOpFactory.createShape(String name,
INDArray x,
INDArray z,
Object[] extraArgs) |
Op |
OpFactory.getOpByName(String opName)
This method returns an Op instance if opName exists, null otherwise
|
Op |
DefaultOpFactory.getOpByName(String opName) |
Constructor and Description |
---|
GridPointers(Op op,
int... dimensions) |
OpDescriptor(Op op) |
Modifier and Type | Class and Description |
---|---|
class |
AMax
Calculate the absolute max over a vector
|
class |
AMean
Calculate the absolute mean of the given vector
|
class |
AMin
Calculate the absolute minimum over a vector
|
class |
ASum
Sum of the absolute values of the components
|
class |
Bias
Calculate a bias
|
class |
CountNonZero
Count the number of non-zero elements
|
class |
CountZero
Count the number of zero elements
|
class |
Dot
Dot product
|
class |
Entropy
Entropy Op - returns the entropy (information gain, or uncertainty of a random variable).
|
class |
EqualsWithEps
Operation for fast INDArray equality checks
|
class |
LogEntropy
Log Entropy Op - returns the log of the entropy (information gain, or uncertainty of a random variable).
|
class |
LogSumExp
LogSumExp - this op returns https://en.wikipedia.org/wiki/LogSumExp
|
class |
MatchCondition
Count the number of elements matching a condition
|
class |
Max
Calculate the max over a vector
|
class |
Mean
Calculate the mean of the vector
|
class |
Min
Calculate the min over a vector
|
class |
Norm1
Sum of absolute values
|
class |
Norm2
Sum of squared values (real)
Sum of squared complex modulus (complex)
|
class |
NormMax
The max absolute value
|
class |
Prod
Product of the components
|
class |
ShannonEntropy
Non-normalized Shannon Entropy Op - returns the entropy (information gain, or uncertainty of a random variable).
|
class |
StandardDeviation
Standard deviation (sqrt of variance)
|
class |
Sum
Sum the components
|
class |
Variance
Variance with bias correction.
|
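Among the accumulations above, `LogSumExp` computes log(sum(exp(x))). A naive implementation overflows for large inputs; the standard max-subtraction trick, sketched here in plain Java (not the ND4J kernel), avoids that:

```java
// Numerically stable log-sum-exp: factor out the maximum before exp().
public class LogSumExp {
    static double logSumExp(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0;
        for (double v : x) sum += Math.exp(v - max);
        return max + Math.log(sum);
    }
    public static void main(String[] args) {
        // log(e^0 + e^0) = log 2
        assert Math.abs(logSumExp(new double[]{0, 0}) - Math.log(2)) < 1e-12;
        // Stable even where a naive exp() would overflow to infinity:
        assert Math.abs(logSumExp(new double[]{1000, 1000}) - (1000 + Math.log(2))) < 1e-9;
    }
}
```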
Modifier and Type | Class and Description |
---|---|
class |
CosineDistance
Cosine distance
Note that you need to initialize
a scaling constant equal to the norm2 of the
vector
|
class |
CosineSimilarity
Cosine similarity
Note that you need to initialize
a scaling constant equal to the norm2 of the
vector
|
class |
EuclideanDistance
Euclidean distance
|
class |
HammingDistance
Hamming distance (simple)
|
class |
JaccardDistance
Jaccard distance (dissimilarity)
|
class |
ManhattanDistance
Manhattan distance
|
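As the `CosineSimilarity` entry notes, the op scales the dot product by the norm2 of each vector. A plain-Java sketch of that formula (illustrative, not the ND4J implementation):

```java
// Cosine similarity = dot(x, y) / (norm2(x) * norm2(y)); the norm2 values
// act as the scaling constants mentioned in the table above.
public class Cosine {
    static double cosineSim(double[] x, double[] y) {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < x.length; i++) {
            dot += x[i] * y[i];
            nx += x[i] * x[i];
            ny += y[i] * y[i];
        }
        return dot / (Math.sqrt(nx) * Math.sqrt(ny));
    }
    public static void main(String[] args) {
        // Parallel vectors -> 1, orthogonal vectors -> 0.
        assert Math.abs(cosineSim(new double[]{1, 0}, new double[]{2, 0}) - 1.0) < 1e-12;
        assert Math.abs(cosineSim(new double[]{1, 0}, new double[]{0, 1})) < 1e-12;
    }
}
```

Cosine distance is then simply 1 minus this similarity.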
Modifier and Type | Class and Description |
---|---|
class |
BiasAddGrad |
class |
BroadcastAddOp |
class |
BroadcastAMax
Broadcast Abs Max comparison op
|
class |
BroadcastAMin
Broadcast Abs Min comparison op
|
class |
BroadcastCopyOp |
class |
BroadcastDivOp |
class |
BroadcastEqualTo |
class |
BroadcastGradientArgs |
class |
BroadcastGreaterThan |
class |
BroadcastGreaterThanOrEqual |
class |
BroadcastLessThan |
class |
BroadcastLessThanOrEqual |
class |
BroadcastMax
Broadcast Max comparison op
|
class |
BroadcastMin
Broadcast Min comparison op
|
class |
BroadcastMulOp |
class |
BroadcastNotEqual |
class |
BroadcastRDivOp
Broadcast reverse divide
|
class |
BroadcastRSubOp |
class |
BroadcastSubOp |
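A plain-Java sketch of the `BroadcastAddOp` semantics listed above: a lower-rank array (here a vector) is conceptually repeated along the missing dimension to fill the larger array. Names are illustrative, not ND4J's:

```java
// Broadcast a rank-1 vector across every row of a rank-2 array.
public class Broadcast {
    static double[][] broadcastAdd(double[][] x, double[] y) {
        double[][] z = new double[x.length][y.length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < y.length; j++)
                z[i][j] = x[i][j] + y[j]; // y is reused for every row i
        return z;
    }
    public static void main(String[] args) {
        double[][] z = broadcastAdd(new double[][]{{1, 2}, {3, 4}}, new double[]{10, 20});
        assert z[0][0] == 11 && z[0][1] == 22;
        assert z[1][0] == 13 && z[1][1] == 24;
    }
}
```

The comparison variants (`BroadcastGreaterThan`, `BroadcastMax`, ...) follow the same pattern with a different elementwise operation.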
Modifier and Type | Class and Description |
---|---|
class |
BaseGridOp |
class |
FreeGridOp
Simple GridOp that operates on an arbitrary number of Ops that have no relations between them.
|
Constructor and Description |
---|
BaseGridOp(Op... ops) |
FreeGridOp(Op... ops) |
Constructor and Description |
---|
BaseGridOp(List<Op> ops) |
FreeGridOp(List<Op> ops) |
Modifier and Type | Class and Description |
---|---|
class |
FirstIndex
Calculate the index of the first element matching a condition over a vector
|
class |
IAMax
Calculate the index of the max absolute value over a vector
|
class |
IAMin
Calculate the index of the minimum absolute value over a vector
|
class |
IMax
Calculate the index
of max value over a vector
|
class |
IMin
Calculate the index of min value over a vector
|
class |
LastIndex
Calculate the index of the last element matching a condition over a vector
|
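The `IAMax`/`IMax` entries above differ only in whether the comparison uses absolute values. A plain-Java sketch of both index accumulations (illustrative, not the ND4J kernels):

```java
// Index accumulations: reduce a vector to an integer index.
public class IndexOps {
    // IAMax semantics: index of the element with the largest absolute value.
    static int iaMax(double[] x) {
        int idx = 0;
        for (int i = 1; i < x.length; i++)
            if (Math.abs(x[i]) > Math.abs(x[idx])) idx = i;
        return idx;
    }
    // IMax semantics: index of the largest signed value.
    static int iMax(double[] x) {
        int idx = 0;
        for (int i = 1; i < x.length; i++)
            if (x[i] > x[idx]) idx = i;
        return idx;
    }
    public static void main(String[] args) {
        double[] x = {1.0, -5.0, 3.0};
        assert iaMax(x) == 1; // |-5| has the largest magnitude
        assert iMax(x) == 2;  // 3 is the largest signed value
    }
}
```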
Modifier and Type | Class and Description |
---|---|
class |
LegacyPooling2D
Deprecated.
Note: This operation will be removed in a future release
|
Modifier and Type | Class and Description |
---|---|
class |
BaseMetaOp |
class |
InvertedPredicateMetaOp
This MetaOp covers the case where Op A and Op B both use linear memory access
You're NOT supposed to directly call this op.
|
class |
PostulateMetaOp
You're NOT supposed to directly call this op.
|
class |
PredicateMetaOp
This MetaOp covers the case where Op A and Op B both use linear memory access
You're NOT supposed to directly call this op.
|
class |
ReduceMetaOp
This is a special case of PredicateMetaOp, where opB may only be an Accumulation, Variance, or Reduce3 op
|
Modifier and Type | Method and Description |
---|---|
Op |
BaseMetaOp.getFirstOp() |
Op |
BaseMetaOp.getSecondOp() |
Constructor and Description |
---|
BaseMetaOp(Op opA,
Op opB) |
InvertedPredicateMetaOp(Op opA,
Op opB) |
PredicateMetaOp(Op opA,
Op opB) |
Modifier and Type | Class and Description |
---|---|
class |
ScalarAdd
Scalar addition
|
class |
ScalarDivision
Scalar division
|
class |
ScalarFMod
Scalar floating-point remainder (fmod)
|
class |
ScalarMax
Scalar max operation.
|
class |
ScalarMin
Scalar min operation.
|
class |
ScalarMultiplication
Scalar multiplication
|
class |
ScalarRemainder
Scalar floating-point remainder
|
class |
ScalarReverseDivision
Scalar reverse division
|
class |
ScalarReverseSubtraction
Scalar reverse subtraction
|
class |
ScalarSet
Scalar set operation.
|
class |
ScalarSubtraction
Scalar subtraction
|
Modifier and Type | Class and Description |
---|---|
class |
ScalarEquals
Return a binary (0 or 1) when equal to a number
|
class |
ScalarGreaterThan
Return a binary (0 or 1) when greater than a number
|
class |
ScalarGreaterThanOrEqual
Return a binary (0 or 1) when greater than or equal to a number
|
class |
ScalarLessThan
Return a binary (0 or 1) when less than a number
|
class |
ScalarLessThanOrEqual
Return a binary (0 or 1) when less than
or equal to a number
|
class |
ScalarNotEquals
Return a binary (0 or 1)
when not equal to a number
|
class |
ScalarSetValue
Scalar value set operation.
|
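The scalar comparison ops above all produce a binary (0/1) mask by comparing each element against a single scalar. A plain-Java sketch of the `ScalarGreaterThan` semantics (illustrative names, not the ND4J API):

```java
// Scalar comparison: elementwise test against one scalar, 0/1 output.
public class ScalarCompare {
    static double[] greaterThan(double[] x, double scalar) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++)
            z[i] = x[i] > scalar ? 1.0 : 0.0;
        return z;
    }
    public static void main(String[] args) {
        assert java.util.Arrays.equals(
            greaterThan(new double[]{1, 5, 3}, 2.0),
            new double[]{0, 1, 1});
    }
}
```

The other entries differ only in the comparison operator (`<`, `<=`, `==`, `!=`, `>=`).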
Modifier and Type | Class and Description |
---|---|
class |
RollAxis
Roll axis function: moves a specified axis to a new position
|
Modifier and Type | Class and Description |
---|---|
class |
Abs
Abs elementwise function
|
class |
ACos
Arccosine elementwise function
|
class |
ACosh
ACosh elementwise function
|
class |
All
Boolean AND over all elements
|
class |
And
Boolean AND pairwise transform
|
class |
ASin
Arcsin elementwise function
|
class |
ASinh
Inverse hyperbolic sine (asinh) elementwise function
|
class |
ATan
Arc Tangent elementwise function
|
class |
ATanh
Inverse hyperbolic tangent (atanh) elementwise function
|
class |
Ceil
Ceiling elementwise function
|
class |
Constant |
class |
Cos
Cosine elementwise function
|
class |
Cosh
Cosine Hyperbolic elementwise function
|
class |
Cube
Cube (x^3) elementwise function
|
class |
ELU
ELU: Exponential Linear Unit (alpha=1.0)
Introduced in paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015) http://arxiv.org/abs/1511.07289 |
class |
Erf
Gaussian error function (erf) function, which is defined as
|
class |
Erfc
Complementary Gaussian error function (erfc), defined as
|
class |
Exp
Element-wise exponential function
|
class |
Expm1
Element-wise exponential function minus 1, i.e.
|
class |
Floor
Floor elementwise function
|
class |
HardSigmoid
HardSigmoid function
|
class |
HardTanh
Hard tanh elementwise function
|
class |
Histogram |
class |
IsFinite
IsFinite function
|
class |
IsInf
IsInf function
|
class |
IsMax
Boolean mask marking the maximum element, e.g., [1, 2, 3, 1] -> [0, 0, 1, 0]
|
class |
IsNaN
IsNaN function
|
class |
LeakyReLU
Leaky Rectified linear unit.
|
class |
LegacyDropOut
DropOut implementation as Op
PLEASE NOTE: This is legacy DropOut implementation, please consider using op with the same opName from randomOps
|
class |
LegacyDropOutInverted
Inverted DropOut implementation as Op
PLEASE NOTE: This is legacy DropOutInverted implementation, please consider using op with the same opName from randomOps
|
class |
Log
Log elementwise function
|
class |
Log1p
Log1p function
|
class |
LogSigmoid
LogSigmoid function
|
class |
LogSigmoidDerivative
LogSigmoid derivative
|
class |
LogSoftMax
Log(softmax(X))
|
class |
LogX
Log on arbitrary base op
|
class |
MatchConditionTransform
Match condition transform: boolean mask of elements matching a condition
|
class |
MaxOut
Max out activation:
http://arxiv.org/pdf/1302.4389.pdf
|
class |
Negative
Negative function
|
class |
Not
Boolean NOT transform
|
class |
OldAtan2Op
atan2 operation
|
class |
OldIdentity
Identity function
|
class |
OldReverse
OldReverse op
|
class |
OldSoftMax
Soft max function
row_maxes is a row vector (max for each row)
row_maxes = rowmaxes(input)
diff = exp(input - row_maxes)
output = diff / diff.rowSums()
Outputs a probability distribution.
|
class |
OneMinus
1 - input
|
class |
Or
Boolean OR pairwise transform
|
class |
Pow
Pow function
|
class |
PowDerivative
Pow derivative
z = n * x ^ (n-1)
|
class |
RationalTanh
Rational Tanh Approximation elementwise function, as described at https://github.com/deeplearning4j/libnd4j/issues/351
|
class |
Reciprocal
Reciprocal (1/x) elementwise function
|
class |
RectifedLinear
Rectified linear units
|
class |
RectifiedTanh
RectifiedTanh
Essentially max(0, tanh(x))
|
class |
Relu6
Rectified linear unit 6, i.e.
|
class |
ReplaceNans
Element-wise "Replace NaN" implementation as Op
|
class |
Rint
Rint function
|
class |
Round
Rounding function
|
class |
RSqrt
RSqrt function
|
class |
SELU
SELU activation function
|
class |
Set
Set
|
class |
SetRange
Set range to a particular set of values
|
class |
Sigmoid
Sigmoid function
|
class |
SigmoidDerivative
Sigmoid derivative
|
class |
Sign
Signum function
|
class |
Sin
Sine elementwise function
|
class |
Sinh
Sinh function
|
class |
SoftMaxDerivative
Softmax derivative
|
class |
SoftPlus |
class |
SoftSign
Softsign element-wise activation function.
|
class |
Sqrt
Sqrt function
|
class |
Stabilize
Stabilization function, forces values to be within a range
|
class |
Step
Unit step function.
|
class |
Swish
Swish function
|
class |
SwishDerivative
Swish derivative
|
class |
Tan
Tangent elementwise function
|
class |
TanDerivative
Tan Derivative elementwise function
|
class |
Tanh
Tanh elementwise function
|
class |
TanhDerivative
Tanh derivative
|
class |
TimesOneMinus
If x is input: output is x*(1-x)
|
class |
Xor
Boolean XOR pairwise transform
|
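The `OldSoftMax` entry above describes subtracting the row max before exponentiating. A plain-Java sketch of that computation for a single row (not the ND4J kernel): the subtraction keeps `exp()` from overflowing and leaves the result unchanged.

```java
// Softmax over one row with the max-subtraction trick.
public class Softmax {
    static double[] softmax(double[] row) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : row) max = Math.max(max, v);
        double sum = 0;
        double[] z = new double[row.length];
        for (int i = 0; i < row.length; i++) {
            z[i] = Math.exp(row[i] - max); // shift by the row max
            sum += z[i];
        }
        for (int i = 0; i < z.length; i++) z[i] /= sum;
        return z;
    }
    public static void main(String[] args) {
        double[] p = softmax(new double[]{1, 2, 3});
        double total = 0;
        for (double v : p) total += v;
        assert Math.abs(total - 1.0) < 1e-12; // a probability distribution
        assert p[2] > p[1] && p[1] > p[0];    // ordering is preserved
    }
}
```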
Modifier and Type | Class and Description |
---|---|
class |
Axpy
Level 1 blas op Axpy as libnd4j native op
|
class |
CopyOp
Copy operation
|
class |
FloorModOp
Floor mod
|
class |
FModOp
Floating-point mod
|
class |
OldAddOp
Add operation for two operands
|
class |
OldDivOp
Division operation
|
class |
OldFloorDivOp
Floor division operation
|
class |
OldFModOp
Floating point remainder
|
class |
OldMulOp
Multiplication operation
|
class |
OldRDivOp
Reverse division operation
|
class |
OldSubOp
Subtraction operation
|
class |
RemainderOp
Floating-point remainder operation
|
class |
TruncateDivOp
Truncated division operation
|
Modifier and Type | Class and Description |
---|---|
class |
CompareAndReplace
Element-wise Compare-and-Replace implementation as Op
Basically this op does the same as Compare-and-Set, but op.X is checked against Condition instead
|
class |
CompareAndSet
Element-wise Compare-and-set implementation as Op
Please check the javadoc of the specific constructors for detailed information.
|
class |
Eps
Bit mask over the ndarrays as to whether
the components are equal or not
|
class |
OldEqualTo
Bit mask over the ndarrays as to whether
the components are equal or not
|
class |
OldGreaterThan
Bit mask over the ndarrays as to whether
the components are greater than or not
|
class |
OldGreaterThanOrEqual
Bit mask over the ndarrays as to whether
the components are greater than or equal or not
|
class |
OldLessThan
Bit mask over the ndarrays as to whether
the components are less than or not
|
class |
OldLessThanOrEqual
Bit mask over the ndarrays as to whether
the components are less than or equal or not
|
class |
OldMax
Max function
|
class |
OldMin
Min function
|
class |
OldNotEqualTo
Not equal to function:
Bit mask over whether 2 elements are not equal or not
|
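The `Eps` op above builds a bit mask of approximate equality, useful because exact `==` is unreliable for floating point. A plain-Java sketch of that idea using a small epsilon tolerance (illustrative only):

```java
// Eps-style comparison: 1.0 where |x - y| < eps, else 0.0.
public class EpsEquals {
    static double[] epsEquals(double[] x, double[] y, double eps) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++)
            z[i] = Math.abs(x[i] - y[i]) < eps ? 1.0 : 0.0;
        return z;
    }
    public static void main(String[] args) {
        double[] x = {0.1 + 0.2, 1.0}; // 0.1 + 0.2 != 0.3 exactly in IEEE 754
        double[] y = {0.3, 2.0};
        assert java.util.Arrays.equals(epsEquals(x, y, 1e-9), new double[]{1, 0});
    }
}
```

The other comparison ops in the table produce the same kind of 0/1 mask with strict comparisons instead of a tolerance.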
Modifier and Type | Class and Description |
---|---|
class |
CubeDerivative
Cube derivative, i.e., 3*x^2
|
class |
ELUDerivative
Derivative of ELU: Exponential Linear Unit (alpha=1.0)
Introduced in paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015) http://arxiv.org/abs/1511.07289 |
class |
GradientBackwardsMarker |
class |
HardSigmoidDerivative
HardSigmoid derivative
|
class |
HardTanhDerivative
Hard tanh elementwise derivative function
|
class |
LeakyReLUDerivative
Leaky ReLU derivative.
|
class |
LogSoftMaxDerivative |
class |
RationalTanhDerivative
Rational Tanh Derivative, as described at https://github.com/deeplearning4j/libnd4j/issues/351
|
class |
RectifiedTanhDerivative
Rectified Tanh Derivative
|
class |
SELUDerivative
SELU Derivative elementwise function
https://arxiv.org/pdf/1706.02515.pdf
|
class |
SoftSignDerivative
SoftSign derivative.
|
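Derivative ops such as `SigmoidDerivative` compute dOut/dIn elementwise. A plain-Java sketch (not ND4J code) checking the well-known identity sigmoid'(x) = s * (1 - s) against a central finite difference:

```java
// Analytic sigmoid derivative, validated numerically.
public class Derivatives {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
    // d/dx sigmoid(x) = s * (1 - s), where s = sigmoid(x).
    static double sigmoidDerivative(double x) {
        double s = sigmoid(x);
        return s * (1.0 - s);
    }
    public static void main(String[] args) {
        double h = 1e-6, x = 0.7;
        double numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h);
        assert Math.abs(sigmoidDerivative(x) - numeric) < 1e-8;
        assert Math.abs(sigmoidDerivative(0.0) - 0.25) < 1e-12; // peak value
    }
}
```

The other derivative classes in the table follow the same pattern for their respective activations.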
Modifier and Type | Class and Description |
---|---|
class |
BaseRandomOp |
Modifier and Type | Class and Description |
---|---|
class |
AlphaDropOut
AlphaDropOut implementation as Op
|
class |
BernoulliDistribution
BernoulliDistribution implementation
|
class |
BinomialDistribution
This Op generates a binomial distribution
|
class |
BinomialDistributionEx
This Op generates a binomial distribution
|
class |
Choice
This Op implements the numpy.choice method:
it fills Z from the source array, following the probability given for each source element
|
class |
DropOut
DropOut implementation as Op
|
class |
DropOutInverted
Inverted DropOut implementation as Op
|
class |
GaussianDistribution
This Op generates a normal distribution with the provided mean and stddev
|
class |
Linspace
Linspace/arange Op implementation; generates a from..to sequence within Z
|
class |
LogNormalDistribution
This Op generates a log-normal distribution with the provided mean and stddev
|
class |
ProbablisticMerge |
class |
TruncatedNormalDistribution
This Op generates a truncated normal distribution with the provided mean and stddev
|
class |
UniformDistribution |
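Several random ops above implement inverted dropout. A plain-Java sketch of that scheme (illustrative, not the ND4J implementation): each element is zeroed with probability p, and survivors are scaled by 1/(1-p) so the expected activation is unchanged and no rescaling is needed at inference time.

```java
import java.util.Random;

// Inverted dropout: drop with probability p, scale survivors by 1/(1-p).
public class InvertedDropout {
    static double[] apply(double[] x, double p, Random rng) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++)
            z[i] = rng.nextDouble() < p ? 0.0 : x[i] / (1.0 - p);
        return z;
    }
    public static void main(String[] args) {
        double[] x = new double[10000];
        java.util.Arrays.fill(x, 1.0);
        double[] z = apply(x, 0.5, new Random(42));
        double mean = 0;
        for (double v : z) mean += v;
        mean /= z.length;
        assert Math.abs(mean - 1.0) < 0.05; // expectation is preserved
    }
}
```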
Modifier and Type | Method and Description |
---|---|
protected String |
OpProfiler.getOpClass(Op op)
This method returns the op's class name
|
void |
OpProfiler.OpProfilerListener.invoke(Op op) |
void |
OpProfiler.processOpCall(Op op)
This method tracks op calls
|
void |
OpProfiler.processOpCall(Op op,
DataBuffer... tadBuffers) |
void |
OpProfiler.processStackCall(Op op,
long timeStart)
This method builds
|
void |
OpProfiler.timeOpCall(Op op,
long startTime) |
Modifier and Type | Method and Description |
---|---|
void |
StringAggregator.putTime(String key,
Op op,
long timeSpent) |
Copyright © 2018. All rights reserved.