Modifier and Type | Method and Description |
---|---|
DifferentialFunction |
DifferentialFunction.dup()
Duplicate this function
|
Modifier and Type | Method and Description |
---|---|
DifferentialFunction |
SameDiff.getOpById(String id)
Get the function by its own name, as returned by
DifferentialFunction#getOwnName() |
DifferentialFunction |
SameDiff.getVariableOutputOp(String variableName)
Get the differential function (if any) that this variable is the output for
|
DifferentialFunction[] |
SameDiff.ops()
Get an array of differential functions that have been defined for this SameDiff instance
|
Modifier and Type | Method and Description |
---|---|
void |
SameDiff.addArgsFor(SDVariable[] variables,
DifferentialFunction function)
Adds incoming arguments for the specified differential function to the graph
|
void |
SameDiff.addArgsFor(String[] variables,
DifferentialFunction function)
Adds incoming arguments for the specified differential function to the graph
|
void |
SameDiff.addOutgoingFor(SDVariable[] variables,
DifferentialFunction function)
Adds outgoing arguments to the graph for the specified DifferentialFunction
Also checks for input arguments and updates the graph adding an appropriate edge when the full graph is declared.
|
void |
SameDiff.addOutgoingFor(String[] varNames,
DifferentialFunction function)
Adds outgoing arguments to the graph for the specified DifferentialFunction
Also checks for input arguments and updates the graph adding an appropriate edge when the full graph is declared.
|
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function)
Generate the variables based on the given input op
and return the output variable names.
|
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function,
String baseName,
boolean isImport)
Generate the variables based on the given input op and return the output variable names.
|
String[] |
SameDiff.getInputsForOp(DifferentialFunction function)
Returns the name(s) of the inputs for the given function
|
SDVariable[] |
SameDiff.getInputVariablesForOp(DifferentialFunction function)
Get the input variable(s) for the specified differential function
|
String[] |
SameDiff.getOutputsForOp(DifferentialFunction function)
Returns the name(s) of the outputs for the given function
|
SDVariable[] |
SameDiff.getOutputVariablesForOp(DifferentialFunction function)
Get the output variable(s) for the specified differential function
|
boolean |
SameDiff.hasArgs(DifferentialFunction function)
Returns true if this function already has defined arguments
|
void |
SameDiff.putOpForId(String id,
DifferentialFunction function)
Put the function for the given id
|
void |
SameDiff.removeArgFromOp(String varName,
DifferentialFunction function)
Remove an argument for a function.
|
void |
SameDiff.replaceArgFor(int i,
SDVariable newArg,
DifferentialFunction function)
Replaces the argument at i with newArg for function
Does not use (or remove) ArgumentInterceptor stuff
|
Modifier and Type | Field and Description |
---|---|
protected DifferentialFunction |
SameDiffOp.op |
Modifier and Type | Method and Description |
---|---|
INDArray[] |
InferenceSession.doExec(DifferentialFunction op,
AbstractSession.FrameIter outputFrameIter,
Set<AbstractSession.VarId> opInputs,
Set<AbstractSession.VarId> allIterInputs,
Set<String> constAndPhInputs) |
INDArray[] |
InferenceSession.getOutputsHelperTensorArrayOps(DifferentialFunction op,
AbstractSession.FrameIter outputFrameIter,
Set<AbstractSession.VarId> opInputs,
Set<AbstractSession.VarId> allIterInputs)
Forward pass for TensorArray ops
|
Modifier and Type | Method and Description |
---|---|
static DifferentialFunction |
FlatBuffersMapper.cloneViaSerialize(SameDiff sd,
DifferentialFunction df) |
static DifferentialFunction |
FlatBuffersMapper.cloneViaSerialize(SameDiff sd,
DifferentialFunction df,
Map<String,Integer> nameToIdxMap) |
static DifferentialFunction |
FlatBuffersMapper.fromFlatNode(FlatNode fn) |
Modifier and Type | Method and Description |
---|---|
static int |
FlatBuffersMapper.asFlatNode(SameDiff sameDiff,
DifferentialFunction node,
com.google.flatbuffers.FlatBufferBuilder bufferBuilder,
List<SDVariable> variables,
Map<String,Integer> reverseMap,
Map<String,Integer> forwardMap,
Map<String,Integer> framesMap,
AtomicInteger idCounter,
Integer id) |
static DifferentialFunction |
FlatBuffersMapper.cloneViaSerialize(SameDiff sd,
DifferentialFunction df) |
static DifferentialFunction |
FlatBuffersMapper.cloneViaSerialize(SameDiff sd,
DifferentialFunction df,
Map<String,Integer> nameToIdxMap) |
Modifier and Type | Field and Description |
---|---|
protected DifferentialFunction |
SubGraph.rootNode |
Modifier and Type | Field and Description |
---|---|
protected List<DifferentialFunction> |
SubGraph.childNodes |
Modifier and Type | Method and Description |
---|---|
List<DifferentialFunction> |
SubGraph.allFunctionsInSubgraph() |
Modifier and Type | Method and Description |
---|---|
SubGraph |
SubGraphPredicate.getSubGraph(SameDiff sd,
DifferentialFunction rootFn)
Get the SubGraph that matches the predicate
|
boolean |
SubGraph.inSubgraph(DifferentialFunction df) |
boolean |
SubGraphPredicate.matches(SameDiff sameDiff,
DifferentialFunction rootFn)
Determine if the subgraph, starting with the root function, matches the predicate
|
abstract boolean |
OpPredicate.matches(SameDiff sameDiff,
DifferentialFunction function) |
Modifier and Type | Method and Description |
---|---|
DifferentialFunction |
DifferentialFunctionClassHolder.getInstance(String name) |
DifferentialFunction |
DifferentialFunctionClassHolder.getOpWithOnnxName(String onnxName) |
DifferentialFunction |
DifferentialFunctionClassHolder.getOpWithTensorflowName(String tensorflowName)
Get the op with the given TensorFlow name
|
Modifier and Type | Method and Description |
---|---|
static Map<String,DifferentialFunction> |
ImportClassMapping.getOnnxOpMappingFunctions() |
static Map<String,DifferentialFunction> |
ImportClassMapping.getOpNameMapping() |
Map<String,DifferentialFunction> |
DifferentialFunctionClassHolder.getTensorFlowNames() |
static Map<String,DifferentialFunction> |
ImportClassMapping.getTFOpMappingFunctions() |
Modifier and Type | Method and Description |
---|---|
Map<String,Field> |
DifferentialFunctionClassHolder.getFieldsForFunction(DifferentialFunction function)
Get the fields for a given
DifferentialFunction |
Modifier and Type | Method and Description |
---|---|
void |
AttributeAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on)
Map the attribute using the specified field
on the specified function on
adapting the given input type to
the type of the field for the specified function.
|
Modifier and Type | Method and Description |
---|---|
void |
StringNotEqualsAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
ConditionalFieldValueNDArrayShapeAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
NDArrayShapeAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
StringEqualsAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
BooleanAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
DataTypeAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
ConditionalFieldValueIntIndexArrayAdapter.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
IntArrayIntIndexAdpater.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
void |
SizeThresholdIntArrayIntIndexAdpater.mapAttributeFor(Object inputAttributeValue,
Field fieldFor,
DifferentialFunction on) |
Modifier and Type | Method and Description |
---|---|
static void |
TFGraphMapper.initFunctionFromProperties(String mappedTfName,
DifferentialFunction on,
Map<String,AttrValue> attributesForNode,
NodeDef node,
GraphDef graph)
Deprecated.
To be removed
|
Modifier and Type | Class and Description |
---|---|
class |
BaseBroadcastBoolOp |
class |
BaseBroadcastOp |
class |
BaseIndexAccumulation
Index based reduction algo
|
class |
BaseOp
Base op.
|
class |
BaseReduceBoolOp |
class |
BaseReduceFloatOp |
class |
BaseReduceLongOp |
class |
BaseReduceOp
Base class for accumulation, initiates the initial entry
with respect to the child class.
|
class |
BaseReduceSameOp |
class |
BaseScalarBoolOp
Base scalar boolean operation
|
class |
BaseScalarOp
Base scalar operation
|
class |
BaseTransformAnyOp |
class |
BaseTransformBoolOp |
class |
BaseTransformFloatOp |
class |
BaseTransformOp
A base op for basic getters and setters
|
class |
BaseTransformSameOp |
class |
BaseTransformStrictOp |
class |
DynamicCustomOp
Basic implementation for CustomOp
|
class |
NoOp |
Modifier and Type | Class and Description |
---|---|
class |
AdjustContrast |
class |
AdjustContrastV2 |
class |
AdjustHue |
class |
AdjustSaturation |
class |
BarnesEdgeForces |
class |
BarnesHutGains
This op calculates gains - data used internally by Barnes-Hut-TSNE algorithm.
|
class |
BarnesHutSymmetrize |
class |
BaseAdjustContrast |
class |
BetaInc |
class |
BitCast |
class |
CompareAndBitpack |
class |
DivideNoNan |
class |
DrawBoundingBoxes |
class |
FakeQuantWithMinMaxVarsPerChannel |
class |
Flatten
This op takes an arbitrary number of arrays as input, and returns a single "flattened" vector
|
class |
FusedBatchNorm |
class |
KnnMinDistance |
class |
MatrixBandPart |
class |
Polygamma |
class |
RandomCrop |
class |
Roll |
class |
SpTreeCell |
class |
ToggleBits |
Modifier and Type | Class and Description |
---|---|
class |
BiasAdd
Bias addition operation.
|
class |
BiasAddGrad |
class |
BroadcastAddOp |
class |
BroadcastAMax
Broadcast Abs Max comparison op
|
class |
BroadcastAMin
Broadcast Abs Min comparison op
|
class |
BroadcastCopyOp |
class |
BroadcastDivOp |
class |
BroadcastGradientArgs |
class |
BroadcastMax
Broadcast Max comparison op
|
class |
BroadcastMin
Broadcast Min comparison op
|
class |
BroadcastMulOp |
class |
BroadcastRDivOp
Broadcast reverse divide
|
class |
BroadcastRSubOp |
class |
BroadcastSubOp |
class |
BroadcastTo
BroadcastTo op: given 2 input arrays, content X and shape Y, broadcast X to the shape specified by the content of Y.
|
Modifier and Type | Class and Description |
---|---|
class |
BroadcastEqualTo |
class |
BroadcastGreaterThan |
class |
BroadcastGreaterThanOrEqual |
class |
BroadcastLessThan |
class |
BroadcastLessThanOrEqual |
class |
BroadcastNotEqual |
Modifier and Type | Class and Description |
---|---|
class |
Select |
class |
Where |
class |
WhereNumpy |
Modifier and Type | Class and Description |
---|---|
class |
BaseCompatOp |
class |
Enter |
class |
Exit |
class |
LoopCond |
class |
Merge |
class |
NextIteration |
class |
StopGradient |
class |
Switch
Switch op forwards input to one of two outputs based on the value of a predicate
|
Modifier and Type | Class and Description |
---|---|
class |
BaseGridOp |
class |
FreeGridOp
Simple GridOp that operates on an arbitrary number of Ops that have no relations between them.
|
Modifier and Type | Class and Description |
---|---|
class |
CropAndResize
CropAndResize Op
|
class |
ExtractImagePatches
Extract image patches op - a sliding window operation over 4d activations that puts the
output images patches into the depth dimension
|
class |
NonMaxSuppression
Non max suppression
|
class |
NonMaxSuppressionV3
Non max suppression
|
class |
ResizeBicubic
ResizeBicubic op wrapper
|
class |
ResizeBilinear
ResizeBilinear op wrapper
|
class |
ResizeNearestNeighbor
ResizeNearestNeighbor op wrapper
|
Modifier and Type | Class and Description |
---|---|
class |
FirstIndex
Calculate the index of the first element matching a given condition over a vector
|
class |
IAMax
Calculate the index of the max absolute value over a vector
|
class |
IAMin
Calculate the index of the min absolute value over a vector
|
class |
IMax
Calculate the index
of max value over a vector
|
class |
IMin
Calculate the index of min value over a vector
|
class |
LastIndex
Calculate the index of the last element matching a given condition over a vector
|
Modifier and Type | Class and Description |
---|---|
class |
ArgMax |
class |
ArgMin
ArgMin function
|
Modifier and Type | Class and Description |
---|---|
class |
ExternalErrorsFunction |
Modifier and Type | Class and Description |
---|---|
class |
AvgPooling2D
Average Pooling2D operation
|
class |
AvgPooling3D
Average Pooling3D operation
|
class |
BatchNorm
BatchNorm operation
|
class |
BatchNormDerivative
BatchNormDerivative operation
|
class |
Col2Im
Col2Im operation.
|
class |
Conv1D
Conv1D operation
|
class |
Conv1DDerivative
Conv1D Backprop operation
|
class |
Conv2D
Conv2D operation
|
class |
Conv2DDerivative
Conv2DDerivative operation
|
class |
Conv3D
Conv3D operation
|
class |
Conv3DDerivative
Conv3DDerivative operation
|
class |
DeConv2D
DeConv2D operation
|
class |
DeConv2DDerivative
DeConv2DDerivative operation
|
class |
DeConv2DTF
DeConv2D operation, TF-wrapper
|
class |
DeConv3D
DeConv3D operation
|
class |
DeConv3DDerivative
DeConv3DDerivative operation
|
class |
DeConv3DTF
DeConv3D operation, TF-wrapper
|
class |
DepthToSpace
Inverse operation to SpaceToDepth.
|
class |
DepthwiseConv2D
Depthwise Conv2D operation
|
class |
Im2col
Im2col operation
|
class |
Im2colBp
Im2col operation
|
class |
LocalResponseNormalization
LocalResponseNormalization operation
|
class |
LocalResponseNormalizationDerivative
LocalResponseNormalizationDerivative operation
|
class |
MaxPooling2D
Max Pooling2D operation
|
class |
MaxPooling3D
Max Pooling3D operation
|
class |
MaxPoolWithArgmax |
class |
Pooling2D
Pooling2D operation
|
class |
Pooling2DDerivative
Pooling2DDerivative operation
|
class |
Pooling3D
Pooling3D operation
|
class |
Pooling3DDerivative
Pooling3DDerivative operation
|
class |
SConv2D
Separable convolution 2D operation
|
class |
SConv2DDerivative
SConv2DDerivative operation
|
class |
SpaceToDepth
This operation takes 4D array in, in either NCHW or NHWC format, and moves data from spatial dimensions (HW)
to channels (C) for given blockSize
|
class |
Upsampling2d
Upsampling operation
|
class |
Upsampling2dDerivative
UpsamplingDerivative operation
|
Modifier and Type | Class and Description |
---|---|
class |
GRUCell
GRU cell for RNNs
|
class |
LSTMBlockCell
LSTM Block cell - represents forward pass for a single time step of an LSTM RNN.
Same operation used internally in op/layer LSTMLayer; implements the LSTM layer operation with optional peephole connections. |
class |
LSTMCell
LSTM cell
|
class |
LSTMLayer
LSTM layer implemented as a single operation.
|
class |
SRU
Simple recurrent unit
|
class |
SRUCell
A simple recurrent unit cell.
|
Modifier and Type | Class and Description |
---|---|
class |
AbsoluteDifferenceLoss
Absolute difference loss
|
class |
BaseLoss |
class |
CosineDistanceLoss
Cosine distance loss
|
class |
HingeLoss
Hinge loss
|
class |
HuberLoss
Huber loss
|
class |
L2Loss
L2 loss op wrapper
|
class |
LogLoss
Binary log loss, or cross entropy loss:
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon)) |
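The quoted formula can be computed directly in plain Java. This is an illustrative sketch of the formula only, not the ND4J op; the `logLoss` helper name and the epsilon value are assumptions.

```java
public class LogLossSketch {
    // -1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon)
    //                         + (1 - labels[i]) * log(1 - predictions[i] + epsilon))
    static double logLoss(double[] labels, double[] predictions, double epsilon) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += labels[i] * Math.log(predictions[i] + epsilon)
                 + (1.0 - labels[i]) * Math.log(1.0 - predictions[i] + epsilon);
        }
        return -sum / labels.length; // average over examples, negated
    }

    public static void main(String[] args) {
        // Confident label 1 predicted 0.9, label 0 predicted 0.1: low loss
        double loss = logLoss(new double[]{1, 0}, new double[]{0.9, 0.1}, 1e-7);
        System.out.println(loss); // approx 0.105
    }
}
```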
class |
LogPoissonLoss
Log Poisson loss
Note: This expects that the input/predictions are log(x) not x!
|
class |
MeanPairwiseSquaredErrorLoss
Mean Pairwise Squared Error Loss
|
class |
MeanSquaredErrorLoss
Mean squared error loss
|
class |
SigmoidCrossEntropyLoss
Sigmoid cross entropy loss with logits
|
class |
SoftmaxCrossEntropyLoss
Softmax cross entropy loss
|
class |
SoftmaxCrossEntropyWithLogitsLoss
Softmax cross entropy loss with Logits
|
class |
SparseSoftmaxCrossEntropyLossWithLogits
Sparse softmax cross entropy loss with logits.
|
class |
WeightedCrossEntropyLoss
Weighted cross entropy loss with logits
|
Modifier and Type | Class and Description |
---|---|
class |
AbsoluteDifferenceLossBp
Absolute difference loss backprop
|
class |
BaseLossBp |
class |
CosineDistanceLossBp
Cosine distance loss
|
class |
HingeLossBp
Hinge loss
|
class |
HuberLossBp
Huber loss backprop
|
class |
LogLossBp
Binary log loss, or cross entropy loss:
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon)) |
class |
LogPoissonLossBp
Log Poisson loss backprop
|
class |
MeanPairwiseSquaredErrorLossBp
Mean Pairwise Squared Error Loss Backprop
|
class |
MeanSquaredErrorLossBp
Mean squared error loss
|
class |
SigmoidCrossEntropyLossBp
Sigmoid cross entropy loss with logits
|
class |
SoftmaxCrossEntropyLossBp
Softmax cross entropy loss
|
class |
SoftmaxCrossEntropyWithLogitsLossBp
Softmax cross entropy loss with Logits
|
class |
SparseSoftmaxCrossEntropyLossWithLogitsBp
Sparse softmax cross entropy loss with logits.
|
Modifier and Type | Class and Description |
---|---|
class |
BaseMetaOp |
class |
InvertedPredicateMetaOp
This MetaOp covers the case when Op A and Op B both use linear memory access
You're NOT supposed to directly call this op.
|
class |
PostulateMetaOp
You're NOT supposed to directly call this op.
|
class |
PredicateMetaOp
This MetaOp covers the case when Op A and Op B both use linear memory access
You're NOT supposed to directly call this op.
|
class |
ReduceMetaOp
This is a special case of PredicateOp, where opB can only be a ReduceOp, Variance or Reduce3 op
|
Modifier and Type | Class and Description |
---|---|
class |
CbowRound |
class |
SkipGramRound |
Modifier and Type | Class and Description |
---|---|
class |
HashCode
This is hashCode op wrapper.
|
class |
Mmul
Matrix multiplication/dot product
|
class |
MmulBp
Matrix multiplication/dot product Backprop
|
class |
Moments |
class |
NormalizeMoments |
class |
SufficientStatistics
Sufficient statistics: returns 3 or 4 output arrays:
If shift is not provided: count, sum of elements, sum of squares
If shift is provided: count, sum of elements, sum of squares, shift
|
class |
TensorMmul
TensorMmul
|
class |
ZeroFraction
Compute the fraction of zero elements
|
Modifier and Type | Class and Description |
---|---|
class |
All
Boolean AND accumulation
|
class |
Any
Boolean OR accumulation
|
class |
IsInf
IsInf function
|
class |
IsNaN
IsNaN function
|
Modifier and Type | Class and Description |
---|---|
class |
BaseReductionBp |
class |
CumProdBp
Backprop op for cumulative product operation
|
class |
CumSumBp
Backprop op for cumulative sum operation
|
class |
DotBp
Backprop op for Dot pairwise reduction operation
|
class |
MaxBp
Backprop op for Max reduction operation
|
class |
MeanBp
Backprop op for Mean reduction operation
|
class |
MinBp
Backprop op for Min reduction operation
|
class |
Norm1Bp
Backprop op for Norm1 reduction operation
|
class |
Norm2Bp
Backprop op for Norm2 reduction operation
|
class |
NormMaxBp
Backprop op for Norm Max reduction operation
|
class |
ProdBp
Backprop op for Product reduction operation
|
class |
SquaredNormBp
Backprop op for squared norm (sum_i x_i^2) reduction operation
|
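For intuition, the squared-norm backprop reduces to d(sum_i x_i^2)/dx_i = 2*x_i, scaled by the upstream gradient. A plain-Java sketch of that formula (the helper name is illustrative, not the ND4J API):

```java
public class SquaredNormBpSketch {
    // Gradient of sum_i x_i^2 w.r.t. each x_i, scaled by upstream gradient dL/dOut
    static double[] squaredNormBackprop(double[] x, double dLdOut) {
        double[] grad = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            grad[i] = 2.0 * x[i] * dLdOut; // d(x_i^2)/dx_i = 2 * x_i
        }
        return grad;
    }

    public static void main(String[] args) {
        double[] g = squaredNormBackprop(new double[]{1.0, -2.0, 3.0}, 1.0);
        System.out.println(java.util.Arrays.toString(g)); // [2.0, -4.0, 6.0]
    }
}
```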
class |
StandardDeviationBp
Backprop op for standard deviation reduction operation
|
class |
SumBp
Backprop op for Sum reduction operation
|
class |
VarianceBp
Backprop op for variance reduction operation
|
Modifier and Type | Class and Description |
---|---|
class |
BatchMmul
Batched matrix multiplication.
|
class |
LogSumExp
LogSumExp - returns the log of the sum of exponentials (see https://en.wikipedia.org/wiki/LogSumExp)
|
Modifier and Type | Class and Description |
---|---|
class |
AMean
Calculate the absolute mean of the given vector
|
class |
Bias
Calculate a bias
|
class |
Entropy
Entropy Op - returns the entropy (information gain, or uncertainty of a random variable).
|
class |
LogEntropy
Log Entropy Op - returns the log entropy (information gain, or uncertainty of a random variable).
|
class |
Mean
Calculate the mean of the vector
|
class |
Norm1
Sum of absolute values
|
class |
Norm2
Sum of squared values (real)
Sum of squared complex modulus (complex)
|
class |
NormMax
The max absolute value
|
class |
ShannonEntropy
Non-normalized Shannon Entropy Op - returns the entropy (information gain, or uncertainty of a random variable).
|
class |
SquaredNorm
Squared norm (sum_i x_i^2) reduction operation
|
Modifier and Type | Class and Description |
---|---|
class |
CountNonZero
Count the number of non-zero elements
|
class |
CountZero
Count the number of zero elements
|
class |
MatchCondition
This operation returns the number of elements matching a specified condition
|
Modifier and Type | Class and Description |
---|---|
class |
AMax
Calculate the absolute max over a vector
|
class |
AMin
Calculate the absolute minimum over a vector
|
class |
ASum
Absolute sum the components
|
class |
Max
Calculate the max over an array
|
class |
Min
Calculate the min over an array
|
class |
Prod
Prod the components
|
class |
Sum
Sum the components
|
Modifier and Type | Class and Description |
---|---|
class |
BaseReduce3Op
Base class for reduce3 (pairwise reduction) ops such as distances
|
class |
CosineDistance
Cosine distance
Note that you need to initialize
a scaling constant equal to the norm2 of the
vector
|
class |
CosineSimilarity
Cosine similarity
Note that you need to initialize
a scaling constant equal to the norm2 of the
vector
|
class |
Dot
Dot product.
|
class |
EqualsWithEps
Operation for fast INDArrays equality checks
|
class |
EuclideanDistance
Euclidean distance
|
class |
HammingDistance
Hamming distance (simple)
|
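The "simple" Hamming distance above is just the count of positions at which two equal-length sequences differ. A plain-Java sketch (illustrative helper, not the ND4J op itself):

```java
public class HammingSketch {
    // Number of positions at which the two equal-length arrays differ
    static int hamming(int[] a, int[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) d++;
        }
        return d;
    }

    public static void main(String[] args) {
        // Differs at indices 1 and 3
        System.out.println(hamming(new int[]{1, 0, 1, 1}, new int[]{1, 1, 1, 0})); // 2
    }
}
```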
class |
JaccardDistance
Jaccard distance (dissimilarity)
|
class |
ManhattanDistance
Manhattan distance
|
Modifier and Type | Class and Description |
---|---|
class |
LeakyReLU
Leaky Rectified linear unit.
|
class |
LogX
Log on arbitrary base op
|
class |
Pow
Pow function
|
class |
PowDerivative
Pow derivative
z = n * x ^ (n-1)
|
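The PowDerivative formula z = n * x^(n-1) can be sketched directly (helper name is illustrative, not the ND4J API):

```java
public class PowDerivativeSketch {
    // z = n * x^(n-1): the derivative of x^n with respect to x
    static double powDerivative(double x, double n) {
        return n * Math.pow(x, n - 1);
    }

    public static void main(String[] args) {
        System.out.println(powDerivative(2.0, 3.0)); // 12.0: d(x^3)/dx at x=2 is 3 * 2^2
    }
}
```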
class |
PRelu
Parameterized ReLU op
|
class |
RectifiedLinear
Rectified linear units
|
class |
RectifiedLinearDerivative |
class |
Relu6
Rectified linear unit 6, i.e. min(max(input, 0), 6)
|
class |
ReplaceNans
Element-wise "Replace NaN" implementation as Op
|
class |
ScalarAdd
Scalar addition
|
class |
ScalarDivision
Scalar division
|
class |
ScalarFMod
Scalar floating-point remainder (fmod aka 'floormod')
|
class |
ScalarMax
Scalar max operation.
|
class |
ScalarMin
Scalar min operation.
|
class |
ScalarMultiplication
Scalar multiplication
|
class |
ScalarRemainder
Scalar floating-point remainder
|
class |
ScalarReverseDivision
Scalar reverse division
|
class |
ScalarReverseSubtraction
Scalar reverse subtraction
|
class |
ScalarSet
Scalar set operation.
|
class |
ScalarSubtraction
Scalar subtraction
|
class |
Step
Unit step function.
|
Modifier and Type | Class and Description |
---|---|
class |
ScalarAnd
Return a binary (0 or 1): logical AND with a number
|
class |
ScalarEps
Return a binary (0 or 1) when within epsilon of a number
|
class |
ScalarEquals
Return a binary (0 or 1) when equal to a number
|
class |
ScalarGreaterThan
Return a binary (0 or 1) when greater than a number
|
class |
ScalarGreaterThanOrEqual
Return a binary (0 or 1) when greater than or equal to a number
|
class |
ScalarLessThan
Return a binary (0 or 1) when less than a number
|
class |
ScalarLessThanOrEqual
Return a binary (0 or 1) when less than
or equal to a number
|
class |
ScalarNot
Return a binary (0 or 1): logical NOT of the input
|
class |
ScalarNotEquals
Return a binary (0 or 1) when not equal to a number
|
class |
ScalarOr
Return a binary (0 or 1): logical OR with a number
|
class |
ScalarSetValue
Scalar value set operation.
|
class |
ScalarXor
Return a binary (0 or 1): logical XOR with a number
|
Modifier and Type | Class and Description |
---|---|
class |
ScatterAdd
Created by farizrahman4u on 3/23/18.
|
class |
ScatterDiv
Created by farizrahman4u on 3/23/18.
|
class |
ScatterMax |
class |
ScatterMin |
class |
ScatterMul
Created by farizrahman4u on 3/23/18.
|
class |
ScatterNd
Scatter ND operation
|
class |
ScatterNdAdd
Scatter ND add operation
|
class |
ScatterNdSub
Scatter ND subtract operation
|
class |
ScatterNdUpdate
Scatter ND update operation
|
class |
ScatterSub
Created by farizrahman4u on 3/23/18.
|
class |
ScatterUpdate
Scatter update op
|
Modifier and Type | Class and Description |
---|---|
class |
ApplyGradientDescent
Applies a gradient descent update to a variable
|
class |
BroadcastDynamicShape
Broadcast dynamic shape function
|
class |
Concat |
class |
ConfusionMatrix |
class |
Create
This operation creates a new, optionally nullified, array with a given shape, order and data type
|
class |
Cross
Pairwise cross-product of two tensors of the same shape.
|
class |
Diag
Computes a diagonal matrix of shape (n, n) from a vector of length n.
|
class |
DiagPart
Return the diagonal part of a tensor.
|
class |
ExpandDims
ExpandDims function
|
class |
Eye
Computes a batch of identity matrices of shape (numRows, numCols), returns a single tensor.
|
class |
Gather
Gather op
|
class |
GatherNd
GatherND op
|
class |
MergeAvg |
class |
MergeMax |
class |
MergeSum |
class |
MeshGrid |
class |
OneHot
Created by susaneraly on 3/14/18.
|
class |
OnesLike
OnesLike function - gives an output array with all values/entries being 1, with the same shape as the input.
|
class |
ParallelStack
Stacks n input tensors of the same shape into a single tensor of rank one greater than the inputs.
|
class |
Permute
Permute function
|
class |
Rank
Rank function
|
class |
ReductionShape |
class |
Repeat
Repeat function
|
class |
Reshape
Reshape function
|
class |
SequenceMask
Created by farizrahman4u on 3/28/18.
|
class |
Shape
Returns the shape of the input array.
|
class |
ShapeN
Returns the shape of N input array as N output arrays
|
class |
Size
Returns the size of the input as a rank 0 array
|
class |
SizeAt
Returns the size of the input along given dimension as a rank 0 array
|
class |
Slice
Slice function
|
class |
Split
Split op
|
class |
SplitV
SplitV op
|
class |
Squeeze |
class |
Stack
Stack operation.
|
class |
StridedSlice
Strided Slice function
|
class |
Tile
Tile function
|
class |
Transpose
Transpose function
|
class |
Unstack
Unstack op conversion
|
class |
ZerosLike
ZerosLike function - gives an output array with all values/entries being 0, with the same shape as the input.
|
Modifier and Type | Class and Description |
---|---|
class |
ConcatBp
Backprop op for concat
|
class |
SliceBp
Slice backprop function
|
class |
StridedSliceBp
Strided Slice backprop function
|
class |
TileBp
Tile backprop function
|
Modifier and Type | Class and Description |
---|---|
class |
BaseTensorOp |
class |
TensorArray |
class |
TensorArrayConcat |
class |
TensorArrayGather |
class |
TensorArrayRead |
class |
TensorArrayScatter |
class |
TensorArraySize |
class |
TensorArraySplit |
class |
TensorArrayWrite |
Modifier and Type | Class and Description |
---|---|
class |
StandardDeviation
Standard deviation (sqrt of variance)
|
class |
Variance
Variance with bias correction.
|
Modifier and Type | Class and Description |
---|---|
class |
Angle
Angle op for tensorflow import
Since ND4J currently only supports real arrays, by definition this always outputs 0 |
class |
Assert
Assertion op wrapper
|
class |
BaseDynamicTransformOp |
class |
BinCount
BinCount: counts the number of times each value appears in an integer array.
|
class |
CheckNumerics
CheckNumerics op wrapper
|
class |
Cholesky
Cholesky op wrapper
|
class |
Histogram
Histogram op wrapper
|
class |
HistogramFixedWidth
Histogram fixed width op
|
class |
IdentityN
IdentityN op wrapper
|
class |
MaxOut
Max out activation:
https://arxiv.org/pdf/1302.4389.pdf
|
class |
NthElement
NthElement op wrapper
|
class |
Pad
Pad op
|
class |
ReluLayer
Composed op: relu(mmul(X, W) + b)
|
Modifier and Type | Class and Description |
---|---|
class |
Assign
Identity function
|
class |
IsMax
[1, 2, 3, 1] -> [0, 0, 1, 0]
|
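The IsMax mapping shown above ([1, 2, 3, 1] -> [0, 0, 1, 0]) marks the maximum element with 1 and everything else with 0. A plain-Java sketch, assuming ties go to the first maximum (an assumption of this sketch, not a documented guarantee of the op):

```java
public class IsMaxSketch {
    // Mark the (first) maximum element with 1, all others with 0
    static int[] isMax(double[] x) {
        int best = 0;
        for (int i = 1; i < x.length; i++) {
            if (x[i] > x[best]) best = i;
        }
        int[] out = new int[x.length];
        out[best] = 1;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(isMax(new double[]{1, 2, 3, 1}))); // [0, 0, 1, 0]
    }
}
```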
Modifier and Type | Class and Description |
---|---|
class |
BooleanNot
Boolean NOT transform
|
class |
IsFinite
IsFinite function
|
class |
MatchConditionTransform
Match condition transform
|
Modifier and Type | Class and Description |
---|---|
class |
ClipByNorm |
class |
ClipByNormBp |
class |
ClipByValue |
Modifier and Type | Class and Description |
---|---|
class |
CompareAndReplace
Element-wise Compare-and-Replace implementation as Op
Basically this op does the same as Compare-and-Set, but op.X is checked against Condition instead
|
class |
CompareAndSet
Element-wise Compare-and-set implementation as Op
Please check javadoc to specific constructors, for detail information.
|
class |
Eps
Bit mask over the ndarrays as to whether
the components are equal or not
|
Modifier and Type | Class and Description |
---|---|
class |
ATan2
Arc Tangent elementwise function
|
class |
BatchToSpace
N-dimensional batch to space operation.
|
class |
BatchToSpaceND
N-dimensional batch to space operation.
|
class |
BitsHammingDistance |
class |
BitwiseAnd
Bit-wise AND operation, broadcastable
|
class |
BitwiseOr
Bit-wise OR operation, broadcastable
|
class |
BitwiseXor
Bit-wise XOR operation, broadcastable
|
class |
Choose
This op returns, based on the passed-in condition, the elements fulfilling that condition.
|
class |
CumProd |
class |
CumSum
Cumulative sum operation, optionally along dimension.
|
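The inclusive 1-D case of cumulative sum can be sketched in plain Java; the ND4J op also supports exclusive and reverse modes and arbitrary dimensions, which this sketch omits:

```java
public class CumSumSketch {
    // Inclusive cumulative sum along a 1-D array
    static double[] cumSum(double[] x) {
        double[] out = new double[x.length];
        double running = 0.0;
        for (int i = 0; i < x.length; i++) {
            running += x[i];
            out[i] = running; // out[i] = x[0] + ... + x[i]
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(cumSum(new double[]{1, 2, 3}))); // [1.0, 3.0, 6.0]
    }
}
```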
class |
CyclicRShiftBits
Element-wise roll operation, rolls bits to the right, >>
|
class |
CyclicShiftBits
Element-wise roll operation, rolls bits to the left, <<
|
class |
Dilation2D
Dilation2D op wrapper
|
class |
DotProductAttention
(optionally scaled) dot product attention
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p.
|
class |
DotProductAttentionBp
(optionally scaled) dot product attention Backprop
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p.
|
class |
DynamicPartition
Transforms a given input tensor into numPartitions partitions, as indicated by the indices in "partitions".
|
class |
DynamicStitch
Interleaves values from the input tensors into a single tensor, according to the given indices.
|
class |
EqualTo
Bit mask over the ndarrays as to whether
the components are equal or not
|
class |
FakeQuantWithMinMaxArgs
Fake quantization operation.
|
class |
FakeQuantWithMinMaxVars
Fake quantization operation.
|
class |
Fill
Fill an array of given "shape" with the provided "value", e.g.
|
class |
GreaterThan
Bit mask over the ndarrays as to whether
the components are greater than or not
|
class |
GreaterThanOrEqual
Bit mask over the ndarrays as to whether
the components are greater than or equal or not
|
class |
InTopK
In Top K op
|
class |
InvertPermutation
Inverse of index permutation.
|
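Inverting an index permutation means producing q such that q[p[i]] = i for every i. A plain-Java sketch (illustrative helper, not the ND4J op itself):

```java
public class InvertPermutationSketch {
    // For a permutation p of 0..n-1, the inverse q satisfies q[p[i]] = i
    static int[] invert(int[] perm) {
        int[] inv = new int[perm.length];
        for (int i = 0; i < perm.length; i++) {
            inv[perm[i]] = i;
        }
        return inv;
    }

    public static void main(String[] args) {
        // p = [2, 0, 1]: position 0 maps to 2, so inverse sends 2 back to 0, etc.
        System.out.println(java.util.Arrays.toString(invert(new int[]{2, 0, 1}))); // [1, 2, 0]
    }
}
```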
class |
IsNonDecreasing
This op takes 1 n-dimensional array as input,
and returns true if for every adjacent pair we have x[i] <= x[i+1].
|
class |
IsNumericTensor
This op takes 1 n-dimensional array as input, and returns true if input is a numeric array.
|
class |
IsStrictlyIncreasing
This op takes 1 n-dimensional array as input,
and returns true if for every adjacent pair we have x[i] < x[i+1].
|
class |
LayerNorm
Composed op: g*standardize(x) + b
Bias is optional, and can be set as null
|
class |
LayerNormBp
Composed op: g*standardize(x) + b
Bias is optional, and can be set as null
|
class |
LessThan
Bit mask over the ndarrays as to whether
the components are less than or not
|
class |
LessThanOrEqual
Bit mask over the ndarrays as to whether
the components are less than or equal or not
|
class |
ListDiff |
class |
LogicalAnd |
class |
LogicalNot |
class |
LogicalOr |
class |
LogicalXor |
class |
LogMatrixDeterminant
Log Matrix Determinant op
Given input with shape [..., N, N] output the log determinant for each sub-matrix.
|
class |
LogSoftMax
Log(softmax(X))
|
class |
MatrixDeterminant
Matrix Determinant op
Given input with shape [..., N, N] output the determinant for each sub-matrix.
|
class |
MatrixDiag |
class |
MatrixDiagPart |
class |
MatrixInverse
Matrix Inverse Function
|
class |
MatrixSetDiag |
class |
MirrorPad |
class |
MultiHeadDotProductAttention
(optionally scaled) multi head dot product attention
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp.
|
class |
MultiHeadDotProductAttentionBp
(optionally scaled) multi head dot product attention Backprop
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, pp.
|
class |
NotEqualTo
Not equal to function:
element-wise bit mask indicating whether two elements are unequal
|
class |
ParallelConcat |
class |
Reverse |
class |
ReverseSequence
Reverses variable-length slices along a given sequence dimension.
|
class |
ReverseV2
This is a compatibility op for ReverseV2
|
class |
RShiftBits
Element-wise shift operation, shift bits to the right, >>
|
class |
ShiftBits
Element-wise shift operation, shift bits to the left, <<
|
class |
SoftMax
Soft max function
row_maxes is a row vector (max for each row)
row_maxes = rowmaxes(input)
diff = exp(input - row_maxes)
output = diff / diff.rowSums()
Outputs a probability distribution.
|
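The max-subtraction step in the SoftMax description above is what makes the op numerically stable; a plain-Java sketch for a single row (illustrative, not the ND4J implementation):

```java
// Plain-Java sketch of numerically stable softmax for one row (illustrative, not
// the ND4J code): subtracting the row max before exp avoids overflow without
// changing the resulting probabilities.
public class SoftmaxSketch {
    public static double[] softmaxRow(double[] row) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : row) max = Math.max(max, v);
        double[] out = new double[row.length];
        double sum = 0;
        for (int i = 0; i < row.length; i++) {
            out[i] = Math.exp(row[i] - max);  // largest exponent is exp(0) = 1
            sum += out[i];
        }
        for (int i = 0; i < row.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        // A naive exp(1000) would overflow to infinity; the stable form does not.
        double[] p = softmaxRow(new double[]{1000, 1000});
    }
}
```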
class |
SpaceToBatch
N-dimensional space to batch operation.
|
class |
SpaceToBatchND
N-dimensional space to batch operation.
|
class |
Standardize |
class |
StandardizeBp |
class |
Svd
SVD - singular value decomposition
|
class |
ThresholdRelu
Threshold ReLU op.
|
class |
TopK
Top K op
|
class |
Trace
Matrix trace operation
|
class |
Unique |
class |
UniqueWithCounts |
class |
XwPlusB
Composed op: mmul (X, W) + b
|
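The composed op mmul(X, W) + b broadcasts the bias vector across every row of the matrix product; a plain-Java sketch (illustrative names, not the ND4J implementation):

```java
// Plain-Java sketch of mmul(X, W) + b with the bias broadcast across rows
// (illustrative, not the ND4J code): out[i][j] = sum_t x[i][t] * w[t][j] + b[j].
public class XwPlusBSketch {
    public static double[][] xwPlusB(double[][] x, double[][] w, double[] b) {
        int n = x.length, k = w.length, m = w[0].length;
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double acc = b[j];  // same bias added to every row
                for (int t = 0; t < k; t++) acc += x[i][t] * w[t][j];
                out[i][j] = acc;
            }
        }
        return out;
    }
}
```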
class |
Zeta
Element-wise Zeta function.
|
Modifier and Type | Class and Description |
---|---|
class |
SegmentMax
Segment max operation
|
class |
SegmentMean
Segment mean operation
|
class |
SegmentMin
Segment min operation
|
class |
SegmentProd
Segment product operation
|
class |
SegmentSum
Segment sum operation
|
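The Segment* ops above reduce together all data entries that share a segment id; the sorted variants require the segment ids to be non-decreasing and to start at 0. A plain-Java sketch of SegmentSum (illustrative, not the ND4J implementation):

```java
// Plain-Java sketch of a sorted segment reduction (here SegmentSum): entries with
// the same segment id are summed together. Assumes segmentIds is non-decreasing
// and starts at 0. Illustrative, not the ND4J code.
public class SegmentSumSketch {
    public static double[] segmentSum(double[] data, int[] segmentIds) {
        int numSegments = segmentIds[segmentIds.length - 1] + 1;
        double[] out = new double[numSegments];
        for (int i = 0; i < data.length; i++) {
            out[segmentIds[i]] += data[i];  // swap += for max/min/* to get the other ops
        }
        return out;
    }
}
```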
Modifier and Type | Class and Description |
---|---|
class |
Cast
Cast op wrapper.
|
Modifier and Type | Class and Description |
---|---|
class |
RSqrt
RSqrt function
|
class |
Sqrt
Sqrt function
|
Modifier and Type | Class and Description |
---|---|
class |
CubeBp
Cube backpropagation op - dL/dIn from in and dL/dOut
|
class |
CubeDerivative
Deprecated.
Use
CubeBp |
class |
DynamicPartitionBp
Backprop operation for dynamic partition
|
class |
EluBp
ELU backpropagation op - dL/dIn from in and dL/dOut
|
class |
GradientBackwardsMarker |
class |
HardSigmoidBp
Hard Sigmoid backpropagation op - dL/dIn from in and dL/dOut
|
class |
HardSigmoidDerivative
Deprecated.
Use
HardSigmoidBp |
class |
HardTanhBp
Hard Tanh backpropagation op - dL/dIn from in and dL/dOut
|
class |
HardTanhDerivative
Deprecated.
Use
HardTanhBp |
class |
LeakyReLUBp
LReLU backpropagation op - dL/dIn from in and dL/dOut
|
class |
LeakyReLUDerivative
Leaky ReLU derivative.
|
class |
LogSoftMaxDerivative |
class |
PReluBp
PRelu backpropagation op - dL/dIn from in and dL/dOut
|
class |
RationalTanhBp
Rational Tanh backpropagation op - dL/dIn from in and dL/dOut
|
class |
RationalTanhDerivative
Deprecated.
Use
RationalTanhBp |
class |
RectifiedTanhBp
Rectified Tanh backpropagation op - dL/dIn from in and dL/dOut
|
class |
RectifiedTanhDerivative
Deprecated.
Use
RectifiedTanhBp |
class |
Relu6Derivative
Derivative of Rectified linear unit 6, i.e. min(max(x, 0), 6)
|
class |
SeluBp
SELU backpropagation op - dL/dIn from in and dL/dOut
|
class |
SELUDerivative
Deprecated.
Use
SeluBp |
class |
SoftmaxBp
Softmax backpropagation op - dL/dIn from in and dL/dOut
|
class |
SoftPlusBp
SoftPlus backpropagation op - dL/dIn from in and dL/dOut
|
class |
SoftSignBp
SoftSign backpropagation op - dL/dIn from in and dL/dOut
|
class |
SoftSignDerivative
Deprecated.
Use
SoftSignBp |
class |
ThresholdReluBp
Threshold ReLU Backprop op - dL/dIn from in and dL/dOut
For
RectifiedLinear as well as ThresholdRelu . |
Modifier and Type | Class and Description |
---|---|
class |
BinaryMinimalRelativeError |
class |
BinaryRelativeError |
class |
RelativeError |
class |
Set
Set
|
Modifier and Type | Class and Description |
---|---|
class |
AddOp
Addition operation
|
class |
Axpy
Level 1 blas op Axpy as libnd4j native op
|
class |
CopyOp
Copy operation
|
class |
DivOp
Division operation
|
class |
FloorDivOp
Floor division operation
|
class |
FloorModOp
Floor mod
|
class |
FModOp
Floating-point mod
|
class |
MergeAddOp
Addition operation for n operands, called "mergeadd" in libnd4j
|
class |
ModOp
Modulo operation
|
class |
MulOp
Multiplication operation
|
class |
PowPairwise
Pairwise version of Pow
|
class |
RDivOp
Reverse Division operation
|
class |
RealDivOp
Real division operation
|
class |
RemainderOp
Floating-point remainder operation
|
class |
RSubOp
Reverse subtraction operation
|
class |
SquaredDifferenceOp
Squared difference operation, i.e. (x - y)^2
|
class |
SubOp
Subtraction operation
|
class |
TruncateDivOp
Truncated division operation
|
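The distinction between the floor-based ops (FloorDivOp, FloorModOp) and the truncation-based ops (TruncateDivOp, ModOp) mirrors the two integer-division conventions in Java itself. A sketch of the semantics (not the ND4J ops): truncation rounds toward zero, floor rounds toward negative infinity, and each pairs with a matching remainder.

```java
// Sketch of floor vs. truncated division semantics using Java's own operators
// (illustrative of the conventions, not the ND4J implementations).
public class DivSemanticsSketch {
    public static void main(String[] args) {
        System.out.println(-7 / 2);                // -3 : truncated division (toward zero)
        System.out.println(Math.floorDiv(-7, 2));  // -4 : floor division (toward -infinity)
        System.out.println(-7 % 2);                // -1 : remainder matching truncation
        System.out.println(Math.floorMod(-7, 2));  //  1 : remainder matching floor
    }
}
```

For non-negative operands the two conventions agree; they differ only when the signs of the operands differ.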
Modifier and Type | Class and Description |
---|---|
class |
AddBpOp
Addition backprop operation.
|
class |
BaseArithmeticBackpropOp
Base arithmetic backprop operation
|
class |
DivBpOp
Division backprop operation.
|
class |
FloorDivBpOp
Floor div backprop operation.
|
class |
FloorModBpOp
Floor mod backprop operation.
|
class |
ModBpOp
Modulo backprop operation.
|
class |
MulBpOp
Multiplication backprop operation.
|
class |
RDivBpOp
Reverse division backprop operation.
|
class |
RSubBpOp
Reverse subtraction backprop operation.
|
class |
SquaredDifferenceBpOp
Backprop op for squared difference operation, i.e. (x - y)^2
|
class |
SubBpOp
Subtraction backprop operation.
|
Modifier and Type | Class and Description |
---|---|
class |
And
Boolean AND pairwise transform
|
class |
Not
Boolean NOT pairwise transform
|
class |
Or
Boolean OR pairwise transform
|
class |
Xor
Boolean XOR pairwise transform
|
Modifier and Type | Class and Description |
---|---|
class |
Abs
Abs elementwise function
|
class |
Ceil
Ceiling elementwise function
|
class |
Cube
Cube (x^3) elementwise function
|
class |
Floor
Floor elementwise function
|
class |
Identity
Identity function
|
class |
Negative
Negative function
|
class |
OneMinus
1 - input
|
class |
Reciprocal
Element-wise reciprocal function: 1 / x
|
class |
Round
Rounding function
|
class |
Sign
Signum function
|
class |
Square
Square function (x ^ 2)
|
class |
TimesOneMinus
If x is input: output is x*(1-x)
|
Modifier and Type | Class and Description |
---|---|
class |
UnsortedSegmentMax
Unsorted segment max operation
|
class |
UnsortedSegmentMean
Unsorted segment mean operation
|
class |
UnsortedSegmentMin
Unsorted segment min operation
|
class |
UnsortedSegmentProd
Unsorted segment product operation
|
class |
UnsortedSegmentSqrtN
Unsorted segment sqrt(N) operation: segment sum divided by the square root of the segment count
|
class |
UnsortedSegmentSum
Unsorted segment sum operation
|
Modifier and Type | Class and Description |
---|---|
class |
SegmentMaxBp
Segment max backprop operation
|
class |
SegmentMeanBp
Segment mean backprop operation
|
class |
SegmentMinBp
Segment min backprop operation
|
class |
SegmentProdBp
Segment product backprop operation
|
class |
SegmentSumBp
Segment sum backprop operation
|
class |
UnsortedSegmentMaxBp
Unsorted segment max backprop operation
|
class |
UnsortedSegmentMeanBp
Unsorted segment mean backprop operation
|
class |
UnsortedSegmentMinBp
Unsorted segment min backprop operation
|
class |
UnsortedSegmentProdBp
Unsorted segment product backprop operation
|
class |
UnsortedSegmentSqrtNBp
Unsorted segment sqrt(n) backprop operation
|
class |
UnsortedSegmentSumBp
Unsorted segment sum backprop operation
|
Modifier and Type | Class and Description |
---|---|
class |
ACos
Arccosine elementwise function
|
class |
ACosh
ACosh elementwise function
|
class |
ASin
Arcsin elementwise function
|
class |
ASinh
Inverse hyperbolic sine (asinh) elementwise function
|
class |
ATan
Arc Tangent elementwise function
|
class |
ATanh
Inverse hyperbolic tangent (atanh) elementwise function
|
class |
Cos
Cosine elementwise function
|
class |
Cosh
Hyperbolic cosine elementwise function
|
class |
ELU
ELU: Exponential Linear Unit (alpha=1.0)
Introduced in paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (2015) https://arxiv.org/abs/1511.07289 |
class |
Erf
Gaussian error function (erf) function, which is defined as
|
class |
Erfc
Complementary Gaussian error function (erfc), defined as
|
class |
Exp
Element-wise exponential function
|
class |
Expm1
Element-wise exponential function minus 1, i.e. exp(x) - 1
|
class |
GELU
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 Note: This op implements both the sigmoid and tanh-based approximations; to use the sigmoid approximation (recommended) use precise=false; otherwise, use precise = true for the slower but marginally more accurate tanh version. |
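The two approximations mentioned in the GELU entry can be sketched in plain Java as follows (illustrative, not the ND4J implementation); both approximate x * Phi(x), where Phi is the standard normal CDF:

```java
// Plain-Java sketch of the two GELU approximations (illustrative, not the ND4J
// code). Both approximate x * Phi(x) for the standard normal CDF Phi.
public class GeluSketch {
    // Tanh-based ("precise") approximation:
    // 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    public static double geluTanh(double x) {
        return 0.5 * x * (1 + Math.tanh(Math.sqrt(2 / Math.PI) * (x + 0.044715 * x * x * x)));
    }

    // Sigmoid-based approximation: x * sigmoid(1.702 * x)
    public static double geluSigmoid(double x) {
        return x / (1 + Math.exp(-1.702 * x));
    }
}
```

For large positive x both forms approach x, and both are 0 at x = 0.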
class |
GELUDerivative
GELU derivative
|
class |
HardSigmoid
HardSigmoid function
|
class |
HardTanh
Hard tanh elementwise function
|
class |
Log
Log elementwise function
|
class |
Log1p
Log1p function
|
class |
LogSigmoid
LogSigmoid function
|
class |
Mish
Mish activation function
|
class |
MishDerivative
Mish derivative
|
class |
PreciseGELU
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 Note: This op implements both the sigmoid and tanh-based approximations; to use the sigmoid approximation (recommended) use precise=false; otherwise, use precise = true for the slower but marginally more accurate tanh version. |
class |
PreciseGELUDerivative
Precise (tanh-based) GELU derivative
|
class |
RationalTanh
Rational Tanh Approximation elementwise function, as described at https://github.com/deeplearning4j/libnd4j/issues/351
|
class |
RectifiedTanh
RectifiedTanh
Essentially max(0, tanh(x))
|
class |
Rint
Rint function
|
class |
SELU
SELU activation function
|
class |
SetRange
Set range to a particular set of values
|
class |
Sigmoid
Sigmoid function
|
class |
SigmoidDerivative
Deprecated.
|
class |
Sin
Sine elementwise function
|
class |
Sinh
Sinh function
|
class |
SoftPlus |
class |
SoftSign
Softsign element-wise activation function.
|
class |
Stabilize
Stabilization function, forces values to be within a range
|
class |
Swish
Swish function
|
class |
SwishDerivative
Swish derivative
|
class |
Tan
Tangent elementwise function
|
class |
TanDerivative
Tan Derivative elementwise function
|
class |
Tanh
Tanh elementwise function
|
class |
TanhDerivative
Deprecated.
Use
TanhDerivative . |
Modifier and Type | Class and Description |
---|---|
class |
RestoreV2 |
class |
SaveV2 |
Modifier and Type | Class and Description |
---|---|
class |
BaseRandomOp |
Modifier and Type | Class and Description |
---|---|
class |
RandomStandardNormal
This op is a wrapper for RandomNormal Op
|
Modifier and Type | Class and Description |
---|---|
class |
DistributionUniform
Uniform distribution wrapper
|
class |
RandomBernoulli
Random bernoulli distribution: p(x=1) = p, p(x=0) = 1-p
i.e., each output is 1 with probability p, otherwise 0.
|
class |
RandomExponential
Random exponential distribution: p(x) = lambda * exp(-lambda * x)
|
class |
RandomNormal
Random normal distribution
|
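The density listed for RandomExponential, p(x) = lambda * exp(-lambda * x), can be sampled by the standard inverse-CDF method; a plain-Java sketch of the math (not the ND4J code):

```java
import java.util.Random;

// Inverse-CDF sampling for the exponential density p(x) = lambda * exp(-lambda * x)
// (a sketch of the math behind RandomExponential, not the ND4J implementation):
// if u ~ Uniform(0, 1), then -ln(1 - u) / lambda is Exponential(lambda).
public class ExponentialSamplerSketch {
    public static double sample(Random rng, double lambda) {
        return -Math.log(1 - rng.nextDouble()) / lambda;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double lambda = 2.0, sum = 0;
        int n = 200_000;
        for (int i = 0; i < n; i++) sum += sample(rng, lambda);
        System.out.println(sum / n);  // close to the theoretical mean 1/lambda = 0.5
    }
}
```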
Modifier and Type | Class and Description |
---|---|
class |
AlphaDropOut
AlphaDropOut implementation as Op
|
class |
BernoulliDistribution
BernoulliDistribution implementation
|
class |
BinomialDistribution
This Op generates binomial distribution
|
class |
BinomialDistributionEx
This Op generates binomial distribution
|
class |
Choice
This Op implements the numpy.random.choice method
It fills Z from source, following probabilities for each source element
|
class |
DropOut
DropOut implementation as Op
|
class |
DropOutInverted
Inverted DropOut implementation as Op
|
class |
GaussianDistribution
This Op generates normal distribution over provided mean and stddev
|
class |
Linspace
Linspace/arange Op implementation, generates from..to distribution within Z
|
class |
LogNormalDistribution
This Op generates log-normal distribution over provided mean and stddev
|
class |
ProbablisticMerge |
class |
Range
Range Op implementation, generates from..to distribution within Z
|
class |
TruncatedNormalDistribution
This Op generates truncated normal distribution over provided mean and stddev
|
class |
UniformDistribution |
Copyright © 2019. All rights reserved.