Modifier and Type | Method and Description |
---|---|
INDArray[] | BasicGraphExecutioner.executeGraph(int id, SDVariable... variables) This method executes |
INDArray[] | GraphExecutioner.executeGraph(int id, SDVariable... variables) This method executes |
Modifier and Type | Method and Description |
---|---|
SDVariable | DifferentialFunctionFactory.abs(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.acos(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.acosh(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.add(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.add(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.addi(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.addi(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.adjustContrast(SDVariable in, SDVariable factor) |
SDVariable | DifferentialFunctionFactory.adjustContrastV2(SDVariable in, SDVariable factor) |
SDVariable | DifferentialFunctionFactory.all(SDVariable input, int... dimensions) |
SDVariable | DifferentialFunctionFactory.amax(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.amean(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.amin(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.and(SDVariable ix, SDVariable iy) |
SDVariable | DifferentialFunctionFactory.any(SDVariable input, int... dimensions) |
SDVariable | DifferentialFunction.arg() Return the first argument |
SDVariable | DifferentialFunction.arg(int num) Return the specified argument for this function |
SDVariable | DifferentialFunctionFactory.argmax(SDVariable in, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.argmin(SDVariable in, boolean keepDims, int... dimensions) |
SDVariable[] | DifferentialFunction.args() Return the arguments for a given function |
SDVariable | DifferentialFunctionFactory.asin(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.asinh(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.assign(SDVariable x, Number num) |
SDVariable | DifferentialFunctionFactory.assign(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.asum(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.atan(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.atan2(SDVariable y, SDVariable x) |
SDVariable | DifferentialFunctionFactory.atanh(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.avgPooling2d(SDVariable input, Pooling2DConfig pooling2DConfig) Average pooling 2d operation. |
SDVariable | DifferentialFunctionFactory.avgPooling3d(SDVariable input, Pooling3DConfig pooling3DConfig) Avg pooling 3d operation. |
SDVariable[] | DifferentialFunctionFactory.batchMmul(SDVariable[] matrices, boolean transposeA, boolean transposeB) |
SDVariable[] | DifferentialFunctionFactory.batchMmul(SDVariable[] matricesA, SDVariable[] matricesB) |
SDVariable[] | DifferentialFunctionFactory.batchMmul(SDVariable[] matricesA, SDVariable[] matricesB, boolean transposeA, boolean transposeB) |
SDVariable | DifferentialFunctionFactory.batchNorm(SDVariable input, SDVariable mean, SDVariable variance, SDVariable gamma, SDVariable beta, boolean applyGamma, boolean applyBeta, double epsilon, int... axis) Batch norm operation. |
SDVariable | DifferentialFunctionFactory.batchToSpace(SDVariable differentialFunction, int[] blocks, int[][] crops) |
SDVariable | DifferentialFunctionFactory.betainc(SDVariable a, SDVariable b, SDVariable x) |
SDVariable | DifferentialFunctionFactory.biasAdd(SDVariable input, SDVariable bias, boolean nchw) |
SDVariable[] | DifferentialFunctionFactory.biasAddBp(SDVariable input, SDVariable bias, SDVariable grad, boolean nchw) |
SDVariable | DifferentialFunctionFactory.bitCast(SDVariable in, SDVariable dataType) |
SDVariable | DifferentialFunctionFactory.bitwiseAnd(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.bitwiseHammingDist(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.bitwiseOr(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.bitwiseXor(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.cast(SDVariable toCast, DataType toType) |
SDVariable | DifferentialFunctionFactory.ceil(SDVariable x) |
SDVariable | DifferentialFunctionFactory.clipByNorm(SDVariable x, double clipValue) |
SDVariable | DifferentialFunctionFactory.clipByNorm(SDVariable x, double clipValue, int... dimensions) |
SDVariable | DifferentialFunctionFactory.clipByValue(SDVariable x, double clipValueMin, double clipValueMax) |
SDVariable | DifferentialFunctionFactory.col2Im(SDVariable input, Conv2DConfig config) |
SDVariable | DifferentialFunctionFactory.compareAndBitpack(SDVariable threshold) |
SDVariable | DifferentialFunctionFactory.concat(int dimension, SDVariable... inputs) |
SDVariable | DifferentialFunctionFactory.confusionMatrix(SDVariable labels, SDVariable pred, DataType dataType) |
SDVariable | DifferentialFunctionFactory.confusionMatrix(SDVariable labels, SDVariable pred, Integer numClasses) |
SDVariable | DifferentialFunctionFactory.confusionMatrix(SDVariable labels, SDVariable pred, Integer numClasses, SDVariable weights) |
SDVariable | DifferentialFunctionFactory.confusionMatrix(SDVariable labels, SDVariable pred, SDVariable weights) |
SDVariable | DifferentialFunctionFactory.conv1d(SDVariable input, SDVariable weights, Conv1DConfig conv1DConfig) Conv1d operation. |
SDVariable | DifferentialFunctionFactory.conv1d(SDVariable input, SDVariable weights, SDVariable bias, Conv1DConfig conv1DConfig) Conv1d operation. |
SDVariable | DifferentialFunctionFactory.conv2d(SDVariable[] inputs, Conv2DConfig conv2DConfig) Conv2d operation. |
SDVariable | DifferentialFunctionFactory.conv3d(SDVariable[] inputs, Conv3DConfig conv3DConfig) Conv3d operation. |
SDVariable | DifferentialFunctionFactory.cos(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.cosh(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.cosineDistance(SDVariable ix, SDVariable iy, int... dimensions) |
SDVariable | DifferentialFunctionFactory.cosineSimilarity(SDVariable iX, SDVariable i_y, int... dimensions) |
SDVariable | DifferentialFunctionFactory.countNonZero(SDVariable input, int... dimensions) |
SDVariable | DifferentialFunctionFactory.countZero(SDVariable input, int... dimensions) |
SDVariable | DifferentialFunctionFactory.create(String name, SDVariable shape, boolean initialize, DataType dataType) |
SDVariable | DifferentialFunctionFactory.create(String name, SDVariable shape, char order, boolean initialize, DataType dataType) |
SDVariable | DifferentialFunctionFactory.cross(SDVariable a, SDVariable b) |
SDVariable | DifferentialFunctionFactory.cube(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.cubeBp(SDVariable in, SDVariable epsilon) |
SDVariable | DifferentialFunctionFactory.cubeDerivative(SDVariable iX) Deprecated. |
SDVariable | DifferentialFunctionFactory.cumprod(SDVariable in, boolean exclusive, boolean reverse, int... axis) |
SDVariable | DifferentialFunctionFactory.cumprodBp(SDVariable in, SDVariable grad, boolean exclusive, boolean reverse, int... axis) |
SDVariable | DifferentialFunctionFactory.cumsum(SDVariable in, boolean exclusive, boolean reverse, int... axis) |
SDVariable | DifferentialFunctionFactory.cumsumBp(SDVariable in, SDVariable grad, boolean exclusive, boolean reverse, int... axis) |
SDVariable | DifferentialFunctionFactory.deconv2d(SDVariable[] inputs, DeConv2DConfig deconv2DConfig) Deconv2d operation. |
SDVariable | DifferentialFunctionFactory.deconv3d(SDVariable input, SDVariable weights, SDVariable bias, DeConv3DConfig config) |
SDVariable[] | DifferentialFunctionFactory.deconv3dDerivative(SDVariable input, SDVariable weights, SDVariable bias, SDVariable grad, DeConv3DConfig config) |
SDVariable | DifferentialFunctionFactory.depthToSpace(SDVariable differentialFunction, int blocksSize, String dataFormat) |
SDVariable | DifferentialFunctionFactory.depthWiseConv2d(SDVariable[] inputs, Conv2DConfig depthConv2DConfig) Depth-wise Conv2d operation. |
SDVariable | DifferentialFunctionFactory.diag(SDVariable sdVariable) |
SDVariable | DifferentialFunctionFactory.diagPart(SDVariable sdVariable) |
SDVariable | DifferentialFunctionFactory.dilation2D(SDVariable df, SDVariable weights, int[] strides, int[] rates, boolean isSameMode) |
SDVariable | DifferentialFunctionFactory.div(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.div(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.divi(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.divi(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.divideNoNan(SDVariable in1, SDVariable in2) |
SDVariable | DifferentialFunctionFactory.doRepeat(SDVariable func, SDVariable input) |
SDVariable | DifferentialFunctionFactory.dot(SDVariable x, SDVariable y, int... dimensions) |
SDVariable[] | DifferentialFunctionFactory.dotBp(SDVariable in1, SDVariable in2, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.dotProductAttention(SDVariable queries, SDVariable keys, SDVariable values, SDVariable mask, boolean scaled) |
SDVariable | DifferentialFunctionFactory.drawBoundingBoxes(SDVariable boxes, SDVariable colors) |
SDVariable | DifferentialFunctionFactory.dropout(SDVariable input, double p) |
SDVariable[] | DifferentialFunctionFactory.dynamicPartition(SDVariable differentialFunction, SDVariable partitions, int numPartitions) |
SDVariable[] | DifferentialFunctionFactory.dynamicPartitionBp(SDVariable input, SDVariable partitions, SDVariable[] grads, int numPartitions) |
SDVariable | DifferentialFunctionFactory.dynamicStitch(SDVariable[] indices, SDVariable[] differentialFunctions) |
SDVariable | DifferentialFunctionFactory.elu(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.eluBp(SDVariable in, SDVariable epsilon, double alpha) |
SDVariable | DifferentialFunctionFactory.enter(SDVariable x, String frameName) |
SDVariable | DifferentialFunctionFactory.enter(SDVariable x, String frameName, boolean isConstant) |
SDVariable | DifferentialFunctionFactory.entropy(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.eq(SDVariable iX, double i_y) |
SDVariable | DifferentialFunctionFactory.eq(SDVariable iX, SDVariable i_y) |
SDVariable | DifferentialFunctionFactory.eqi(SDVariable iX, double i_y) |
SDVariable | DifferentialFunctionFactory.erf(SDVariable differentialFunction) |
SDVariable | DifferentialFunctionFactory.erfc(SDVariable differentialFunction) |
SDVariable | DifferentialFunctionFactory.euclideanDistance(SDVariable iX, SDVariable i_y, int... dimensions) |
SDVariable | DifferentialFunctionFactory.exit(SDVariable x) |
SDVariable | DifferentialFunctionFactory.exp(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.expandDims(SDVariable iX, int axis) |
SDVariable | DifferentialFunctionFactory.expm1(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.extractImagePatches(SDVariable input, int kH, int kW, int sH, int sW, int rH, int rW, boolean sameMode) |
SDVariable | DifferentialFunctionFactory.fakeQuantWithMinMaxVarsPerChannel(SDVariable x, SDVariable min, SDVariable max, int num_bits, boolean narrow) |
SDVariable | DifferentialFunctionFactory.fill(SDVariable shape, DataType dataType, double value) |
SDVariable | DifferentialFunctionFactory.firstIndex(SDVariable in, Condition condition, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.floor(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.floorDiv(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.floorMod(SDVariable x, SDVariable y) |
SDVariable[] | DifferentialFunctionFactory.fusedBatchNorm(SDVariable x, SDVariable scale, SDVariable offset, SDVariable dataFormat, SDVariable isTraining) |
SDVariable | DifferentialFunctionFactory.gather(SDVariable df, int[] indices, int axis) |
SDVariable | DifferentialFunctionFactory.gather(SDVariable df, SDVariable indices, int axis) |
SDVariable | DifferentialFunctionFactory.gatherNd(SDVariable df, SDVariable indices) |
SDVariable | DifferentialFunctionFactory.gelu(SDVariable iX, boolean precise) |
SDVariable | DifferentialFunctionFactory.geluDerivative(SDVariable iX, boolean precise) |
SDVariable | DifferentialFunctionFactory.gradientBackwardsMarker(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.gt(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.gt(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.gte(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.gte(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.gtei(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.gtei(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.gti(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.gti(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.hammingDistance(SDVariable ix, SDVariable iy, int... dimensions) |
SDVariable | DifferentialFunctionFactory.hardSigmoid(SDVariable in) |
SDVariable | DifferentialFunctionFactory.hardSigmoidBp(SDVariable in, SDVariable epsilon) |
SDVariable | DifferentialFunctionFactory.hardTanh(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.hardTanhBp(SDVariable in, SDVariable epsilon) |
SDVariable | DifferentialFunctionFactory.hardTanhDerivative(SDVariable iX) Deprecated. |
SDVariable | DifferentialFunctionFactory.iamax(SDVariable in, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.iamin(SDVariable in, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.identity(SDVariable input) |
SDVariable | DifferentialFunctionFactory.im2Col(SDVariable input, Conv2DConfig config) |
SDVariable | DifferentialFunctionFactory.im2ColBp(SDVariable im2colInput, SDVariable gradientAtOutput, Conv2DConfig config) |
SDVariable | DifferentialFunctionFactory.invertPermutation(SDVariable input, boolean inPlace) |
SDVariable | DifferentialFunctionFactory.invoke(String name, Object[] args) |
SDVariable | DifferentialFunctionFactory.isFinite(SDVariable ix) |
SDVariable | DifferentialFunctionFactory.isInfinite(SDVariable ix) |
SDVariable | DifferentialFunctionFactory.isMax(SDVariable ix) |
SDVariable | DifferentialFunctionFactory.isNaN(SDVariable ix) |
SDVariable | DifferentialFunctionFactory.isNonDecreasing(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.isNumericTensor(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.isStrictlyIncreasing(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.jaccardDistance(SDVariable ix, SDVariable iy, int... dimensions) |
SDVariable | DifferentialFunction.larg() The left argument for this function |
SDVariable | DifferentialFunctionFactory.lastIndex(SDVariable in, Condition condition, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.layerNorm(SDVariable input, SDVariable gain, boolean channelsFirst, int... dimensions) |
SDVariable | DifferentialFunctionFactory.layerNorm(SDVariable input, SDVariable gain, SDVariable bias, boolean channelsFirst, int... dimensions) |
SDVariable[] | DifferentialFunctionFactory.layerNormBp(SDVariable input, SDVariable gain, SDVariable gradient, boolean channelsFirst, int... dimensions) |
SDVariable[] | DifferentialFunctionFactory.layerNormBp(SDVariable input, SDVariable gain, SDVariable bias, SDVariable gradient, boolean channelsFirst, int... dimensions) |
SDVariable | DifferentialFunctionFactory.leakyRelu(SDVariable iX, double alpha) |
SDVariable | DifferentialFunctionFactory.leakyReluBp(SDVariable in, SDVariable epsilon, double cutoff) |
SDVariable | DifferentialFunctionFactory.leakyReluDerivative(SDVariable iX, double cutoff) |
SDVariable | DifferentialFunctionFactory.linspace(SDVariable lower, SDVariable upper, SDVariable count, DataType dt) |
SDVariable[] | DifferentialFunctionFactory.listdiff(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.localResponseNormalization(SDVariable input, LocalResponseNormalizationConfig lrnConfig) Local response normalization operation. |
SDVariable | DifferentialFunctionFactory.log(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.log(SDVariable in, double base) |
SDVariable | DifferentialFunctionFactory.log1p(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.logEntropy(SDVariable in, int... dimensions) |
SDVariable | DifferentialFunctionFactory.logSigmoid(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.logSoftmax(SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.logSoftmax(SDVariable i_v, int dimension) |
SDVariable | DifferentialFunctionFactory.logSoftmaxDerivative(SDVariable arg, SDVariable wrt) |
SDVariable | DifferentialFunctionFactory.logSoftmaxDerivative(SDVariable arg, SDVariable wrt, int dimension) |
SDVariable | DifferentialFunctionFactory.logSumExp(SDVariable arg, boolean keepDims, int... dimension) |
SDVariable | DifferentialFunctionFactory.lossAbsoluteDifference(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossAbsoluteDifferenceBP(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossCosineDistance(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension) |
SDVariable[] | DifferentialFunctionFactory.lossCosineDistanceBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension) |
SDVariable | DifferentialFunctionFactory.lossHinge(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossHingeBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossHuber(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta) |
SDVariable[] | DifferentialFunctionFactory.lossHuberBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta) |
SDVariable | DifferentialFunctionFactory.lossL2(SDVariable var) |
SDVariable | DifferentialFunctionFactory.lossLog(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon) |
SDVariable[] | DifferentialFunctionFactory.lossLogBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon) |
SDVariable | DifferentialFunctionFactory.lossLogPoisson(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossLogPoissonBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossLogPoissonFull(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossLogPoissonFullBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossMeanPairwiseSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossMeanPairwiseSquaredErrorBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossMeanSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable[] | DifferentialFunctionFactory.lossMeanSquaredErrorBp(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce) |
SDVariable | DifferentialFunctionFactory.lossSigmoidCrossEntropy(SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
SDVariable[] | DifferentialFunctionFactory.lossSigmoidCrossEntropyBp(SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
SDVariable | DifferentialFunctionFactory.lossSoftmaxCrossEntropy(SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
SDVariable[] | DifferentialFunctionFactory.lossSoftmaxCrossEntropyBp(SDVariable labels, SDVariable logits, SDVariable weights, LossReduce lossReduce, double labelSmoothing) |
SDVariable | DifferentialFunctionFactory.lossSoftmaxCrossEntropyWithLogits(SDVariable labels, SDVariable logits, SDVariable weights, int classDim) |
SDVariable[] | DifferentialFunctionFactory.lossSoftmaxCrossEntropyWithLogitsBp(SDVariable labels, SDVariable logits, SDVariable weights, int classDim) |
SDVariable | DifferentialFunctionFactory.lossSparseSoftmaxCrossEntropy(SDVariable logits, SDVariable labels) |
SDVariable[] | DifferentialFunctionFactory.lossSparseSoftmaxCrossEntropyBp(SDVariable logits, SDVariable labels) |
SDVariable | DifferentialFunctionFactory.lt(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.lt(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.lte(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.lte(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.ltei(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.lti(SDVariable functionInput, double functionInput1) |
SDVariable | DifferentialFunctionFactory.lti(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.ltOrEqi(SDVariable functionInput, SDVariable functionInput1) |
SDVariable | DifferentialFunctionFactory.manhattanDistance(SDVariable iX, SDVariable i_y, int... dimensions) |
SDVariable | DifferentialFunctionFactory.matchCondition(SDVariable in, Condition condition) Returns a boolean mask of equal shape to the input, where the condition is satisfied |
SDVariable | DifferentialFunctionFactory.matchConditionCount(SDVariable in, Condition condition, boolean keepDims, int... dimensions) Returns a count of the number of elements that satisfy the condition |
SDVariable | DifferentialFunctionFactory.matrixBandPart(SDVariable input, SDVariable minLower, SDVariable maxUpper) |
SDVariable | DifferentialFunctionFactory.matrixDeterminant(SDVariable in) |
SDVariable | DifferentialFunctionFactory.matrixInverse(SDVariable in) |
SDVariable | DifferentialFunctionFactory.max(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.max(SDVariable first, SDVariable second) |
SDVariable | DifferentialFunctionFactory.maxBp(SDVariable i_x, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.maxPooling2d(SDVariable input, Pooling2DConfig pooling2DConfig) Max pooling 2d operation. |
SDVariable | DifferentialFunctionFactory.maxPooling3d(SDVariable input, Pooling3DConfig pooling3DConfig) Max pooling 3d operation. |
SDVariable[] | DifferentialFunctionFactory.maxPoolWithArgmaxs(SDVariable x, Pooling2DConfig pooling2DConfig) |
SDVariable | DifferentialFunctionFactory.mean(SDVariable in, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.meanBp(SDVariable in, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.merge(SDVariable... inputs) |
SDVariable | DifferentialFunctionFactory.mergeAdd(SDVariable... differentialFunctions) |
SDVariable | DifferentialFunctionFactory.mergeAvg(SDVariable... differentialFunctions) |
SDVariable | DifferentialFunctionFactory.mergeMax(SDVariable... differentialFunctions) |
SDVariable[] | DifferentialFunctionFactory.meshgrid(boolean cartesian, SDVariable... inputs) |
SDVariable | DifferentialFunctionFactory.min(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.min(SDVariable first, SDVariable second) |
SDVariable | DifferentialFunctionFactory.minBp(SDVariable i_x, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.mishDerivative(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.mmul(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.mmul(SDVariable x, SDVariable y, MMulTranspose mMulTranspose) |
SDVariable | DifferentialFunctionFactory.mod(SDVariable differentialFunction, SDVariable i_v) |
SDVariable[] | DifferentialFunctionFactory.moments(SDVariable input, int... axes) |
SDVariable | DifferentialFunctionFactory.mul(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.mul(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.muli(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.muli(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.multiHeadDotProductAttention(SDVariable queries, SDVariable keys, SDVariable values, SDVariable Wq, SDVariable Wk, SDVariable Wv, SDVariable Wo, SDVariable mask, boolean scaled) |
SDVariable | DifferentialFunctionFactory.neg(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.neq(SDVariable iX, double i_y) |
SDVariable | DifferentialFunctionFactory.neq(SDVariable iX, SDVariable i_y) |
SDVariable | DifferentialFunctionFactory.neqi(SDVariable iX, double i_y) |
SDVariable | DifferentialFunctionFactory.neqi(SDVariable iX, SDVariable i_y) |
SDVariable | DifferentialFunctionFactory.nextIteration(SDVariable x) |
SDVariable | DifferentialFunctionFactory.noop(SDVariable input) |
SDVariable | DifferentialFunctionFactory.norm1(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.norm1Bp(SDVariable preReduceIn, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.norm2(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.norm2Bp(SDVariable preReduceIn, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable[] | DifferentialFunctionFactory.normalizeMoments(SDVariable counts, SDVariable means, SDVariable variances, double shift) |
SDVariable | DifferentialFunctionFactory.normmax(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.normmaxBp(SDVariable preReduceIn, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.onehot(SDVariable indices, int depth) |
SDVariable | DifferentialFunctionFactory.onehot(SDVariable indices, int depth, int axis, double on, double off, DataType dataType) |
SDVariable | DifferentialFunctionFactory.onesLike(String name, SDVariable input, DataType dataType) |
SDVariable | DifferentialFunctionFactory.or(SDVariable iX, SDVariable i_y) |
SDVariable | DifferentialFunction.outputVariable() |
SDVariable[] | DifferentialFunction.outputVariables() Return the output variables for this differential function. |
abstract SDVariable[] | DifferentialFunction.outputVariables(String baseName) Return the output functions for this differential function. |
SDVariable | DifferentialFunctionFactory.pad(SDVariable input, SDVariable padding, Pad.Mode mode, double padValue) |
SDVariable | DifferentialFunctionFactory.parallel_stack(SDVariable[] values) |
SDVariable | DifferentialFunctionFactory.permute(SDVariable iX, int... dimensions) |
SDVariable | DifferentialFunctionFactory.permute(SDVariable in, SDVariable dimensions) |
SDVariable | DifferentialFunctionFactory.polygamma(SDVariable n, SDVariable x) |
SDVariable | DifferentialFunctionFactory.pow(SDVariable iX, double i_y) |
SDVariable | DifferentialFunctionFactory.pow(SDVariable x, SDVariable y) |
SDVariable | DifferentialFunctionFactory.powDerivative(SDVariable iX, double pow) |
SDVariable | DifferentialFunctionFactory.prelu(SDVariable x, SDVariable alpha, int... sharedAxes) |
SDVariable[] | DifferentialFunctionFactory.preluBp(SDVariable in, SDVariable alpha, SDVariable epsilon, int... sharedAxes) |
SDVariable | DifferentialFunctionFactory.prod(SDVariable i_x, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.prodBp(SDVariable preReduceInput, SDVariable grad, boolean keepDims, int... dimensions) |
SDVariable | DifferentialFunctionFactory.randomBernoulli(double p, long... shape) |
SDVariable | DifferentialFunctionFactory.randomBernoulli(double p, SDVariable shape) |
SDVariable | DifferentialFunctionFactory.randomBinomial(int nTrials, double p, long... shape) |
SDVariable | DifferentialFunctionFactory.randomExponential(double lambda, SDVariable shape) Exponential distribution: P(x) = lambda * exp(-lambda * x) |
SDVariable | DifferentialFunctionFactory.randomLogNormal(double mean, double stdev, long... shape) |
SDVariable | DifferentialFunctionFactory.randomNormal(double mean, double std, long... shape) |
SDVariable | DifferentialFunctionFactory.randomNormal(double mean, double std, SDVariable shape) |
SDVariable | DifferentialFunctionFactory.randomNormalTruncated(double mean, double stdev, long... shape) |
SDVariable | DifferentialFunctionFactory.randomUniform(double min, double max, long... shape) |
SDVariable | DifferentialFunctionFactory.randomUniform(double min, double max, SDVariable shape, DataType dataType) |
SDVariable | DifferentialFunctionFactory.range(double from, double to, double step, DataType dataType) |
SDVariable | DifferentialFunctionFactory.range(SDVariable from, SDVariable to, SDVariable step, DataType dataType) |
SDVariable | DifferentialFunctionFactory.rank(SDVariable df) |
SDVariable | DifferentialFunction.rarg() The right argument for this function. |
SDVariable | DifferentialFunctionFactory.rdiv(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.rdiv(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.rdivi(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.rdivi(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.reciprocal(SDVariable a) |
SDVariable | DifferentialFunctionFactory.reductionBroadcastableWithOrigShape(int origRank, int[] reduceDims, SDVariable toExpand) Adds 1s as required so the array can be broadcast with the original (pre-reduce) array. |
SDVariable | DifferentialFunctionFactory.reductionBroadcastableWithOrigShape(SDVariable origInput, SDVariable axis, SDVariable toExpand) |
SDVariable | DifferentialFunctionFactory.reductionShape(SDVariable shape, SDVariable axis, boolean keepDim) |
SDVariable | DifferentialFunctionFactory.relu(SDVariable iX, double cutoff) |
SDVariable | DifferentialFunctionFactory.relu6(SDVariable iX, double cutoff) |
SDVariable | DifferentialFunctionFactory.relu6Derivative(SDVariable iX, SDVariable wrt, double cutoff) |
SDVariable | DifferentialFunctionFactory.reluDerivative(SDVariable input, SDVariable grad) |
SDVariable | DifferentialFunctionFactory.reluLayer(SDVariable input, SDVariable weights, SDVariable bias) |
SDVariable | DifferentialFunctionFactory.repeat(SDVariable iX, int axis) |
SDVariable | DifferentialFunctionFactory.replaceWhere(SDVariable to, Number set, Condition condition) |
SDVariable | DifferentialFunctionFactory.replaceWhere(SDVariable to, SDVariable from, Condition condition) |
SDVariable | DifferentialFunctionFactory.reshape(SDVariable iX, int[] shape) |
SDVariable | DifferentialFunctionFactory.reshape(SDVariable iX, long[] shape) |
SDVariable | DifferentialFunctionFactory.reshape(SDVariable iX, SDVariable shape) |
SDVariable | DifferentialFunctionFactory.reverse(SDVariable x, int... dimensions) |
SDVariable | DifferentialFunctionFactory.reverseSequence(SDVariable x, SDVariable seq_lengths) |
SDVariable | DifferentialFunctionFactory.reverseSequence(SDVariable x, SDVariable seq_lengths, int seq_dim, int batch_dim) |
SDVariable | DifferentialFunctionFactory.roll(SDVariable input, SDVariable shift) |
SDVariable | DifferentialFunctionFactory.rotl(SDVariable ix, SDVariable shift) |
SDVariable | DifferentialFunctionFactory.rotr(SDVariable ix, SDVariable shift) |
SDVariable | DifferentialFunctionFactory.round(SDVariable ix) |
SDVariable | DifferentialFunctionFactory.rshift(SDVariable ix, SDVariable shift) |
SDVariable | DifferentialFunctionFactory.rsqrt(SDVariable iX) |
SDVariable | DifferentialFunctionFactory.rsub(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.rsub(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.rsubi(SDVariable differentialFunction, double i_v) |
SDVariable | DifferentialFunctionFactory.rsubi(SDVariable differentialFunction, SDVariable i_v) |
SDVariable | DifferentialFunctionFactory.scalarFloorMod(SDVariable in, Number num) |
SDVariable | DifferentialFunctionFactory.scalarMax(SDVariable in, Number num) |
SDVariable | DifferentialFunctionFactory.scalarMin(SDVariable in, Number num) |
SDVariable | DifferentialFunctionFactory.scalarSet(SDVariable in, Number num) |
SDVariable | DifferentialFunctionFactory.scatterAdd(SDVariable ref, SDVariable indices, SDVariable updates) |
SDVariable | DifferentialFunctionFactory.scatterDiv(SDVariable ref, SDVariable indices, SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.sconv2d(SDVariable[] inputs,
Conv2DConfig conv2DConfig)
Separable Conv2d operation.
|
SDVariable |
DifferentialFunctionFactory.segmentMax(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMaxBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentMean(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMeanBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentMin(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMinBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentProd(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentProdBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentSum(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentSumBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.selu(SDVariable arg) |
SDVariable |
DifferentialFunctionFactory.seluBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.seluDerivative(SDVariable arg)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.setDiag(SDVariable in,
SDVariable diag) |
SDVariable |
DifferentialFunctionFactory.shannonEntropy(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.shape(SDVariable df) |
SDVariable |
DifferentialFunctionFactory.shift(SDVariable ix,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.sigmoid(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sigmoidDerivative(SDVariable iX,
SDVariable wrt) |
SDVariable |
DifferentialFunctionFactory.sign(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sin(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sinh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.size(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.sizeAt(SDVariable in,
int dimension) |
SDVariable |
DifferentialFunctionFactory.slice(SDVariable input,
int[] begin,
int[] size) |
SDVariable |
DifferentialFunctionFactory.slice(SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
DifferentialFunctionFactory.sliceBp(SDVariable input,
SDVariable gradient,
int[] begin,
int[] size) |
SDVariable |
DifferentialFunctionFactory.sliceBp(SDVariable input,
SDVariable gradient,
SDVariable begin,
SDVariable size) |
SDVariable |
DifferentialFunctionFactory.softmax(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softmax(SDVariable iX,
int dimension) |
SDVariable |
DifferentialFunctionFactory.softmaxDerivative(SDVariable functionInput,
SDVariable wrt,
Integer dimension) |
SDVariable |
DifferentialFunctionFactory.softplus(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softsign(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softsignBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.softsignDerivative(SDVariable iX)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.spaceToBatch(SDVariable differentialFunction,
int[] blocks,
int[][] padding) |
SDVariable |
DifferentialFunctionFactory.spaceToDepth(SDVariable differentialFunction,
int blocksSize,
String dataFormat) |
SDVariable |
DifferentialFunctionFactory.sqrt(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.square(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.squaredDifference(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.squaredNorm(SDVariable input,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.squaredNormBp(SDVariable preReduceInput,
SDVariable gradient,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.squeeze(SDVariable iX,
int... axis) |
SDVariable |
DifferentialFunctionFactory.stack(SDVariable[] values,
int axis) |
SDVariable |
DifferentialFunctionFactory.standardize(SDVariable i_x,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.standardizeBp(SDVariable stdInput,
SDVariable gradient,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.std(SDVariable i_x,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.stdBp(SDVariable stdInput,
SDVariable gradient,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.step(SDVariable in,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable input,
long[] begin,
long[] end,
long[] strides) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSliceBp(SDVariable in,
SDVariable grad,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSliceBp(SDVariable in,
SDVariable grad,
SDVariable begin,
SDVariable end,
SDVariable strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.sub(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.sub(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.subi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.subi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.sum(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.sumBp(SDVariable i_x,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.swish(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.swishDerivative(SDVariable iX) |
SDVariable[] |
DifferentialFunctionFactory.switchOp(SDVariable input,
SDVariable predicate) |
SDVariable |
DifferentialFunctionFactory.tan(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.tanh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.tanhDerivative(SDVariable iX,
SDVariable wrt) |
SDVariable |
DifferentialFunctionFactory.tanhRational(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.tanhRationalBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.tanhRationalDerivative(SDVariable in)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.tanhRectified(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.tanhRectifiedBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.tanhRectifiedDerivative(SDVariable in)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.tensorMmul(SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
DifferentialFunctionFactory.thresholdRelu(SDVariable in,
SDVariable epsilon,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.thresholdReluBp(SDVariable in,
SDVariable epsilon,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.tile(SDVariable iX,
int[] repeat) |
SDVariable |
DifferentialFunctionFactory.tile(SDVariable iX,
SDVariable repeat) |
SDVariable |
DifferentialFunctionFactory.tileBp(SDVariable in,
SDVariable grad,
int[] repeat) |
SDVariable |
DifferentialFunctionFactory.tileBp(SDVariable in,
SDVariable repeat,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.toggleBits(SDVariable x) |
SDVariable |
DifferentialFunctionFactory.trace(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.transpose(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.truncatedDiv(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMaxBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMeanBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMinBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentProdBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentSqrtNBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentSumBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unstack(SDVariable value,
int axis) |
SDVariable[] |
DifferentialFunctionFactory.unstack(SDVariable value,
int axis,
int num) |
SDVariable |
DifferentialFunctionFactory.upsampling2d(SDVariable input,
boolean nchw,
int scaleH,
int scaleW) |
SDVariable |
DifferentialFunctionFactory.upsampling2dBp(SDVariable input,
SDVariable gradient,
boolean nchw,
int scaleH,
int scaleW) |
SDVariable |
DifferentialFunctionFactory.variance(SDVariable i_x,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.varianceBp(SDVariable stdInput,
SDVariable gradient,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights) |
SDVariable |
DifferentialFunctionFactory.xor(SDVariable ix,
SDVariable iy) |
SDVariable |
DifferentialFunctionFactory.xwPlusB(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
DifferentialFunctionFactory.zeroFraction(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.zerosLike(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.zerosLike(String name,
SDVariable input) |
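The segment reductions listed above (segmentSum, segmentMax, segmentMean, and their unsortedSegment* variants) combine the rows of `data` that share the same value in `segmentIds`. As an illustration of the semantics only (a hypothetical standalone sketch, not the ND4J implementation), a 1-D segment sum behaves like:

```java
import java.util.Arrays;

// Hypothetical sketch of segmentSum semantics: accumulate each data value
// into the output slot named by its segment id. Illustration only.
public class SegmentSumSketch {
    static double[] segmentSum(double[] data, int[] segmentIds, int numSegments) {
        double[] out = new double[numSegments]; // zero-initialized accumulators
        for (int i = 0; i < data.length; i++) {
            out[segmentIds[i]] += data[i];      // add value i to its segment
        }
        return out;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4, 5};
        int[] ids = {0, 0, 1, 1, 2};
        System.out.println(Arrays.toString(segmentSum(data, ids, 3)));
        // prints [3.0, 7.0, 5.0]
    }
}
```

The sorted segmentSum variant infers the segment count from the (non-decreasing) ids, while the unsortedSegmentSum variant takes an explicit `numSegments` argument, as in the sketch above.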
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
DifferentialFunctionFactory.addBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunction.diff(List<SDVariable> i_v1)
Perform automatic differentiation with respect to the input variables
|
List<SDVariable> |
DifferentialFunctionFactory.divBp(SDVariable x,
SDVariable y,
SDVariable grad) |
abstract List<SDVariable> |
DifferentialFunction.doDiff(List<SDVariable> f1)
The actual implementation for automatic differentiation.
|
List<SDVariable> |
DifferentialFunctionFactory.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights) |
List<SDVariable> |
DifferentialFunctionFactory.dotProductAttentionBp(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable gradient,
SDVariable mask,
boolean scaled) |
List<SDVariable> |
DifferentialFunctionFactory.floorDivBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.floorModBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.mmulBp(SDVariable x,
SDVariable y,
SDVariable eps,
MMulTranspose mt) |
List<SDVariable> |
DifferentialFunctionFactory.modBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.mulBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights) |
List<SDVariable> |
DifferentialFunctionFactory.multiHeadDotProductAttentionBp(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable gradient,
SDVariable mask,
boolean scaled) |
List<SDVariable> |
DifferentialFunctionFactory.rdivBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.rsubBp(SDVariable x,
SDVariable y,
SDVariable grad) |
List<SDVariable> |
DifferentialFunctionFactory.subBp(SDVariable x,
SDVariable y,
SDVariable grad) |
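The *Bp methods in this table return one gradient per input of the corresponding forward op. For elementwise multiply, mulBp follows the product rule: the gradient with respect to `x` is `grad * y` and the gradient with respect to `y` is `grad * x`. A minimal standalone sketch of that rule (illustration only, not the ND4J implementation):

```java
import java.util.Arrays;

// Hypothetical sketch of the mulBp gradient pair for elementwise multiply:
// d(x*y)/dx = grad * y, d(x*y)/dy = grad * x. Illustration only.
public class MulBpSketch {
    static double[][] mulBp(double[] x, double[] y, double[] grad) {
        double[] dx = new double[x.length];
        double[] dy = new double[y.length];
        for (int i = 0; i < x.length; i++) {
            dx[i] = grad[i] * y[i]; // gradient w.r.t. x
            dy[i] = grad[i] * x[i]; // gradient w.r.t. y
        }
        return new double[][]{dx, dy};
    }

    public static void main(String[] args) {
        double[][] grads = mulBp(new double[]{2, 3}, new double[]{5, 7}, new double[]{1, 1});
        System.out.println(Arrays.deepToString(grads));
        // prints [[5.0, 7.0], [2.0, 3.0]]
    }
}
```

The same pattern explains why addBp, divBp, and the other backprop methods here return a List with one SDVariable per forward input.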
Modifier and Type | Method and Description |
---|---|
SDVariable |
DifferentialFunctionFactory.abs(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.acos(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.acosh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.add(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.add(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.addBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.addi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.addi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.adjustContrast(SDVariable in,
SDVariable factor) |
SDVariable |
DifferentialFunctionFactory.adjustContrastV2(SDVariable in,
SDVariable factor) |
SDVariable |
DifferentialFunctionFactory.all(SDVariable input,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.amax(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.amean(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.amin(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.and(SDVariable ix,
SDVariable iy) |
SDVariable |
DifferentialFunctionFactory.any(SDVariable input,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.argmax(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.argmin(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.asin(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.asinh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.assign(SDVariable x,
Number num) |
SDVariable |
DifferentialFunctionFactory.assign(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.asum(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.atan(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.atan2(SDVariable y,
SDVariable x) |
SDVariable |
DifferentialFunctionFactory.atanh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.avgPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
Average pooling 2d operation.
|
SDVariable |
DifferentialFunctionFactory.avgPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
Avg pooling 3d operation.
|
SDVariable[] |
DifferentialFunctionFactory.batchMmul(SDVariable[] matrices,
boolean transposeA,
boolean transposeB) |
SDVariable[] |
DifferentialFunctionFactory.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB) |
SDVariable[] |
DifferentialFunctionFactory.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB) |
SDVariable |
DifferentialFunctionFactory.batchNorm(SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int... axis)
Batch norm operation.
|
SDVariable |
DifferentialFunctionFactory.batchToSpace(SDVariable differentialFunction,
int[] blocks,
int[][] crops) |
SDVariable |
DifferentialFunctionFactory.betainc(SDVariable a,
SDVariable b,
SDVariable x) |
SDVariable |
DifferentialFunctionFactory.biasAdd(SDVariable input,
SDVariable bias,
boolean nchw) |
SDVariable[] |
DifferentialFunctionFactory.biasAddBp(SDVariable input,
SDVariable bias,
SDVariable grad,
boolean nchw) |
SDVariable |
DifferentialFunctionFactory.bitCast(SDVariable in,
SDVariable dataType) |
SDVariable |
DifferentialFunctionFactory.bitwiseAnd(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.bitwiseHammingDist(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.bitwiseOr(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.bitwiseXor(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.cast(SDVariable toCast,
DataType toType) |
SDVariable |
DifferentialFunctionFactory.ceil(SDVariable x) |
SDVariable |
DifferentialFunctionFactory.clipByNorm(SDVariable x,
double clipValue) |
SDVariable |
DifferentialFunctionFactory.clipByNorm(SDVariable x,
double clipValue,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.clipByValue(SDVariable x,
double clipValueMin,
double clipValueMax) |
SDVariable |
DifferentialFunctionFactory.col2Im(SDVariable input,
Conv2DConfig config) |
SDVariable |
DifferentialFunctionFactory.compareAndBitpack(SDVariable threshold) |
SDVariable |
DifferentialFunctionFactory.concat(int dimension,
SDVariable... inputs) |
SDVariable |
DifferentialFunctionFactory.confusionMatrix(SDVariable labels,
SDVariable pred,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses) |
SDVariable |
DifferentialFunctionFactory.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights) |
SDVariable |
DifferentialFunctionFactory.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights) |
SDVariable |
DifferentialFunctionFactory.conv1d(SDVariable input,
SDVariable weights,
Conv1DConfig conv1DConfig)
Conv1d operation.
|
SDVariable |
DifferentialFunctionFactory.conv1d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig conv1DConfig)
Conv1d operation.
|
SDVariable |
DifferentialFunctionFactory.conv2d(SDVariable[] inputs,
Conv2DConfig conv2DConfig)
Conv2d operation.
|
SDVariable |
DifferentialFunctionFactory.conv3d(SDVariable[] inputs,
Conv3DConfig conv3DConfig)
Conv3d operation.
|
SDVariable |
DifferentialFunctionFactory.cos(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.cosh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.cosineDistance(SDVariable ix,
SDVariable iy,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.cosineSimilarity(SDVariable iX,
SDVariable i_y,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.countNonZero(SDVariable input,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.countZero(SDVariable input,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.create(String name,
SDVariable shape,
boolean initialize,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.create(String name,
SDVariable shape,
char order,
boolean initialize,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.cross(SDVariable a,
SDVariable b) |
SDVariable |
DifferentialFunctionFactory.cube(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.cubeBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.cubeDerivative(SDVariable iX)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.cumprod(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
DifferentialFunctionFactory.cumprodBp(SDVariable in,
SDVariable grad,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
DifferentialFunctionFactory.cumsum(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
DifferentialFunctionFactory.cumsumBp(SDVariable in,
SDVariable grad,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
DifferentialFunctionFactory.deconv2d(SDVariable[] inputs,
DeConv2DConfig deconv2DConfig)
Deconv2d operation.
|
SDVariable |
DifferentialFunctionFactory.deconv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config) |
SDVariable[] |
DifferentialFunctionFactory.deconv3dDerivative(SDVariable input,
SDVariable weights,
SDVariable bias,
SDVariable grad,
DeConv3DConfig config) |
SDVariable |
DifferentialFunctionFactory.depthToSpace(SDVariable differentialFunction,
int blocksSize,
String dataFormat) |
SDVariable |
DifferentialFunctionFactory.depthWiseConv2d(SDVariable[] inputs,
Conv2DConfig depthConv2DConfig)
Depth-wise Conv2d operation.
|
SDVariable |
DifferentialFunctionFactory.diag(SDVariable sdVariable) |
SDVariable |
DifferentialFunctionFactory.diagPart(SDVariable sdVariable) |
SDVariable |
DifferentialFunctionFactory.dilation2D(SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode) |
SDVariable |
DifferentialFunctionFactory.div(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.div(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.divBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.divi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.divi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.divideNoNan(SDVariable in1,
SDVariable in2) |
SDVariable |
DifferentialFunctionFactory.doRepeat(SDVariable func,
SDVariable input) |
SDVariable |
DifferentialFunctionFactory.dot(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable[] |
DifferentialFunctionFactory.dotBp(SDVariable in1,
SDVariable in2,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled) |
List<SDVariable> |
DifferentialFunctionFactory.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights) |
List<SDVariable> |
DifferentialFunctionFactory.dotProductAttentionBp(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable gradient,
SDVariable mask,
boolean scaled) |
SDVariable |
DifferentialFunctionFactory.drawBoundingBoxes(SDVariable boxes,
SDVariable colors) |
SDVariable |
DifferentialFunctionFactory.dropout(SDVariable input,
double p) |
SDVariable[] |
DifferentialFunctionFactory.dynamicPartition(SDVariable differentialFunction,
SDVariable partitions,
int numPartitions) |
SDVariable[] |
DifferentialFunctionFactory.dynamicPartitionBp(SDVariable input,
SDVariable partitions,
SDVariable[] grads,
int numPartitions) |
SDVariable |
DifferentialFunctionFactory.dynamicStitch(SDVariable[] indices,
SDVariable[] differentialFunctions) |
SDVariable |
DifferentialFunctionFactory.elu(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.eluBp(SDVariable in,
SDVariable epsilon,
double alpha) |
SDVariable |
DifferentialFunctionFactory.enter(SDVariable x,
String frameName) |
SDVariable |
DifferentialFunctionFactory.enter(SDVariable x,
String frameName,
boolean isConstant) |
SDVariable |
DifferentialFunctionFactory.entropy(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.eq(SDVariable iX,
double i_y) |
SDVariable |
DifferentialFunctionFactory.eq(SDVariable iX,
SDVariable i_y) |
SDVariable |
DifferentialFunctionFactory.eqi(SDVariable iX,
double i_y) |
SDVariable |
DifferentialFunctionFactory.erf(SDVariable differentialFunction) |
SDVariable |
DifferentialFunctionFactory.erfc(SDVariable differentialFunction) |
SDVariable |
DifferentialFunctionFactory.euclideanDistance(SDVariable iX,
SDVariable i_y,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.exit(SDVariable x) |
SDVariable |
DifferentialFunctionFactory.exp(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.expandDims(SDVariable iX,
int axis) |
SDVariable |
DifferentialFunctionFactory.expm1(SDVariable iX) |
ExternalErrorsFunction |
DifferentialFunctionFactory.externalErrors(Map<String,INDArray> externalGradients,
SDVariable... inputs) |
ExternalErrorsFunction |
DifferentialFunctionFactory.externalErrors(SDVariable... inputs) |
SDVariable |
DifferentialFunctionFactory.extractImagePatches(SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode) |
SDVariable |
DifferentialFunctionFactory.fakeQuantWithMinMaxVarsPerChannel(SDVariable x,
SDVariable min,
SDVariable max,
int num_bits,
boolean narrow) |
SDVariable |
DifferentialFunctionFactory.fill(SDVariable shape,
DataType dataType,
double value) |
SDVariable |
DifferentialFunctionFactory.firstIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.floor(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.floorDiv(SDVariable x,
SDVariable y) |
List<SDVariable> |
DifferentialFunctionFactory.floorDivBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.floorMod(SDVariable x,
SDVariable y) |
List<SDVariable> |
DifferentialFunctionFactory.floorModBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable[] |
DifferentialFunctionFactory.fusedBatchNorm(SDVariable x,
SDVariable scale,
SDVariable offset,
SDVariable dataFormat,
SDVariable isTraining) |
SDVariable |
DifferentialFunctionFactory.gather(SDVariable df,
int[] indices,
int axis) |
SDVariable |
DifferentialFunctionFactory.gather(SDVariable df,
SDVariable indices,
int axis) |
SDVariable |
DifferentialFunctionFactory.gatherNd(SDVariable df,
SDVariable indices) |
SDVariable |
DifferentialFunctionFactory.gelu(SDVariable iX,
boolean precise) |
SDVariable |
DifferentialFunctionFactory.geluDerivative(SDVariable iX,
boolean precise) |
SDVariable |
DifferentialFunctionFactory.gradientBackwardsMarker(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.gt(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.gt(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.gte(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.gte(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.gtei(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.gtei(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.gti(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.gti(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.hammingDistance(SDVariable ix,
SDVariable iy,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.hardSigmoid(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.hardSigmoidBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.hardTanh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.hardTanhBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.hardTanhDerivative(SDVariable iX)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.iamax(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.iamin(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.identity(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.im2Col(SDVariable input,
Conv2DConfig config) |
SDVariable |
DifferentialFunctionFactory.im2ColBp(SDVariable im2colInput,
SDVariable gradientAtOutput,
Conv2DConfig config) |
SDVariable |
DifferentialFunctionFactory.invertPermutation(SDVariable input,
boolean inPlace) |
SDVariable |
DifferentialFunctionFactory.isFinite(SDVariable ix) |
SDVariable |
DifferentialFunctionFactory.isInfinite(SDVariable ix) |
SDVariable |
DifferentialFunctionFactory.isMax(SDVariable ix) |
SDVariable |
DifferentialFunctionFactory.isNaN(SDVariable ix) |
SDVariable |
DifferentialFunctionFactory.isNonDecreasing(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.isNumericTensor(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.isStrictlyIncreasing(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.jaccardDistance(SDVariable ix,
SDVariable iy,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.lastIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.layerNorm(SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.layerNorm(SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions) |
SDVariable[] |
DifferentialFunctionFactory.layerNormBp(SDVariable input,
SDVariable gain,
SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
SDVariable[] |
DifferentialFunctionFactory.layerNormBp(SDVariable input,
SDVariable gain,
SDVariable bias,
SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.leakyRelu(SDVariable iX,
double alpha) |
SDVariable |
DifferentialFunctionFactory.leakyReluBp(SDVariable in,
SDVariable epsilon,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.leakyReluDerivative(SDVariable iX,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.linspace(SDVariable lower,
SDVariable upper,
SDVariable count,
DataType dt) |
SDVariable[] |
DifferentialFunctionFactory.listdiff(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.localResponseNormalization(SDVariable input,
LocalResponseNormalizationConfig lrnConfig)
Local response normalization operation.
|
SDVariable |
DifferentialFunctionFactory.log(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.log(SDVariable in,
double base) |
SDVariable |
DifferentialFunctionFactory.log1p(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.logEntropy(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.logSigmoid(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.logSoftmax(SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.logSoftmax(SDVariable i_v,
int dimension) |
SDVariable |
DifferentialFunctionFactory.logSoftmaxDerivative(SDVariable arg,
SDVariable wrt) |
SDVariable |
DifferentialFunctionFactory.logSoftmaxDerivative(SDVariable arg,
SDVariable wrt,
int dimension) |
SDVariable |
DifferentialFunctionFactory.logSumExp(SDVariable arg,
boolean keepDims,
int... dimension) |
SDVariable |
DifferentialFunctionFactory.lossAbsoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossAbsoluteDifferenceBP(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossCosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension) |
SDVariable[] |
DifferentialFunctionFactory.lossCosineDistanceBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension) |
SDVariable |
DifferentialFunctionFactory.lossHinge(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossHingeBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossHuber(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta) |
SDVariable[] |
DifferentialFunctionFactory.lossHuberBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta) |
SDVariable |
DifferentialFunctionFactory.lossL2(SDVariable var) |
SDVariable |
DifferentialFunctionFactory.lossLog(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon) |
SDVariable[] |
DifferentialFunctionFactory.lossLogBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon) |
SDVariable |
DifferentialFunctionFactory.lossLogPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossLogPoissonBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossLogPoissonFull(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossLogPoissonFullBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossMeanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossMeanPairwiseSquaredErrorBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossMeanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable[] |
DifferentialFunctionFactory.lossMeanSquaredErrorBp(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce) |
SDVariable |
DifferentialFunctionFactory.lossSigmoidCrossEntropy(SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SDVariable[] |
DifferentialFunctionFactory.lossSigmoidCrossEntropyBp(SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SDVariable |
DifferentialFunctionFactory.lossSoftmaxCrossEntropy(SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SDVariable[] |
DifferentialFunctionFactory.lossSoftmaxCrossEntropyBp(SDVariable labels,
SDVariable logits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing) |
SDVariable |
DifferentialFunctionFactory.lossSoftmaxCrossEntropyWithLogits(SDVariable labels,
SDVariable logits,
SDVariable weights,
int classDim) |
SDVariable[] |
DifferentialFunctionFactory.lossSoftmaxCrossEntropyWithLogitsBp(SDVariable labels,
SDVariable logits,
SDVariable weights,
int classDim) |
SDVariable |
DifferentialFunctionFactory.lossSparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels) |
SDVariable[] |
DifferentialFunctionFactory.lossSparseSoftmaxCrossEntropyBp(SDVariable logits,
SDVariable labels) |
SDVariable |
DifferentialFunctionFactory.lt(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.lt(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.lte(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.lte(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.ltei(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.lti(SDVariable functionInput,
double functionInput1) |
SDVariable |
DifferentialFunctionFactory.lti(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.ltOrEqi(SDVariable functionInput,
SDVariable functionInput1) |
SDVariable |
DifferentialFunctionFactory.manhattanDistance(SDVariable iX,
SDVariable i_y,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.matchCondition(SDVariable in,
Condition condition)
Returns a boolean mask with the same shape as the input, true where the condition is satisfied
|
SDVariable |
DifferentialFunctionFactory.matchConditionCount(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Returns the number of elements that satisfy the condition
|
SDVariable |
DifferentialFunctionFactory.matrixBandPart(SDVariable input,
SDVariable minLower,
SDVariable maxUpper) |
SDVariable |
DifferentialFunctionFactory.matrixDeterminant(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.matrixInverse(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.max(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.max(SDVariable first,
SDVariable second) |
SDVariable |
DifferentialFunctionFactory.maxBp(SDVariable i_x,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.maxPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
Max pooling 2d operation.
|
SDVariable |
DifferentialFunctionFactory.maxPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
Max pooling 3d operation.
|
SDVariable[] |
DifferentialFunctionFactory.maxPoolWithArgmaxs(SDVariable x,
Pooling2DConfig pooling2DConfig) |
SDVariable |
DifferentialFunctionFactory.mean(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.meanBp(SDVariable in,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.merge(SDVariable... inputs) |
SDVariable |
DifferentialFunctionFactory.mergeAdd(SDVariable... differentialFunctions) |
SDVariable |
DifferentialFunctionFactory.mergeAvg(SDVariable... differentialFunctions) |
SDVariable |
DifferentialFunctionFactory.mergeMax(SDVariable... differentialFunctions) |
SDVariable[] |
DifferentialFunctionFactory.meshgrid(boolean cartesian,
SDVariable... inputs) |
SDVariable |
DifferentialFunctionFactory.min(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.min(SDVariable first,
SDVariable second) |
SDVariable |
DifferentialFunctionFactory.minBp(SDVariable i_x,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.mishDerivative(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.mmul(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.mmul(SDVariable x,
SDVariable y,
MMulTranspose mMulTranspose) |
List<SDVariable> |
DifferentialFunctionFactory.mmulBp(SDVariable x,
SDVariable y,
SDVariable eps,
MMulTranspose mt) |
SDVariable |
DifferentialFunctionFactory.mod(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.modBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable[] |
DifferentialFunctionFactory.moments(SDVariable input,
int... axes) |
SDVariable |
DifferentialFunctionFactory.mul(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.mul(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.mulBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.muli(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.muli(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled) |
List<SDVariable> |
DifferentialFunctionFactory.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights) |
List<SDVariable> |
DifferentialFunctionFactory.multiHeadDotProductAttentionBp(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable gradient,
SDVariable mask,
boolean scaled) |
SDVariable |
DifferentialFunctionFactory.neg(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.neq(SDVariable iX,
double i_y) |
SDVariable |
DifferentialFunctionFactory.neq(SDVariable iX,
SDVariable i_y) |
SDVariable |
DifferentialFunctionFactory.neqi(SDVariable iX,
double i_y) |
SDVariable |
DifferentialFunctionFactory.neqi(SDVariable iX,
SDVariable i_y) |
SDVariable |
DifferentialFunctionFactory.nextIteration(SDVariable x) |
SDVariable |
DifferentialFunctionFactory.noop(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.norm1(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.norm1Bp(SDVariable preReduceIn,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.norm2(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.norm2Bp(SDVariable preReduceIn,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable[] |
DifferentialFunctionFactory.normalizeMoments(SDVariable counts,
SDVariable means,
SDVariable variances,
double shift) |
SDVariable |
DifferentialFunctionFactory.normmax(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.normmaxBp(SDVariable preReduceIn,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.onehot(SDVariable indices,
int depth) |
SDVariable |
DifferentialFunctionFactory.onehot(SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.onesLike(String name,
SDVariable input,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.or(SDVariable iX,
SDVariable i_y) |
SDVariable |
DifferentialFunctionFactory.pad(SDVariable input,
SDVariable padding,
Pad.Mode mode,
double padValue) |
SDVariable |
DifferentialFunctionFactory.parallel_stack(SDVariable[] values) |
SDVariable |
DifferentialFunctionFactory.permute(SDVariable iX,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.permute(SDVariable in,
SDVariable dimensions) |
SDVariable |
DifferentialFunctionFactory.polygamma(SDVariable n,
SDVariable x) |
SDVariable |
DifferentialFunctionFactory.pow(SDVariable iX,
double i_y) |
SDVariable |
DifferentialFunctionFactory.pow(SDVariable x,
SDVariable y) |
SDVariable |
DifferentialFunctionFactory.powDerivative(SDVariable iX,
double pow) |
SDVariable |
DifferentialFunctionFactory.prelu(SDVariable x,
SDVariable alpha,
int... sharedAxes) |
SDVariable[] |
DifferentialFunctionFactory.preluBp(SDVariable in,
SDVariable alpha,
SDVariable epsilon,
int... sharedAxes) |
SDVariable |
DifferentialFunctionFactory.prod(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.prodBp(SDVariable preReduceInput,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.randomBernoulli(double p,
SDVariable shape) |
SDVariable |
DifferentialFunctionFactory.randomExponential(double lambda,
SDVariable shape)
Exponential distribution: P(x) = lambda * exp(-lambda * x)
|
SDVariable |
DifferentialFunctionFactory.randomNormal(double mean,
double std,
SDVariable shape) |
SDVariable |
DifferentialFunctionFactory.randomUniform(double min,
double max,
SDVariable shape,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.range(SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.rank(SDVariable df) |
SDVariable |
DifferentialFunctionFactory.rdiv(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.rdiv(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.rdivBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.rdivi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.rdivi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.reciprocal(SDVariable a) |
SDVariable |
DifferentialFunctionFactory.reductionBroadcastableWithOrigShape(int origRank,
int[] reduceDims,
SDVariable toExpand)
Add 1s as required to make the array broadcastable with the original (pre-reduce) array.
|
SDVariable |
DifferentialFunctionFactory.reductionBroadcastableWithOrigShape(SDVariable origInput,
SDVariable axis,
SDVariable toExpand) |
SDVariable |
DifferentialFunctionFactory.reductionShape(SDVariable shape,
SDVariable axis,
boolean keepDim) |
SDVariable |
DifferentialFunctionFactory.relu(SDVariable iX,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.relu6(SDVariable iX,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.relu6Derivative(SDVariable iX,
SDVariable wrt,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.reluDerivative(SDVariable input,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.reluLayer(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
DifferentialFunctionFactory.repeat(SDVariable iX,
int axis) |
void |
DifferentialFunction.replaceArg(int i,
SDVariable newArg) |
SDVariable |
DifferentialFunctionFactory.replaceWhere(SDVariable to,
Number set,
Condition condition) |
SDVariable |
DifferentialFunctionFactory.replaceWhere(SDVariable to,
SDVariable from,
Condition condition) |
SDVariable |
DifferentialFunctionFactory.reshape(SDVariable iX,
int[] shape) |
SDVariable |
DifferentialFunctionFactory.reshape(SDVariable iX,
long[] shape) |
SDVariable |
DifferentialFunctionFactory.reshape(SDVariable iX,
SDVariable shape) |
SDVariable |
DifferentialFunctionFactory.reverse(SDVariable x,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.reverseSequence(SDVariable x,
SDVariable seq_lengths) |
SDVariable |
DifferentialFunctionFactory.reverseSequence(SDVariable x,
SDVariable seq_lengths,
int seq_dim,
int batch_dim) |
SDVariable |
DifferentialFunctionFactory.roll(SDVariable input,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.rotl(SDVariable ix,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.rotr(SDVariable ix,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.round(SDVariable ix) |
SDVariable |
DifferentialFunctionFactory.rshift(SDVariable ix,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.rsqrt(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.rsub(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.rsub(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.rsubBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.rsubi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.rsubi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.scalarFloorMod(SDVariable in,
Number num) |
SDVariable |
DifferentialFunctionFactory.scalarMax(SDVariable in,
Number num) |
SDVariable |
DifferentialFunctionFactory.scalarMin(SDVariable in,
Number num) |
SDVariable |
DifferentialFunctionFactory.scalarSet(SDVariable in,
Number num) |
SDVariable |
DifferentialFunctionFactory.scatterAdd(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterDiv(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
DifferentialFunctionFactory.sconv2d(SDVariable[] inputs,
Conv2DConfig conv2DConfig)
Separable Conv2d operation.
|
SDVariable |
DifferentialFunctionFactory.segmentMax(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMaxBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentMean(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMeanBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentMin(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentMinBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentProd(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentProdBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.segmentSum(SDVariable data,
SDVariable segmentIds) |
SDVariable[] |
DifferentialFunctionFactory.segmentSumBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient) |
SDVariable |
DifferentialFunctionFactory.selu(SDVariable arg) |
SDVariable |
DifferentialFunctionFactory.seluBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.seluDerivative(SDVariable arg)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType) |
SDVariable |
DifferentialFunctionFactory.setDiag(SDVariable in,
SDVariable diag) |
SDVariable |
DifferentialFunctionFactory.shannonEntropy(SDVariable in,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.shape(SDVariable df) |
SDVariable |
DifferentialFunctionFactory.shift(SDVariable ix,
SDVariable shift) |
SDVariable |
DifferentialFunctionFactory.sigmoid(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sigmoidDerivative(SDVariable iX,
SDVariable wrt) |
SDVariable |
DifferentialFunctionFactory.sign(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sin(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.sinh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.size(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.sizeAt(SDVariable in,
int dimension) |
SDVariable |
DifferentialFunctionFactory.slice(SDVariable input,
int[] begin,
int[] size) |
SDVariable |
DifferentialFunctionFactory.slice(SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
DifferentialFunctionFactory.sliceBp(SDVariable input,
SDVariable gradient,
int[] begin,
int[] size) |
SDVariable |
DifferentialFunctionFactory.sliceBp(SDVariable input,
SDVariable gradient,
SDVariable begin,
SDVariable size) |
SDVariable |
DifferentialFunctionFactory.softmax(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softmax(SDVariable iX,
int dimension) |
SDVariable |
DifferentialFunctionFactory.softmaxDerivative(SDVariable functionInput,
SDVariable wrt,
Integer dimension) |
SDVariable |
DifferentialFunctionFactory.softplus(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softsign(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.softsignBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.softsignDerivative(SDVariable iX)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.spaceToBatch(SDVariable differentialFunction,
int[] blocks,
int[][] padding) |
SDVariable |
DifferentialFunctionFactory.spaceToDepth(SDVariable differentialFunction,
int blocksSize,
String dataFormat) |
SDVariable |
DifferentialFunctionFactory.sqrt(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.square(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.squaredDifference(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.squaredNorm(SDVariable input,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.squaredNormBp(SDVariable preReduceInput,
SDVariable gradient,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.squeeze(SDVariable iX,
int... axis) |
SDVariable |
DifferentialFunctionFactory.stack(SDVariable[] values,
int axis) |
SDVariable |
DifferentialFunctionFactory.standardize(SDVariable i_x,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.standardizeBp(SDVariable stdInput,
SDVariable gradient,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.std(SDVariable i_x,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.stdBp(SDVariable stdInput,
SDVariable gradient,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.step(SDVariable in,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable input,
long[] begin,
long[] end,
long[] strides) |
SDVariable |
DifferentialFunctionFactory.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSliceBp(SDVariable in,
SDVariable grad,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.stridedSliceBp(SDVariable in,
SDVariable grad,
SDVariable begin,
SDVariable end,
SDVariable strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
DifferentialFunctionFactory.sub(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.sub(SDVariable differentialFunction,
SDVariable i_v) |
List<SDVariable> |
DifferentialFunctionFactory.subBp(SDVariable x,
SDVariable y,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.subi(SDVariable differentialFunction,
double i_v) |
SDVariable |
DifferentialFunctionFactory.subi(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.sum(SDVariable i_x,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.sumBp(SDVariable i_x,
SDVariable grad,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.swish(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.swishDerivative(SDVariable iX) |
SDVariable[] |
DifferentialFunctionFactory.switchOp(SDVariable input,
SDVariable predicate) |
SDVariable |
DifferentialFunctionFactory.tan(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.tanh(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.tanhDerivative(SDVariable iX,
SDVariable wrt) |
SDVariable |
DifferentialFunctionFactory.tanhRational(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.tanhRationalBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.tanhRationalDerivative(SDVariable in)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.tanhRectified(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.tanhRectifiedBp(SDVariable in,
SDVariable epsilon) |
SDVariable |
DifferentialFunctionFactory.tanhRectifiedDerivative(SDVariable in)
Deprecated.
|
SDVariable |
DifferentialFunctionFactory.tensorMmul(SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
DifferentialFunctionFactory.thresholdRelu(SDVariable in,
SDVariable epsilon,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.thresholdReluBp(SDVariable in,
SDVariable epsilon,
double cutoff) |
SDVariable |
DifferentialFunctionFactory.tile(SDVariable iX,
int[] repeat) |
SDVariable |
DifferentialFunctionFactory.tile(SDVariable iX,
SDVariable repeat) |
SDVariable |
DifferentialFunctionFactory.tileBp(SDVariable in,
SDVariable grad,
int[] repeat) |
SDVariable |
DifferentialFunctionFactory.tileBp(SDVariable in,
SDVariable repeat,
SDVariable grad) |
SDVariable |
DifferentialFunctionFactory.toggleBits(SDVariable x) |
SDVariable |
DifferentialFunctionFactory.trace(SDVariable in) |
SDVariable |
DifferentialFunctionFactory.transpose(SDVariable iX) |
SDVariable |
DifferentialFunctionFactory.truncatedDiv(SDVariable differentialFunction,
SDVariable i_v) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMaxBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMeanBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentMinBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentProdBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentSqrtNBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable |
DifferentialFunctionFactory.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unsortedSegmentSumBp(SDVariable data,
SDVariable segmentIds,
SDVariable gradient,
int numSegments) |
SDVariable[] |
DifferentialFunctionFactory.unstack(SDVariable value,
int axis) |
SDVariable[] |
DifferentialFunctionFactory.unstack(SDVariable value,
int axis,
int num) |
SDVariable |
DifferentialFunctionFactory.upsampling2d(SDVariable input,
boolean nchw,
int scaleH,
int scaleW) |
SDVariable |
DifferentialFunctionFactory.upsampling2dBp(SDVariable input,
SDVariable gradient,
boolean nchw,
int scaleH,
int scaleW) |
void |
DifferentialFunctionFactory.validateDifferentialFunctionGraph(SDVariable function) |
void |
DifferentialFunctionFactory.validateDifferentialFunctionsameDiff(SDVariable function) |
SDVariable |
DifferentialFunctionFactory.variance(SDVariable i_x,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.varianceBp(SDVariable stdInput,
SDVariable gradient,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SDVariable |
DifferentialFunctionFactory.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights) |
SDVariable |
DifferentialFunctionFactory.xor(SDVariable ix,
SDVariable iy) |
SDVariable |
DifferentialFunctionFactory.xwPlusB(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
DifferentialFunctionFactory.zeroFraction(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.zerosLike(SDVariable input) |
SDVariable |
DifferentialFunctionFactory.zerosLike(String name,
SDVariable input) |
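Many of the reduction factory methods listed above (`logSumExp`, `max`, `sum`, and so on) correspond to straightforward array math. As an illustration only (plain Java, not the ND4J `INDArray` API), the numerically stable reduction that `logSumExp` is conventionally defined as can be sketched like this:

```java
// Sketch of the numerically stable log-sum-exp reduction conventionally
// computed as: logSumExp(x) = m + log(sum_i exp(x_i - m)), with m = max(x).
// Plain-Java stand-in; the real op operates on INDArrays along dimensions.
public class LogSumExpSketch {
    public static double logSumExp(double[] x) {
        double m = Double.NEGATIVE_INFINITY;
        for (double v : x) m = Math.max(m, v);   // m = max(x), guards against overflow
        double sum = 0.0;
        for (double v : x) sum += Math.exp(v - m); // exponents are now <= 0
        return m + Math.log(sum);
    }

    public static void main(String[] args) {
        // Naive log(sum(exp(x))) would overflow for x = {1000, 1000};
        // the shifted form returns 1000 + log(2) without overflow.
        System.out.println(logSumExp(new double[]{1000.0, 1000.0}));
    }
}
```

The max-shift is what makes the reduction usable on large activations; without it, `exp(1000)` overflows to infinity.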
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
DifferentialFunction.diff(List<SDVariable> i_v1)
Perform automatic differentiation with respect to the input variables
|
abstract List<SDVariable> |
DifferentialFunction.doDiff(List<SDVariable> f1)
The actual implementation for automatic differentiation.
|
Constructor and Description |
---|
DifferentialFunction(SameDiff sameDiff,
boolean inPlace,
SDVariable[] args)
Add the various arguments for
this function
|
DifferentialFunction(SameDiff sameDiff,
SDVariable[] args) |
Modifier and Type | Method and Description |
---|---|
ListenerVariables.Builder |
ListenerVariables.Builder.evaluationVariables(SDVariable... variables)
Add required variables for evaluation
|
ListenerVariables.Builder |
ListenerVariables.Builder.inferenceVariables(SDVariable... variables)
Add required variables for inference
|
ListenerVariables.Builder |
ListenerVariables.Builder.requireVariables(Operation op,
SDVariable... variables)
Add required variables for the specified op
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.trainEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested training evaluations for a param/variable
|
ListenerVariables.Builder |
ListenerVariables.Builder.trainingVariables(SDVariable... variables)
Add required variables for training
|
ListenerEvaluations.Builder |
ListenerEvaluations.Builder.validationEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested validation evaluations for a param/variable
|
ListenerVariables.Builder |
ListenerVariables.Builder.validationVariables(SDVariable... variables)
Add required variables for validation
|
Modifier and Type | Method and Description |
---|---|
<T extends IEvaluation> |
EvaluationRecord.evaluation(SDVariable param)
Get the evaluation for a given param/variable
|
<T extends IEvaluation<T>> |
EvaluationRecord.evaluation(SDVariable param,
Class<T> evalClass)
Get the evaluation of a given type, for a given param/variable
|
IEvaluation |
EvaluationRecord.evaluation(SDVariable param,
int index)
Get the evaluation for param at the specified index
|
List<IEvaluation> |
EvaluationRecord.evaluations(SDVariable param)
Get evaluations for a given param/variable
|
double |
EvaluationRecord.getValue(SDVariable param,
IMetric metric)
Get the metric's value for the evaluation of the metric's type, for a given param/variable
|
double |
EvaluationRecord.getValue(SDVariable param,
int index,
IMetric metric)
Get the metric's value for the evaluation for a given param/variable at the given index
|
double |
LossCurve.lastMeanDelta(SDVariable loss)
Return the loss delta between the last epoch and the one before it, for a given variable.
|
float |
LossCurve.lastMeanLoss(SDVariable loss)
Return the mean loss value for a given variable on the last epoch.
|
float[] |
LossCurve.meanLoss(SDVariable loss)
Return all mean loss values for a given variable
|
float |
LossCurve.meanLoss(SDVariable loss,
int epoch)
Return the mean loss value for a given variable on a given epoch.
|
List<IEvaluation> |
History.trainingEval(SDVariable param)
Get the results of a training evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<Double> |
History.trainingEval(SDVariable param,
IMetric metric)
Get the results of a training evaluation on a given parameter for a given metric
Only works if there is only one evaluation with the given metric for param
|
List<IEvaluation> |
History.trainingEval(SDVariable param,
int index)
Get the results of a training evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<Double> |
History.trainingEval(SDVariable param,
int index,
IMetric metric)
Get the results of a training evaluation on a given parameter at a given index, for a given metric
Note that it returns all recorded evaluations.
|
List<IEvaluation> |
History.validationEval(SDVariable param)
Get the results of a validation evaluation on a given parameter
Only works if there is only one evaluation for param.
|
List<Double> |
History.validationEval(SDVariable param,
IMetric metric)
Get the results of a validation evaluation on a given parameter for a given metric
Only works if there is only one evaluation with the given metric for param
|
List<IEvaluation> |
History.validationEval(SDVariable param,
int index)
Get the results of a validation evaluation on a given parameter at a given index
Note that it returns all recorded evaluations.
|
List<Double> |
History.validationEval(SDVariable param,
int index,
IMetric metric)
Get the results of a validation evaluation on a given parameter at a given index, for a given metric
Note that it returns all recorded evaluations.
|
Modifier and Type | Method and Description |
---|---|
<X extends SDVariable> |
SameDiff.setupFunction(X function)
Attempts to insert the
DifferentialFunction reference into this SameDiff instance. |
Modifier and Type | Method and Description |
---|---|
SDVariable |
SDVariable.add(double scalar)
|
SDVariable |
SDVariable.add(SDVariable other)
|
SDVariable |
SDVariable.add(String varName,
double scalar)
Scalar addition:
out = this + scalar Output variable has the same shape as the input variable |
SDVariable |
SDVariable.add(String name,
SDVariable x)
Addition operation: elementwise
this + x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
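The broadcast behaviour described in these arithmetic entries can be sketched with a minimal snippet (illustrative names; assumes ND4J/SameDiff on the classpath):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class AddBroadcastSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // [2,3] matrix plus [1,3] row vector: the shapes are broadcastable,
        // so the output shape is the broadcast shape [2,3]
        SDVariable a = sd.constant("a", Nd4j.ones(2, 3));
        SDVariable b = sd.constant("b", Nd4j.ones(1, 3));
        SDVariable sum = a.add("sum", b);
        System.out.println(java.util.Arrays.toString(sum.eval().shape()));
    }
}
```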
SDVariable |
SameDiff.addVariable(SDVariable variable)
Add the specified variable to this SameDiff instance
|
SDVariable |
SDVariable.argmax(int... dimensions)
|
SDVariable |
SDVariable.argmax(String name,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
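The keepDims behaviour noted above can be sketched as follows (a sketch, not verbatim library documentation; names are illustrative):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ArgmaxSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.constant("x",
                Nd4j.createFromArray(new double[][]{{1, 9, 3}, {7, 2, 5}})); // shape [2,3]
        // keepDims = true: the reduced dimension is kept with size 1 -> shape [2,1]
        SDVariable keep = x.argmax("keep", true, 1);
        // keepDims omitted: the reduced dimension is removed -> shape [2]
        SDVariable drop = x.argmax("drop", 1);
        System.out.println(java.util.Arrays.toString(keep.eval().shape()));
        System.out.println(java.util.Arrays.toString(drop.eval().shape()));
    }
}
```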
SDVariable |
SDVariable.argmax(String name,
int... dimensions)
|
SDVariable |
SDVariable.argmin(int... dimensions)
|
SDVariable |
SDVariable.argmin(String name,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.argmin(String name,
int... dimensions)
|
SDVariable |
SDVariable.assign(Number value)
Return a variable with equal shape to the input, but all elements set to the specified value
|
SDVariable |
SDVariable.castTo(DataType dataType) |
SDVariable |
SDVariable.castTo(String name,
DataType dataType) |
SDVariable |
SDVariable.clone(SameDiff sd) |
SDVariable |
SameDiff.constant(double value)
Create a new double scalar constant (rank 0) with the specified value.
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(float value)
Create a new float scalar constant (rank 0) with the specified value
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(INDArray constant)
Create an SDVariable with a fixed/constant value, with a generated name
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(int value)
Create a new integer scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(long value)
Create a new long scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
DataType dataType,
Number value)
Create a new scalar constant (rank 0) with the specified value and datatype
|
SDVariable |
SameDiff.constant(String name,
double value)
Create a new double scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
float value)
Create a new float scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
INDArray constant)
Create an SDVariable with a fixed/constant value
Constants are not modified by training/backprop. |
SDVariable |
SameDiff.constant(String name,
int value)
Create a new integer scalar constant (rank 0) with the specified value
|
SDVariable |
SameDiff.constant(String name,
long value)
Create a new long scalar constant (rank 0) with the specified value
|
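The constant(...) overloads above can be sketched together (illustrative; constants hold a fixed value and are not updated by training/backprop):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ConstantSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Rank-0 scalar constant with an explicit name
        SDVariable three = sd.constant("three", 3.0);
        // Array constant: the value is fixed and not modified by backprop
        SDVariable ones = sd.constant("ones", Nd4j.ones(2, 2));
        System.out.println(three.eval().rank());   // a scalar has rank 0
        System.out.println(java.util.Arrays.toString(ones.eval().shape()));
    }
}
```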
SDVariable |
SDVariable.convertToConstant()
Convert this variable to a constant.
|
SDVariable |
SameDiff.convertToConstant(SDVariable variable)
Convert the specified variable to a constant.
|
SDVariable |
SDVariable.convertToVariable()
Convert this variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
SDVariable |
SameDiff.convertToVariable(SDVariable constant)
Convert the specified variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
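A minimal sketch of freezing and un-freezing a variable with these conversion methods (illustrative names; per the entries above, this works for variables, constants, and placeholders, but not ARRAY type variables):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ConvertSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable w = sd.var("w", Nd4j.rand(2, 2)); // VARIABLE type: trainable
        w.convertToConstant();   // freeze: no longer modified by backprop
        w.convertToVariable();   // un-freeze: trainable again
    }
}
```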
SDVariable |
SameDiffNoArgSingleLambda.define(SameDiff sameDiff) |
SDVariable[] |
SameDiffFunctionDefinition.define(SameDiff sameDiff,
Map<String,INDArray> inputs,
SDVariable[] variableInputs) |
SDVariable |
SameDiffSingleLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable[] |
SameDiffLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable |
SDVariable.div(double scalar)
|
SDVariable |
SDVariable.div(SDVariable x)
|
SDVariable |
SDVariable.div(String varName,
double scalar)
Scalar division:
out = this / scalar Output variable has the same shape as the input variable |
SDVariable |
SDVariable.div(String name,
SDVariable x)
Division operation: elementwise
this / x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.dot(SDVariable other,
int... dimensions)
|
SDVariable |
SDVariable.dot(String name,
SDVariable other,
int... dimensions)
Matrix dot product: out = dot(this,other, dimensions)
|
SDVariable |
SDVariable.dup()
Create a new SDVariable, the contents of which are copied from this variable
|
SDVariable |
SDVariable.eq(double value)
|
SDVariable |
SDVariable.eq(SDVariable other)
|
SDVariable |
SDVariable.eq(String name,
double value)
Equals operation: elementwise
this == value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.eq(String name,
SDVariable other)
Equal to operation: elementwise
this == y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
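The 0/1-mask behaviour of the comparison ops can be sketched like this (illustrative; the cast before the reduction is defensive, since the comparison result may have a boolean datatype in recent versions):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.factory.Nd4j;

public class EqMaskSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.constant("x", Nd4j.createFromArray(1.0, 2.0, 2.0, 3.0));
        // 1 where x == 2, 0 otherwise; same shape as the input
        SDVariable mask = x.eq("mask", 2.0);
        // Summing the mask counts how many elements satisfied the condition
        SDVariable count = mask.castTo(DataType.DOUBLE).sum("count");
        System.out.println(count.eval().getDouble(0));
    }
}
```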
SDVariable |
SameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
SDVariable |
SDVariable.fdiv(String name,
SDVariable x)
Floor division operation: elementwise
this // x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function)
Generate the variables based on the given input op
and return the output variable names.
|
SDVariable[] |
SameDiff.generateOutputVariableForOp(DifferentialFunction function,
String baseName,
boolean isImport)
Generate the variables based on the given input op and return the output variable names.
|
SDVariable |
SDVariable.get(SDIndex... indices)
Get a variable with content equal to a specified sub-array of this variable.
Can be used (for example) to get rows, columns, sub-matrices, etc. |
SDVariable |
SameDiff.getGradForVariable(String varName)
Get the gradient for the variable with the specified name.
The gradient variable is the variable that represents the derivative of the loss function with respect to the output of this variable. |
SDVariable |
SDVariable.getGradient()
The gradient variable is the variable that represents the derivative of the loss function with respect
to the output of this variable.
|
SDVariable[] |
SameDiff.getInputVariablesForOp(DifferentialFunction function)
Get the input variable(s) for the specified differential function
|
SDVariable[] |
SameDiff.getOutputVariablesForOp(DifferentialFunction function)
Get the output variable(s) for the specified differential function
|
SDVariable |
SameDiff.getVariable(String name)
Get the variable with the specified name
|
SDVariable |
SameDiff.grad(String varName)
Get the gradient for the variable with the specified variable name.
|
SDVariable |
SDVariable.gradient()
Alias for the gradient variable - same as
getGradient() . |
SDVariable |
SDVariable.gt(double value)
|
SDVariable |
SDVariable.gt(SDVariable other)
|
SDVariable |
SDVariable.gt(String name,
double value)
Greater than operation: elementwise
this > value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.gt(String name,
SDVariable other)
Greater than operation: elementwise
this > y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.gte(double value)
|
SDVariable |
SDVariable.gte(SDVariable other)
|
SDVariable |
SDVariable.gte(String name,
double value)
Greater than or equals operation: elementwise
this >= value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.gte(String name,
SDVariable other)
Greater than or equal to operation: elementwise
this >= y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
ArgumentInterceptor.intercept(SDVariable argument) |
SDVariable |
SameDiff.invokeFunctionOn(String functionName,
SameDiff with) |
SDVariable |
SameDiff.invokeGraphOn(SameDiff sameDiff) |
SDVariable |
SDVariable.lt(double value)
|
SDVariable |
SDVariable.lt(SDVariable other)
|
SDVariable |
SDVariable.lt(String name,
double value)
Less than operation: elementwise
this < value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.lt(String name,
SDVariable other)
Less than operation: elementwise
this < y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.lte(double value)
|
SDVariable |
SDVariable.lte(SDVariable other)
|
SDVariable |
SDVariable.lte(String name,
double value)
Less than or equals operation: elementwise
this <= value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.lte(String name,
SDVariable other)
Less than or equal to operation: elementwise
this <= y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.max(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.max(int... dimensions)
|
SDVariable |
SDVariable.max(String name,
boolean keepDims,
int... dimensions)
Maximum array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.max(String name,
int... dimensions)
|
SDVariable |
SDVariable.mean(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.mean(int... dimensions)
|
SDVariable |
SDVariable.mean(String name,
boolean keepDims,
int... dimensions)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.mean(String name,
int... dimensions)
|
SDVariable |
SDVariable.min(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.min(int... dimensions)
|
SDVariable |
SDVariable.min(String name,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDVariable.min(String name,
int... dimensions)
|
SDVariable |
SDVariable.minus(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.minus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.mmul(SDVariable other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other,
MMulTranspose mMulTranspose)
Matrix multiplication: out = mmul(this,other)
|
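A quick sketch of mmul shape semantics (illustrative names):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class MmulSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable a = sd.constant("a", Nd4j.ones(2, 3));
        SDVariable b = sd.constant("b", Nd4j.ones(3, 4));
        // [2,3] x [3,4] -> [2,4]; with all-ones inputs every entry is 3.0
        SDVariable c = a.mmul("c", b);
        System.out.println(java.util.Arrays.toString(c.eval().shape()));
    }
}
```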
SDVariable |
SDVariable.mod(String name,
SDVariable x)
Modulo operation: elementwise
this % x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.mul(double scalar)
|
SDVariable |
SDVariable.mul(SDVariable x)
|
SDVariable |
SDVariable.mul(String varName,
double scalar)
Scalar multiplication:
out = this * scalar Output variable has the same shape as the input variable |
SDVariable |
SDVariable.mul(String name,
SDVariable x)
Multiplication operation: elementwise
this * x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.neg()
Negate op - returns a new variable with the values of the current variable negated
|
SDVariable |
SDVariable.neg(String name)
Negate op - returns a new variable with the values of the current variable negated
|
SDVariable |
SDVariable.neq(double value)
See
neq(SDVariable) |
SDVariable |
SDVariable.neq(SDVariable other)
|
SDVariable |
SDVariable.neq(String name,
double value)
Not equals operation: elementwise
this != value Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDVariable.neq(String name,
SDVariable other)
Not equal to operation: elementwise
this != y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.norm1(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.norm1(int... dimensions)
|
SDVariable |
SDVariable.norm1(String name,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.norm1(String name,
int... dimensions)
|
SDVariable |
SDVariable.norm2(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.norm2(int... dimensions)
|
SDVariable |
SDVariable.norm2(String name,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
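The norm reductions above can be sketched together (illustrative names):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class NormSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.constant("x",
                Nd4j.createFromArray(new double[][]{{3, 4}, {0, 0}}));
        // norm1 = sum of |x| per row: [7, 0]; keepDims=false -> shape [2]
        SDVariable l1 = x.norm1("l1", false, 1);
        // norm2 = sqrt(sum of x^2); keepDims=true -> shape [2,1], rank preserved
        SDVariable l2 = x.norm2("l2", true, 1);
        System.out.println(java.util.Arrays.toString(l2.eval().shape()));
    }
}
```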
SDVariable |
SDVariable.norm2(String name,
int... dimensions)
|
SDVariable |
SDVariable.normmax(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.normmax(int... dimensions)
|
SDVariable |
SDVariable.normmax(String name,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions:
out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.normmax(String name,
int... dimensions)
|
SDVariable |
SameDiff.one(String name,
DataType dataType,
int... shape)
Create a new variable with the specified shape, with all values initialized to 1.0.
|
SDVariable |
SameDiff.one(String name,
DataType dataType,
long... shape)
Create a new variable with the specified shape, with all values initialized to 1.0.
|
SDVariable |
SameDiff.one(String name,
int... shape)
|
SDVariable |
SameDiff.one(String name,
long... shape)
|
SDVariable |
SDVariable.permute(int... dimensions)
Permute the dimensions of the current variable according to the specified permutation indices.
Example: if the current variable has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDVariable.permute(SDVariable dimensions) |
SDVariable |
SameDiff.placeHolder(String name,
DataType dataType,
long... shape)
Create a placeholder variable.
|
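A placeholder has no value until one is fed at execution time. A sketch (this assumes the `output(Map, String...)` execution method available in recent SameDiff versions):

```java
import java.util.Collections;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class PlaceholderSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // -1 in the shape means "any size" (e.g. the minibatch dimension)
        SDVariable in = sd.placeHolder("in", DataType.FLOAT, -1, 3);
        SDVariable out = in.mul("out", 2.0);
        // Feed a concrete [4,3] array for the placeholder at execution time
        INDArray result = sd.output(
                Collections.singletonMap("in", Nd4j.ones(4, 3)),
                "out").get("out");
        System.out.println(java.util.Arrays.toString(result.shape()));
    }
}
```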
SDVariable |
SDVariable.plus(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.plus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.pow(double scalar)
|
SDVariable |
SDVariable.pow(String varName,
double scalar)
Scalar power operation:
out = this ^ scalar Output variable has the same shape as the input variable |
SDVariable |
SDVariable.prod(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.prod(int... dimensions)
|
SDVariable |
SDVariable.prod(String name,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.prod(String name,
int... dimensions)
|
SDVariable |
SDVariable.rank()
Get the rank of this variable as a dynamic SDVariable
|
SDVariable |
SDVariable.rdiv(double scalar)
|
SDVariable |
SDVariable.rdiv(SDVariable sameDiffVariable)
|
SDVariable |
SDVariable.rdiv(String varName,
double scalar)
Scalar reverse division:
out = scalar / this Output variable has the same shape as the input variable |
SDVariable |
SDVariable.rdiv(String name,
SDVariable x)
Reverse division operation: elementwise
x / this If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.rename(String newName)
Rename this variable to a new name.
|
SDVariable |
SDVariable.reshape(int... newShape)
Reshape the current variable to the specified shape.
|
SDVariable |
SDVariable.reshape(long... newShape)
Reshape the current variable to the specified shape.
|
SDVariable |
SDVariable.reshape(SDVariable newShape)
Reshape the current variable to the specified (dynamic) shape.
|
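The permute and reshape shape semantics, sketched (illustrative names):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class ShapeOpsSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.constant("x", Nd4j.linspace(1, 24, 24).reshape(2, 3, 4));
        // permute(2,0,1): output dim i takes the size of input dim dimensions[i],
        // so shape [2,3,4] -> [4,2,3]
        SDVariable p = x.permute(2, 0, 1);
        // reshape: same 24 elements, new shape [6,4]
        SDVariable r = x.reshape(6, 4);
        System.out.println(java.util.Arrays.toString(p.eval().shape()));
        System.out.println(java.util.Arrays.toString(r.eval().shape()));
    }
}
```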
SDVariable |
SDVariable.rsub(double scalar)
|
SDVariable |
SDVariable.rsub(SDVariable x)
|
SDVariable |
SDVariable.rsub(String varName,
double scalar)
Scalar reverse subtraction:
out = scalar - this Output variable has the same shape as the input variable |
SDVariable |
SDVariable.rsub(String name,
SDVariable x)
Reverse subtraction operation: elementwise
x - this If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SameDiff.scalar(String name,
DataType dataType,
Number value)
Create a new scalar (rank 0) SDVariable with the specified value and datatype
|
SDVariable |
SameDiff.scalar(String name,
double value)
Create a new double scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
float value)
Create a new float scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
int value)
Create a new integer scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SameDiff.scalar(String name,
long value)
Create a new long scalar (rank 0) SDVariable with the specified value
|
SDVariable |
SDVariable.setArray(INDArray array)
Associate the specified array with this variable
|
SDVariable |
SDVariable.shape()
Get the shape of the array as a dynamic SDVariable
|
SDVariable |
SDVariable.squaredDifference(SDVariable x)
|
SDVariable |
SDVariable.squaredDifference(String name,
SDVariable x)
Squared difference operation:
(this - x)^2 |
SDVariable |
SDVariable.std(boolean biasCorrected,
int... dimensions)
|
SDVariable |
SDVariable.std(String name,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.std(String name,
boolean biasCorrected,
int... dimensions)
|
SDVariable |
SDVariable.sub(double scalar)
|
SDVariable |
SDVariable.sub(SDVariable x)
|
SDVariable |
SDVariable.sub(String varName,
double scalar)
Scalar subtraction:
out = this - scalar Output variable has the same shape as the input variable |
SDVariable |
SDVariable.sub(String name,
SDVariable x)
Subtraction operation: elementwise
this - x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.sum(boolean keepDims,
int... dimensions)
|
SDVariable |
SDVariable.sum(int... dimensions)
|
SDVariable |
SDVariable.sum(String name,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDVariable.sum(String name,
int... dimensions)
|
SDVariable |
SDVariable.times(double other)
For Kotlin operator interop
|
SDVariable |
SDVariable.times(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.truncatedDiv(SDVariable sameDiffVariable) |
SDVariable |
SDVariable.truncatedDiv(String varName,
SDVariable sameDiffVariable) |
SDVariable |
SameDiff.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable[] |
SameDiff.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames)
Updates the variable name property on the passed-in variables, their references in samediff, and returns the variables.
|
SDVariable |
SameDiff.var(DataType dataType,
int... shape)
Creates an
SDVariable with the specified shape and a generated name. Any array will be generated with all zeros for the values. This method creates a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(DataType dataType,
long... shape)
Creates an
SDVariable with the specified shape and a generated name. Any array will be generated with all zeros for the values. This method creates a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(INDArray arr)
Create an
SDVariable with a generated name, and associate the specified array with it. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(SDVariable v)
Initialize an
SDVariable reference, tying this variable to this SameDiff instance. |
SDVariable |
SameDiff.var(String name,
DataType dataType,
int... shape)
Creates an
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. |
SDVariable |
SameDiff.var(String name,
DataType dataType,
long... shape)
Creates an
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
INDArray arr)
Create an
SDVariable with the specified name, and associate the specified array with it. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
int... shape)
Creates an
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. |
SDVariable |
SameDiff.var(String name,
long... shape)
Creates an
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. |
SDVariable |
SameDiff.var(String name,
LongShapeDescriptor shapeDesc)
Creates an
SDVariable with the given shape and name. Any array will be generated with all zeros for the values. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
LongShapeDescriptor shape,
WeightInitScheme weightInitScheme)
Creates an
SDVariable with the given shape and name. The underlying array will be initialized using the specified weight initialization scheme. This is a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
VariableType variableType,
WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Variable initialization with a specified
WeightInitScheme
This method creates a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Variable initialization with a specified
WeightInitScheme
This method creates a VARIABLE type SDVariable - i.e., must be floating point, and is a trainable parameter. |
SDVariable |
SameDiff.var(String name,
WeightInitScheme weightInitScheme,
long... shape)
Variable initialization with a specified
WeightInitScheme . |
SDVariable |
SameDiff.var(WeightInitScheme weightInitScheme,
DataType dataType,
long... shape)
Creates an
SDVariable with the specified shape and a generated name. |
SDVariable |
SameDiff.zero(String name,
DataType dataType,
int... shape)
Create a new variable with the specified shape, with all values initialized to 0.
|
SDVariable |
SameDiff.zero(String name,
DataType dataType,
long... shape)
Create a new variable with the specified shape, with all values initialized to 0.
|
SDVariable |
SameDiff.zero(String name,
int... shape)
|
SDVariable |
SameDiff.zero(String name,
long... shape)
|
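The zero(...) and one(...) factories above, sketched side by side (illustrative names):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;

public class ZeroOneSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Pre-filled arrays: all 0.0 / all 1.0, with the given name, datatype and shape
        SDVariable z = sd.zero("z", DataType.FLOAT, 2, 3);
        SDVariable o = sd.one("o", DataType.FLOAT, 2, 3);
        System.out.println(z.eval().sumNumber());
        System.out.println(o.eval().sumNumber());
    }
}
```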
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SameDiff.getVariablesInScope(NameScope scope)
Gets all variables in a given name scope.
|
List<SDVariable> |
SameDiff.getVariablesInScope(String scope)
|
Map<String,SDVariable> |
SameDiff.variableMap()
Return a copy of the internal variable map
|
List<SDVariable> |
SameDiff.variables()
The list of all variables in the graph
|
Modifier and Type | Method and Description |
---|---|
SDVariable |
SDVariable.add(SDVariable other)
|
SDVariable |
SDVariable.add(String name,
SDVariable x)
Addition operation: elementwise
this + x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.addArgsFor(SDVariable[] variables,
DifferentialFunction function)
Adds incoming arguments for the specified differential function to the graph
|
void |
SDVariable.addControlDependency(SDVariable controlDependency)
Add a control dependency for this variable on the specified variable.
Control dependencies can be used to enforce the execution order. |
void |
SameDiff.addLossVariable(SDVariable variable)
|
void |
SameDiff.addOutgoingFor(SDVariable[] variables,
DifferentialFunction function)
Adds outgoing arguments to the graph for the specified DifferentialFunction
Also checks for input arguments and updates the graph adding an appropriate edge when the full graph is declared.
|
SDVariable |
SameDiff.addVariable(SDVariable variable)
Add the specified variable to this SameDiff instance
|
void |
SameDiff.assignArray(INDArray arr,
SDVariable variable)
Update the constant or variable type SDVariable with the values from the specified
array.
|
void |
SameDiff.associateArrayWithVariable(INDArray arr,
SDVariable variable)
Associate the array with the given variable.
|
SDVariable |
SameDiff.convertToConstant(SDVariable variable)
Convert the specified variable to a constant.
|
SDVariable |
SameDiff.convertToVariable(SDVariable constant)
Convert the specified variable to a VARIABLE type SDVariable.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
SDVariable[] |
SameDiffFunctionDefinition.define(SameDiff sameDiff,
Map<String,INDArray> inputs,
SDVariable[] variableInputs) |
SDVariable |
SameDiffSingleLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SDVariable[] |
SameDiffLambda.define(SameDiff sameDiff,
SDVariable[] inputs) |
SameDiff |
SameDiff.defineFunction(String function,
SameDiffFunctionDefinition functionDefinition,
SDVariable[] variables) |
SDVariable |
SDVariable.div(SDVariable x)
|
SDVariable |
SDVariable.div(String name,
SDVariable x)
Division operation: elementwise
this / x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.dot(SDVariable other,
int... dimensions)
|
SDVariable |
SDVariable.dot(String name,
SDVariable other,
int... dimensions)
Matrix dot product: out = dot(this,other, dimensions)
|
SDVariable |
SDVariable.eq(SDVariable other)
|
SDVariable |
SDVariable.eq(String name,
SDVariable other)
Equal to operation: elementwise
this == y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
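The comparison ops above return numeric arrays of 1s and 0s rather than booleans, and broadcast when shapes allow. A plain-Java sketch (illustrative only, not ND4J code; class and method names are hypothetical) of the eq semantics for a matrix compared against a broadcast row vector:

```java
import java.util.Arrays;

public class BroadcastEq {
    // Elementwise eq with row-vector broadcasting: out[i][j] = (m[i][j] == v[j]) ? 1 : 0
    static int[][] eq(double[][] m, double[] v) {
        int[][] out = new int[m.length][v.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++)
                out[i][j] = (m[i][j] == v[j]) ? 1 : 0;
        return out;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2, 3}, {3, 2, 1}};
        double[] v = {1, 2, 3};                    // broadcast down the rows
        System.out.println(Arrays.deepToString(eq(m, v))); // [[1, 1, 1], [0, 1, 0]]
    }
}
```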
SDVariable |
SameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
SDVariable |
SDVariable.fdiv(String name,
SDVariable x)
Floor division operation: elementwise
this // x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.gt(SDVariable other)
|
SDVariable |
SDVariable.gt(String name,
SDVariable other)
Greater than operation: elementwise
this > y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.gte(SDVariable other)
|
SDVariable |
SDVariable.gte(String name,
SDVariable other)
Greater than or equal to operation: elementwise
this >= y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
ArgumentInterceptor.intercept(SDVariable argument) |
SDVariable |
SDVariable.lt(SDVariable other)
|
SDVariable |
SDVariable.lt(String name,
SDVariable other)
Less than operation: elementwise
this < y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.lte(SDVariable other)
|
SDVariable |
SDVariable.lte(String name,
SDVariable other)
Less than or equal to operation: elementwise
this <= y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.minus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.mmul(SDVariable other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mmul(String name,
SDVariable other,
MMulTranspose mMulTranspose)
Matrix multiplication: out = mmul(this,other)
|
SDVariable |
SDVariable.mod(String name,
SDVariable x)
Modulo operation: elementwise
this % x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.mul(SDVariable x)
|
SDVariable |
SDVariable.mul(String name,
SDVariable x)
Multiplication operation: elementwise
this * x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.neq(SDVariable other)
|
SDVariable |
SDVariable.neq(String name,
SDVariable other)
Not equal to operation: elementwise
this != y If x and y arrays have equal shape, the output shape is the same as the inputs. Supports broadcasting: if x and y have different shapes and are broadcastable, the output shape is broadcast. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDVariable.permute(SDVariable dimensions) |
SDVariable |
SDVariable.plus(SDVariable other)
For Kotlin operator interop
|
SDVariable |
SDVariable.rdiv(SDVariable sameDiffVariable)
|
SDVariable |
SDVariable.rdiv(String name,
SDVariable x)
Reverse division operation: elementwise
x / this If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.replaceArgFor(int i,
SDVariable newArg,
DifferentialFunction function)
Replaces the argument at i with newArg for function
Does not use (or remove) ArgumentInterceptor stuff
|
SDVariable |
SDVariable.reshape(SDVariable newShape)
Reshape the current variable to the specified (dynamic) shape.
|
SDVariable |
SDVariable.rsub(SDVariable x)
|
SDVariable |
SDVariable.rsub(String name,
SDVariable x)
Reverse subtraction operation: elementwise
x - this If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
void |
SameDiff.setGradientForVariableName(String variableName,
SDVariable variable)
Assign a SDVariable to represent the gradient of the SDVariable with the specified name
|
void |
SameDiff.setLossVariables(SDVariable... lossVariables)
|
SDVariable |
SDVariable.squaredDifference(SDVariable x)
|
SDVariable |
SDVariable.squaredDifference(String name,
SDVariable x)
Squared difference operation:
(this - x)^2 |
SDVariable |
SDVariable.sub(SDVariable x)
|
SDVariable |
SDVariable.sub(String name,
SDVariable x)
Subtraction operation: elementwise
this - x If this and x variables have equal shape, the output shape is the same as the inputs. Supports broadcasting: if this and x have different shapes and are broadcastable, the output shape is broadcast. |
SDVariable |
SDVariable.times(SDVariable other)
For Kotlin operator interop
|
TrainingConfig.Builder |
TrainingConfig.Builder.trainEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History training evaluations for a param/variable.
|
SDVariable |
SDVariable.truncatedDiv(SDVariable sameDiffVariable) |
SDVariable |
SDVariable.truncatedDiv(String varName,
SDVariable sameDiffVariable) |
SDVariable |
SameDiff.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName)
Updates the variable name property on the passed in variable, the reference in samediff, and returns the variable.
|
SDVariable[] |
SameDiff.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames)
Updates the variable name property on the passed in variables, their references in samediff, and returns the variables.
|
TrainingConfig.Builder |
TrainingConfig.Builder.validationEvaluation(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
Add requested History validation evaluations for a param/variable.
|
SDVariable |
SameDiff.var(SDVariable v)
Initialize a
SDVariable reference tying this variable to this samediff instance. |
Modifier and Type | Method and Description |
---|---|
void |
SameDiff.convertToConstants(List<SDVariable> variables)
Convert all of the specified variables to constants.
|
void |
SameDiff.convertToVariables(List<SDVariable> constants)
Convert the specified variables to VARIABLE type SDVariables.
This can only be done for constants and placeholders, not ARRAY type variables (which are usually network activations). |
Modifier and Type | Method and Description |
---|---|
EvaluationConfig |
EvaluationConfig.evaluate(SDVariable variable,
IEvaluation... evaluations)
|
EvaluationConfig |
EvaluationConfig.evaluate(SDVariable variable,
int labelIndex,
IEvaluation... evaluations)
|
BatchOutputConfig |
BatchOutputConfig.input(SDVariable variable,
INDArray placeholder)
|
EvaluationConfig |
EvaluationConfig.labelIndex(SDVariable variable,
int labelIndex)
|
BatchOutputConfig |
BatchOutputConfig.output(SDVariable... outputs)
Add required outputs
|
OutputConfig |
OutputConfig.output(SDVariable... outputs)
Add required outputs
|
Modifier and Type | Method and Description |
---|---|
SDVariable |
DefaultSameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
Modifier and Type | Method and Description |
---|---|
SDVariable |
DefaultSameDiffConditional.eval(SameDiff context,
SameDiffFunctionDefinition body,
SDVariable[] inputVars) |
Modifier and Type | Field and Description |
---|---|
protected SDVariable |
Variable.gradient |
protected SDVariable |
Variable.variable |
Modifier and Type | Method and Description |
---|---|
protected INDArray |
InferenceSession.getArray(SDVariable sdv,
Collection<AbstractSession.VarId> opInputs,
Collection<AbstractSession.VarId> allIterInputs) |
Modifier and Type | Method and Description |
---|---|
SDVariable |
SDMath.abs(SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDMath.abs(String name,
SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss: sum_i abs(label[i] - predictions[i])
|
SDVariable |
SDMath.acos(SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acos(String name,
SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acosh(SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.acosh(String name,
SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDImage.adjustContrast(String name,
SDVariable in,
SDVariable factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustHue(String name,
SDVariable in,
SDVariable delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustSaturation(String name,
SDVariable in,
SDVariable factor)
Adjust saturation of RGB images
|
SDVariable |
SDBaseOps.all(SDVariable x,
int... dimensions)
|
SDVariable |
SDBaseOps.all(String name,
SDVariable x,
int... dimensions)
Boolean and array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.amax(SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amax(String name,
SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amean(SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amean(String name,
SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amin(SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDMath.amin(String name,
SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDBitwise.and(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.and(SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as the inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.and(String name,
SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(String name,
SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as the inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.any(SDVariable x,
int... dimensions)
|
SDVariable |
SDBaseOps.any(String name,
SDVariable x,
int... dimensions)
Boolean or array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.argmax(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.argmax(SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
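The argmax semantics described above (one index per slice along the reduced dimension) can be sketched in plain Java for a row-wise reduction of a 2D array; this is illustrative code, not the ND4J implementation, and the class/method names are hypothetical:

```java
import java.util.Arrays;

public class ArgMax {
    // Argmax of each row (reduction along dimension 1); returns one index per slice
    static int[] argmaxRows(double[][] in) {
        int[] out = new int[in.length];
        for (int i = 0; i < in.length; i++) {
            int best = 0;
            for (int j = 1; j < in[i].length; j++)
                if (in[i][j] > in[i][best]) best = j;
            out[i] = best;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 9, 2}, {7, 0, 3}};
        System.out.println(Arrays.toString(argmaxRows(x))); // [1, 0]
    }
}
```

With keepDims = true the reduced dimension would instead be kept with size 1, i.e. a [2, 1] result rather than [2], which keeps the output broadcastable against the input.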
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmin(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.argmin(SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension |
SDVariable |
SDMath.asin(SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asin(String name,
SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asinh(SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asinh(String name,
SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDBaseOps.assign(SDVariable in,
Number value)
Return an array with equal shape to the input, but all elements set to 'value'
|
SDVariable |
SDBaseOps.assign(SDVariable x,
SDVariable y)
Assign/copy op: out = x.assign(y).
|
SDVariable |
SDBaseOps.assign(String name,
SDVariable in,
Number value)
Return an array with equal shape to the input, but all elements set to 'value'
|
SDVariable |
SDBaseOps.assign(String name,
SDVariable x,
SDVariable y)
Assign/copy op: out = x.assign(y).
|
SDVariable |
SDMath.asum(SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.asum(String name,
SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.atan(SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan(String name,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan2(SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
|
SDVariable |
SDMath.atan2(String name,
SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
|
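Unlike plain atan(y/x), atan2 is sensitive to the quadrant of the point (x, y). A quick standard-library illustration (java.lang.Math, not ND4J code):

```java
public class Atan2Demo {
    public static void main(String[] args) {
        double y = 1.0, x = -1.0;
        // atan(y/x) collapses quadrants: atan(-1.0) = -pi/4
        System.out.println(Math.atan(y / x));  // -0.7853981633974483
        // atan2 keeps the quadrant: the point (-1, 1) lies at angle 3*pi/4
        System.out.println(Math.atan2(y, x));  // 2.356194490192345
    }
}
```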
SDVariable |
SDMath.atanh(SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDMath.atanh(String name,
SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDCNN.avgPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
|
SDVariable |
SDCNN.avgPooling2d(String name,
SDVariable input,
Pooling2DConfig pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
|
SDVariable |
SDCNN.avgPooling3d(String name,
SDVariable input,
Pooling3DConfig pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable |
SDNN.batchNorm(SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
Batch norm operation.
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int... axis)
Batch normalization with optional application of gamma/beta args.
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
Neural network batch normalization operation.
For details, see https://arxiv.org/abs/1502.03167 |
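The batch-norm transform from the linked paper is y = gamma * (x - mean) / sqrt(variance + epsilon) + beta, applied elementwise along the chosen axes. A scalar plain-Java sketch of that formula (illustrative only; not the ND4J implementation):

```java
public class BatchNormScalar {
    // y = gamma * (x - mean) / sqrt(var + eps) + beta, applied per element
    static double batchNorm(double x, double mean, double variance,
                            double gamma, double beta, double eps) {
        return gamma * (x - mean) / Math.sqrt(variance + eps) + beta;
    }

    public static void main(String[] args) {
        // With mean=0, var=1, gamma=1, beta=0 the input passes through almost unchanged
        System.out.println(batchNorm(2.0, 0.0, 1.0, 1.0, 0.0, 1e-5));
    }
}
```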
SDVariable |
SDCNN.batchToSpace(SDVariable x,
int[] blocks,
int[][] crops) |
SDVariable |
SDCNN.batchToSpace(String name,
SDVariable x,
int[] blocks,
int[][] crops)
Convolution 2d layer batch to space operation on 4d input.
|
SDVariable |
SDRandom.bernoulli(double p,
long... shape) |
SDVariable |
SDRandom.bernoulli(double p,
SDVariable shape) |
SDVariable |
SDRandom.bernoulli(String name,
double p,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a Bernoulli distribution,
with the specified probability.
|
SDVariable |
SDRandom.bernoulli(String name,
double p,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to a Bernoulli distribution,
with the specified probability.
|
SDVariable |
SDMath.betainc(String name,
SDVariable a,
SDVariable b,
SDVariable x)
Compute the regularized incomplete beta integral
|
SDVariable |
SDNN.biasAdd(SDVariable input,
SDVariable bias,
boolean nchw) |
SDVariable |
SDNN.biasAdd(String name,
SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDRandom.binomial(int nTrials,
double p,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a Binomial distribution,
with the specified number of trials and probability.
|
SDVariable |
SDRandom.binomial(String name,
int nTrials,
double p,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a Binomial distribution,
with the specified number of trials and probability.
|
SDVariable |
SDMath.bitRotl(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDMath.bitRotr(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
SDVariable |
SDBitwise.bitsHammingDistance(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.bitsHammingDistance(String name,
SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) |
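The bitwise Hamming distance counts differing bit positions; in plain Java that is just Integer.bitCount of the XOR (illustrative, not ND4J code):

```java
public class BitHamming {
    // Hamming distance between the bit patterns of two ints
    static int hamming(int x, int y) {
        return Integer.bitCount(x ^ y);
    }

    public static void main(String[] args) {
        int x = 0b01100000, y = 0b10100000;
        System.out.println(hamming(x, y)); // 2: only the two leading bits differ
    }
}
```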
SDVariable |
SDMath.bitShift(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDBaseOps.castTo(SDVariable toCast,
DataType toType) |
SDVariable |
SDBaseOps.castTo(String name,
SDVariable toCast,
DataType toType) |
SDVariable |
SDMath.ceil(SDVariable x)
Element-wise ceiling function: out = ceil(x).
|
SDVariable |
SDMath.ceil(String name,
SDVariable x)
Element-wise ceiling function: out = ceil(x).
|
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue)
Clipping by L2 norm
if l2Norm(x) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in) |
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue)
Clipping by L2 norm
if l2Norm(x) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in) |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
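clipByNorm rescales the whole array only when its L2 norm exceeds the threshold, preserving direction. A plain-Java sketch of the full-array case (illustrative only; not the ND4J implementation):

```java
import java.util.Arrays;

public class ClipByNorm {
    // If l2Norm(in) <= clipValue return in unchanged, else scale by clipValue / l2Norm(in)
    static double[] clipByNorm(double[] in, double clipValue) {
        double sumSq = 0;
        for (double v : in) sumSq += v * v;
        double norm = Math.sqrt(sumSq);
        if (norm <= clipValue) return in.clone();
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) out[i] = in[i] * clipValue / norm;
        return out;
    }

    public static void main(String[] args) {
        double[] x = {3, 4};  // l2 norm = 5
        System.out.println(Arrays.toString(clipByNorm(x, 1.0)));  // [0.6, 0.8]
        System.out.println(Arrays.toString(clipByNorm(x, 10.0))); // [3.0, 4.0] (unchanged)
    }
}
```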
SDVariable |
SDMath.clipByValue(SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
SDVariable |
SDMath.clipByValue(String name,
SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if in[i] >= clipValueMin and in[i] <= clipValueMax out[i] = clipValueMin if in[i] < clipValueMin out[i] = clipValueMax if in[i] > clipValueMax |
SDVariable |
SDCNN.col2Im(SDVariable in,
Conv2DConfig config)
|
SDVariable |
SDCNN.col2Im(String name,
SDVariable in,
Conv2DConfig config)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.concat(int dimension,
SDVariable... inputs) |
SDVariable |
SDBaseOps.concat(String name,
int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable predictions) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights) |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred) |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
|
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
Integer numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3], then output is: [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
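The confusion-matrix examples above can be reproduced in plain Java: labels and predictions are class indices, and each pair increments (optionally weighted) cell [label][pred]. Illustrative code only, not the ND4J implementation:

```java
import java.util.Arrays;

public class ConfusionMatrix {
    // out[label][pred] += weight for each (label, pred) pair; weight defaults to 1
    static int[][] confusion(int[] labels, int[] pred, int numClasses, int[] weights) {
        int[][] out = new int[numClasses][numClasses];
        for (int i = 0; i < labels.length; i++)
            out[labels[i]][pred[i]] += (weights == null ? 1 : weights[i]);
        return out;
    }

    public static void main(String[] args) {
        int[] labels = {0, 1, 1}, pred = {0, 2, 1};
        // Unweighted: rows [1,0,0,0] [0,1,1,0] [0,0,0,0] [0,0,0,0]
        System.out.println(Arrays.deepToString(confusion(labels, pred, 4, null)));
        // Weighted with [1, 2, 3]: rows [1,0,0,0] [0,3,2,0] [0,0,0,0] [0,0,0,0]
        System.out.println(Arrays.deepToString(confusion(labels, pred, 4, new int[]{1, 2, 3})));
    }
}
```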
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv2d(SDVariable[] inputs,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable[] inputs,
Conv2DConfig config)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig config)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDMath.cos(SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cos(String name,
SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cosh(SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosh(String name,
SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosineDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.cosineDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
int dimension)
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce,
int dimension)
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y), or 1 - sum_i label[i] * prediction[i], which is
equivalent to cosine distance when both the predictions and labels are normalized. Note: this loss function assumes that both the predictions and labels are normalized to have unit l2 norm. |
SDVariable |
SDMath.cosineSimilarity(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.cosineSimilarity(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
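Cosine similarity is dot(x, y) / (||x|| * ||y||), and the cosine distance loss above is 1 minus it. A plain-Java sketch of the reduction (illustrative only, not ND4J code):

```java
public class CosineSim {
    // dot(x, y) / (l2Norm(x) * l2Norm(y))
    static double cosineSimilarity(double[] x, double[] y) {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < x.length; i++) {
            dot += x[i] * y[i];
            nx += x[i] * x[i];
            ny += y[i] * y[i];
        }
        return dot / (Math.sqrt(nx) * Math.sqrt(ny));
    }

    public static void main(String[] args) {
        double[] a = {1, 0}, b = {0, 1};
        System.out.println(cosineSimilarity(a, b));     // 0.0: orthogonal vectors
        System.out.println(1 - cosineSimilarity(a, a)); // 0.0: distance to itself
    }
}
```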
SDVariable |
SDMath.countNonZero(SDVariable input,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countNonZero(String name,
SDVariable input,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countZero(SDVariable input,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDMath.countZero(String name,
SDVariable input,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
CropAndResize.Method method,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDMath.cross(SDVariable a,
SDVariable b) |
SDVariable |
SDMath.cross(String name,
SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b: a x b = ||a||x||b|| sin(theta).
|
SDVariable |
SDMath.cube(SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDMath.cube(String name,
SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDBaseOps.cumprod(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [0, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 0] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input: [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
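The exclusive/reverse flag combinations above can be sketched in plain Java for the 1D case (a minimal sketch of the semantics, not the ND4J implementation; the class and method names are hypothetical):

```java
// Plain-Java sketch of cumsum semantics with exclusive/reverse flags.
public class CumOps {
    // exclusive: out[i] holds the accumulation *before* in[i] is added
    //            (the first visited slot is 0).
    // reverse:   accumulate from the end of the array instead of the start.
    public static double[] cumsum(double[] in, boolean exclusive, boolean reverse) {
        int n = in.length;
        double[] out = new double[n];
        double acc = 0.0;
        for (int k = 0; k < n; k++) {
            int i = reverse ? n - 1 - k : k;   // iteration order
            if (exclusive) {
                out[i] = acc;                  // value before adding in[i]
                acc += in[i];
            } else {
                acc += in[i];
                out[i] = acc;
            }
        }
        return out;
    }
}
```

cumprod follows the same pattern with multiplication (and an accumulator starting at 1).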
SDVariable |
SDCNN.deconv2d(SDVariable[] inputs,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable[] inputs,
DeConv2DConfig deconv2DConfig)
2D deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig deconv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config)
3D CNN deconvolution operation with or without optional bias
|
SDVariable |
SDCNN.depthToSpace(SDVariable x,
int blockSize,
String dataFormat)
|
SDVariable |
SDCNN.depthToSpace(String name,
SDVariable x,
int blockSize,
String dataFormat)
Depth-to-space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions. Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output shape is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
SDVariable |
SDCNN.depthWiseConv2d(SDVariable[] inputs,
Conv2DConfig depthConv2DConfig)
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable[] inputs,
Conv2DConfig depthConv2DConfig)
Depth-wise convolution 2D operation.
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig config)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDMath.diag(SDVariable x) |
SDVariable |
SDMath.diag(String name,
SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. |
SDVariable |
SDMath.diagPart(SDVariable x) |
SDVariable |
SDMath.diagPart(String name,
SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
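For the rank-1/rank-2 case, the diag/diagPart pair above can be sketched in plain Java (hypothetical class name; this is a semantic sketch, not the ND4J implementation):

```java
// Plain-Java sketch of diag / diagPart for vectors and square matrices.
public class DiagOps {
    // diag: place a 1D vector on the main diagonal of a zero matrix.
    public static double[][] diag(double[] v) {
        double[][] out = new double[v.length][v.length];
        for (int i = 0; i < v.length; i++) out[i][i] = v[i];
        return out;
    }
    // diagPart: extract the main diagonal of a square matrix.
    public static double[] diagPart(double[][] m) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++) out[i] = m[i][i];
        return out;
    }
}
```

Note that diagPart(diag(v)) recovers v, mirroring the inverse relationship described in the two entries.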
SDVariable |
SDCNN.dilation2D(SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
|
SDVariable |
SDCNN.dilation2D(String name,
SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
TODO doc string
|
SDVariable |
SDBaseOps.dot(SDVariable x,
SDVariable y,
int... dimensions)
TODO doc string
|
SDVariable |
SDBaseOps.dot(String name,
SDVariable x,
SDVariable y,
int... dimensions)
TODO doc string
|
SDVariable |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
|
SDVariable |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
|
SDVariable |
SDNN.dropout(SDVariable input,
double inputRetainProbability) |
SDVariable |
SDNN.dropout(String name,
SDVariable input,
double inputRetainProbability) |
SDVariable[] |
SDBaseOps.dynamicPartition(SDVariable x,
SDVariable partitions,
int numPartitions) |
SDVariable[] |
SDBaseOps.dynamicPartition(String[] name,
SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices. |
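The partitioning semantics can be sketched in plain Java for the 1D case (hypothetical class name; a sketch of the behavior, not the ND4J implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of dynamicPartition: route each element of x to the
// output list selected by the corresponding entry of partitions.
public class DynamicPartition {
    public static List<List<Double>> partition(double[] x, int[] partitions, int numPartitions) {
        List<List<Double>> out = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) out.add(new ArrayList<>());
        for (int i = 0; i < x.length; i++) {
            out.get(partitions[i]).add(x[i]);  // element i goes to partition partitions[i]
        }
        return out;
    }
}
```

dynamicStitch is the inverse operation: it merges the partitions back into a single array using the same indices.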
SDVariable |
SDBaseOps.dynamicStitch(SDVariable[] indices,
SDVariable[] x) |
SDVariable |
SDBaseOps.dynamicStitch(String name,
SDVariable[] indices,
SDVariable[] x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
SDVariable |
SDNN.elu(SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0; out = a * (exp(x) - 1) if x <= 0, with constant a = 1.0 |
SDVariable |
SDNN.elu(String name,
SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0; out = a * (exp(x) - 1) if x <= 0, with constant a = 1.0 |
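The piecewise ELU formula above reduces to a one-liner in plain Java (hypothetical class name; a sketch of the formula, not the ND4J kernel):

```java
// Plain-Java sketch of the ELU activation with a = 1.0.
public class Elu {
    public static double elu(double x) {
        final double a = 1.0;
        // Math.expm1(x) computes exp(x) - 1 accurately for small x.
        return x > 0 ? x : a * Math.expm1(x);
    }
}
```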
SDVariable |
SDMath.entropy(SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDMath.entropy(String name,
SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
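The entropy reduction formula above can be sketched in plain Java over a 1D array (hypothetical class name; assumes strictly positive inputs, as the formula requires):

```java
// Plain-Java sketch of the entropy reduction: -sum(x * log(x)).
public class Entropy {
    public static double entropy(double[] x) {
        double s = 0.0;
        for (double v : x) s += v * Math.log(v);  // x assumed > 0
        return -s;
    }
}
```

For a uniform distribution [0.5, 0.5] this yields log(2), the entropy of a fair coin.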
SDVariable |
SDBaseOps.eq(SDVariable x,
double y)
Equals operation: elementwise x == y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.eq(SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
double y)
Equals operation: elementwise x == y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.erf(SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erf(String name,
SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erfc(SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.erfc(String name,
SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.euclideanDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.euclideanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.exp(SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDMath.exp(String name,
SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDBaseOps.expandDims(SDVariable x,
int axis) |
SDVariable |
SDBaseOps.expandDims(String name,
SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b]; axis = 1: [a, 1, b]; axis = 2: [a, b, 1] |
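The shape transformation above can be sketched in plain Java (hypothetical class name; it operates on shape arrays only, as a sketch of the semantics):

```java
// Plain-Java sketch of expandDims at the shape level: insert a size-1
// dimension at the given axis.
public class ExpandDims {
    public static long[] expandDimsShape(long[] shape, int axis) {
        long[] out = new long[shape.length + 1];
        for (int i = 0, j = 0; i < out.length; i++) {
            out[i] = (i == axis) ? 1L : shape[j++];
        }
        return out;
    }
}
```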
SDVariable |
SDMath.expm1(SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1
|
SDVariable |
SDMath.expm1(String name,
SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1
|
SDVariable |
SDRandom.exponential(double lambda,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x)
|
SDVariable |
SDRandom.exponential(String name,
double lambda,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x)
|
SDVariable |
SDImage.extractImagePatches(String name,
SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract out image patches (of size kSizes - h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(String name,
SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDMath.eye(int rows)
Generate a square identity matrix with the specified number of rows.
|
SDVariable |
SDMath.eye(int rows,
int cols) |
SDVariable |
SDMath.eye(int rows,
int cols,
DataType dataType,
int... batchDimension)
|
SDVariable |
SDMath.eye(SDVariable rows)
As per
SDMath.eye(int) but with the number of rows specified as a scalar SDVariable |
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols)
As per
SDMath.eye(int, int) but with the number of rows/columns specified as scalar SDVariables |
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols,
SDVariable batchDimension)
As per
SDMath.eye(int, int, DataType, int...) but with the number of rows/columns specified as scalar SDVariables,
and the batch dimension specified as a 1D SDVariable |
SDVariable |
SDMath.eye(String name,
int rows)
Generate a square identity matrix with the specified number of rows.
|
SDVariable |
SDMath.eye(String name,
int rows,
int cols)
As per
SDMath.eye(String, int, int, DataType) but with the default datatype, Eye.DEFAULT_DTYPE |
SDVariable |
SDMath.eye(String name,
int rows,
int cols,
DataType dataType)
Generate an identity matrix with the specified number of rows and columns.
|
SDVariable |
SDMath.eye(String name,
int rows,
int cols,
DataType dataType,
int... batchDimension)
Generate an identity matrix with the specified number of rows and columns, with optional leading dims
Example: batchShape = [3,3], numRows = 2, numCols = 4 returns a tensor of shape (3, 3, 2, 4) that consists of 3*3 batches of (2,4)-shaped identity matrices: [[1, 0, 0, 0], [0, 1, 0, 0]] |
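The single-matrix case can be sketched in plain Java (hypothetical class name; the batched variant simply repeats this matrix over the leading batch dimensions):

```java
// Plain-Java sketch of eye(rows, cols): ones on the main diagonal,
// zeros elsewhere, for a possibly non-square matrix.
public class EyeOp {
    public static double[][] eye(int rows, int cols) {
        double[][] out = new double[rows][cols];
        for (int i = 0; i < Math.min(rows, cols); i++) out[i][i] = 1.0;
        return out;
    }
}
```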
SDVariable |
SDMath.eye(String name,
SDVariable rows)
As per
SDMath.eye(String, int) but with the number of rows specified as a scalar SDVariable |
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols)
As per
SDMath.eye(String, int, int) bit with the number of rows/columns specified as scalar SDVariables |
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols,
SDVariable batchDimension)
As per
#eye(String, int, int, int...) bit with the number of rows/columns specified as scalar SDVariables,
and the batch dimension specified as a 1D SDVariable |
SDVariable |
SDBaseOps.fill(SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDBaseOps.fill(String name,
SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
int... dimensions) |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) |
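For a single 1D slice, the firstIndex reduction can be sketched in plain Java (hypothetical class name; in this sketch -1 is returned when no element matches):

```java
import java.util.function.DoublePredicate;

// Plain-Java sketch of firstIndex over a 1D array: index of the first
// element satisfying the condition, or -1 if none match.
public class FirstIndex {
    public static int firstIndex(double[] in, DoublePredicate cond) {
        for (int i = 0; i < in.length; i++) {
            if (cond.test(in[i])) return i;
        }
        return -1;
    }
}
```

lastIndex is the mirror image, scanning from the end of the slice.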
SDVariable |
SDMath.floor(SDVariable x)
Element-wise floor function: out = floor(x).
|
SDVariable |
SDMath.floor(String name,
SDVariable x)
Element-wise floor function: out = floor(x).
|
SDVariable[] |
SDNN.fusedBatchNorm(String[] names,
SDVariable x,
SDVariable scale,
SDVariable offset,
SDVariable dataFormat,
SDVariable isTraining)
Batch normalization
|
SDVariable |
SDBaseOps.gather(SDVariable df,
int[] indices,
int axis) |
SDVariable |
SDBaseOps.gather(SDVariable df,
SDVariable indices,
int axis) |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic SDVariable values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
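For axis 0 on a 2D input, the gather semantics can be sketched in plain Java (hypothetical class name; a sketch of the row-selection behavior, not the ND4J implementation):

```java
// Plain-Java sketch of gather along axis 0: select rows of a 2D array
// by index; the output has indices.length rows.
public class GatherOp {
    public static double[][] gatherRows(double[][] in, int[] indices) {
        double[][] out = new double[indices.length][];
        for (int i = 0; i < indices.length; i++) {
            out[i] = in[indices[i]];  // row i of output = row indices[i] of input
        }
        return out;
    }
}
```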
SDVariable |
SDBaseOps.gatherNd(SDVariable df,
SDVariable indices)
TODO doc string
|
SDVariable |
SDBaseOps.gatherNd(String name,
SDVariable df,
SDVariable indices)
TODO doc string
|
SDVariable |
SDNN.gelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDNN.gelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
protected SDVariable |
SDBaseOps.gradientBackwardsMarker(SDVariable x)
Intended for internal/developer use
|
protected SDVariable |
SDBaseOps.gradientBackwardsMarker(String name,
SDVariable x)
Intended for internal/developer use
|
SDVariable |
SDBaseOps.gt(SDVariable x,
double y)
Greater than operation: elementwise x > y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gt(SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
double y)
Greater than operation: elementwise x > y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gte(SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.hammingDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.hammingDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
SDVariable |
SDNN.hardSigmoid(SDVariable in)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardSigmoid(String name,
SDVariable in)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5; out[i] = 0.2*in[i] + 0.5 if -2.5 < in[i] < 2.5; out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardTanh(SDVariable in)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
SDVariable |
SDNN.hardTanh(String name,
SDVariable in)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1; out[i] = in[i] if -1 < in[i] < 1; out[i] = 1 if in[i] >= 1 |
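Both piecewise formulas above can be sketched directly in plain Java (hypothetical class name; a sketch of the formulas, not the ND4J kernels):

```java
// Plain-Java sketch of the hard sigmoid and hard tanh activations.
public class HardActivations {
    // hardSigmoid: 0 below -2.5, 1 above 2.5, linear (0.2*x + 0.5) in between.
    public static double hardSigmoid(double x) {
        if (x <= -2.5) return 0.0;
        if (x >= 2.5) return 1.0;
        return 0.2 * x + 0.5;
    }
    // hardTanh: clamp x into [-1, 1].
    public static double hardTanh(double x) {
        return Math.max(-1.0, Math.min(1.0, x));
    }
}
```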
SDVariable |
SDNN.hardTanhDerivative(SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function -
SDNN.hardTanh(SDVariable) |
SDVariable |
SDNN.hardTanhDerivative(String name,
SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function -
SDNN.hardTanh(SDVariable) |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
double delta)
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce,
double delta)
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
SDMath.iamax(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(String name,
SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamin(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(String name,
SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDBaseOps.identity(SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDBaseOps.identity(String name,
SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDBaseOps.ifCond(SameDiffNoArgSingleLambda cond,
SameDiffNoArgSingleLambda trueBody,
SameDiffNoArgSingleLambda falseBody)
|
SDVariable |
SDBaseOps.ifCond(String ifName,
SameDiffNoArgSingleLambda cond,
SameDiffNoArgSingleLambda trueBody,
SameDiffNoArgSingleLambda falseBody)
|
SDVariable |
SDBaseOps.ifCond(String outputName,
String ifName,
SameDiffNoArgSingleLambda cond,
SameDiffNoArgSingleLambda trueBody,
SameDiffNoArgSingleLambda falseBody)
Constructs an If statement using the TensorFlow-style control flow operations (Switch and Merge).
If the result of cond is true, returns the result of trueBody; otherwise returns the result of falseBody.
Note that the cond and body lambdas are only called once, to construct the graph.
|
SDVariable |
SDCNN.im2Col(SDVariable in,
Conv2DConfig config)
|
SDVariable |
SDCNN.im2Col(String name,
SDVariable in,
Conv2DConfig config)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.invertPermutation(SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
SDVariable |
SDBaseOps.invertPermutation(String name,
SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
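The inversion rule above (out[perm[i]] = i) can be sketched in plain Java (hypothetical class name; a sketch of the semantics, not the ND4J implementation):

```java
// Plain-Java sketch of invertPermutation: for each position i, the
// inverse permutation sends perm[i] back to i.
public class InvertPermutation {
    public static int[] invert(int[] perm) {
        int[] out = new int[perm.length];
        for (int i = 0; i < perm.length; i++) out[perm[i]] = i;
        return out;
    }
}
```

Applying this to [2, 0, 1] yields [1, 2, 0], matching the example in the entry.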
SDVariable |
SDMath.isFinite(SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isFinite(String name,
SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(String name,
SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(String name,
SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(String name,
SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNonDecreasing(SDVariable x)
Is the array non decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDMath.isNonDecreasing(String name,
SDVariable x)
Is the array non decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDBaseOps.isNumericTensor(SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDBaseOps.isNumericTensor(String name,
SDVariable x)
Is the given variable a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDMath.isStrictlyIncreasing(SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.isStrictlyIncreasing(String name,
SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
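The two monotonicity checks (isNonDecreasing and isStrictlyIncreasing) can be sketched in plain Java (hypothetical class name; a sketch of the definitions, not the ND4J implementation):

```java
// Plain-Java sketch of the monotonicity checks.
public class Monotonic {
    // Non-decreasing: x[i] <= x[i+1] for every valid i (ties allowed).
    public static boolean isNonDecreasing(double[] x) {
        for (int i = 0; i + 1 < x.length; i++) {
            if (x[i] > x[i + 1]) return false;
        }
        return true;
    }
    // Strictly increasing: x[i] < x[i+1] for every valid i (no ties).
    public static boolean isStrictlyIncreasing(double[] x) {
        for (int i = 0; i + 1 < x.length; i++) {
            if (x[i] >= x[i + 1]) return false;
        }
        return true;
    }
}
```

The only difference is whether equal adjacent values are permitted.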
SDVariable |
SDMath.jaccardDistance(SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDMath.jaccardDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Jaccard similarity reduction operation.
|
SDVariable |
SDLoss.l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDLoss.l2Loss(String name,
SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
int... dimensions) |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization without bias
y = gain * standardize(x)
|
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias
|
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization without bias
y = gain * standardize(x)
|
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias
|
SDVariable |
SDNN.leakyRelu(SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0; out = alpha * x if x < 0.0. Alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyRelu(String name,
SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0; out = alpha * x if x < 0.0. Alpha value is most commonly set to 0.01 |
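The leaky ReLU formula above is a one-liner in plain Java (hypothetical class name; a sketch of the formula, not the ND4J kernel):

```java
// Plain-Java sketch of leaky ReLU: pass positives through, scale
// negatives by alpha (commonly 0.01).
public class LeakyRelu {
    public static double leakyRelu(double x, double alpha) {
        return x >= 0.0 ? x : alpha * x;
    }
}
```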
SDVariable |
SDNN.leakyReluDerivative(String name,
SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
See SDNN.leakyRelu(String, SDVariable, double) |
SDVariable |
SDBitwise.leftShift(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.leftShift(String name,
SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.leftShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDNN.linear(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
SDNN.linear(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDBaseOps.linspace(DataType dataType,
double start,
double stop,
long number)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0]
|
SDVariable |
SDBaseOps.linspace(String name,
DataType dataType,
double start,
double stop,
long number)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0]
|
SDVariable |
SDBaseOps.linspace(String name,
SDVariable from,
SDVariable to,
SDVariable length,
DataType dt)
Create a new 1d array with values evenly spaced between values 'start' and 'stop'
For example, linspace(start=3.0, stop=4.0, number=3) will generate [3.0, 3.5, 4.0]
|
SDVariable[] |
SDMath.listDiff(SDVariable x,
SDVariable y)
List diff operation: computes the difference between two 1d arrays, and also returns the indices - i.e., the positions
at which the output values appear in the input X.
For inputs X and Y, listDiff returns everything in X but not in Y. For example, if X=[1,10,3,7,6] and Y=[10,6], then:
output 0 (difference) = [1,3,7]; output 1 (indices) = [0, 2, 3] |
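The two-output behavior can be sketched in plain Java for int arrays (hypothetical class name; the sketch returns values in row 0 and indices in row 1, not the ND4J implementation):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Plain-Java sketch of listDiff: values of x absent from y, plus the
// positions of those values in x.
public class ListDiff {
    public static int[][] listDiff(int[] x, int[] y) {
        Set<Integer> exclude = new HashSet<>();
        for (int v : y) exclude.add(v);
        List<Integer> vals = new ArrayList<>();
        List<Integer> idxs = new ArrayList<>();
        for (int i = 0; i < x.length; i++) {
            if (!exclude.contains(x[i])) {
                vals.add(x[i]);   // output 0: the difference
                idxs.add(i);      // output 1: index of that value in x
            }
        }
        int[][] out = new int[2][vals.size()];
        for (int i = 0; i < vals.size(); i++) {
            out[0][i] = vals.get(i);
            out[1][i] = idxs.get(i);
        }
        return out;
    }
}
```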
SDVariable |
SDCNN.localResponseNormalization(SDVariable inputs,
LocalResponseNormalizationConfig lrnConfig)
|
SDVariable |
SDCNN.localResponseNormalization(String name,
SDVariable input,
LocalResponseNormalizationConfig lrnConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDMath.log(SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(SDVariable in,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(String name,
SDVariable in,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log1p(SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.log1p(String name,
SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.logEntropy(SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDMath.logEntropy(String name,
SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDRandom.logNormal(double mean,
double stddev,
long... shape) |
SDVariable |
SDRandom.logNormal(String name,
double mean,
double stddev,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a Log Normal distribution,
i.e.,
log(x) ~ N(mean, stdev) |
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Log poisson loss: a loss function used for training classifiers.
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Log Poisson loss: a loss function used for training classifiers.
|
SDVariable |
SDNN.logSigmoid(SDVariable x)
Element-wise log-sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSigmoid(String name,
SDVariable x)
Element-wise log-sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSoftmax(SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDMath.logSumExp(SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
|
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
|
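The log-sum-exp reduction above is usually computed with a max-shift for numerical stability. A plain-Java sketch of those semantics (illustrative only, not the SameDiff implementation; the class and method names here are made up):

```java
public class LogSumExpDemo {
    // out = log(sum_i exp(x[i])), computed as m + log(sum_i exp(x[i] - m))
    // where m = max_i x[i], so that large inputs do not overflow exp().
    static double logSumExp(double[] x) {
        double m = Double.NEGATIVE_INFINITY;
        for (double v : x) m = Math.max(m, v);
        double s = 0.0;
        for (double v : x) s += Math.exp(v - m);
        return m + Math.log(s);
    }

    public static void main(String[] args) {
        System.out.println(logSumExp(new double[]{1.0, 2.0, 3.0}));
        // A naive exp-then-log implementation would overflow here:
        System.out.println(logSumExp(new double[]{1000.0, 1000.0}));
    }
}
```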
SDVariable |
SDBaseOps.lt(SDVariable x,
double y)
Less than operation: elementwise x < y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lt(SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
double y)
Less than operation: elementwise x < y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lte(SDVariable x,
double y)
Less than or equal to operation: elementwise x <= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lte(SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
double y)
Less than or equal to operation: elementwise x <= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.manhattanDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.manhattanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDBaseOps.matchCondition(SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchCondition(String name,
SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.matrixBandPart(String name,
SDVariable input,
SDVariable minLower,
SDVariable maxUpper)
Copy a tensor setting everything outside a central band in each innermost matrix.
|
SDVariable |
SDMath.matrixDeterminant(SDVariable in) |
SDVariable |
SDMath.matrixDeterminant(String name,
SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixInverse(SDVariable in) |
SDVariable |
SDMath.matrixInverse(String name,
SDVariable in)
Matrix inverse op.
|
SDVariable |
SDBaseOps.max(SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.max(SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.max(String name,
SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Supports broadcasting |
SDVariable |
SDCNN.maxPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
|
SDVariable |
SDCNN.maxPooling2d(String name,
SDVariable input,
Pooling2DConfig pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
|
SDVariable |
SDCNN.maxPooling3d(String name,
SDVariable input,
Pooling3DConfig pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable[] |
SDNN.maxPoolWithArgmax(String[] names,
SDVariable x,
Pooling2DConfig pooling2DConfig)
Max pooling on the input and outputs both max values and indices
|
SDVariable |
SDBaseOps.mean(SDVariable x)
Full array mean reduction operation
|
SDVariable |
SDBaseOps.mean(SDVariable x,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
boolean keepDims,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
|
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. |
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDMath.mergeAdd(SDVariable... x)
Merge add function: merges an arbitrary number of equal shaped arrays using elementwise addition:
out = sum_i in[i]
|
SDVariable |
SDMath.mergeAdd(String name,
SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i]
|
SDVariable |
SDMath.mergeAvg(SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i]
|
SDVariable |
SDMath.mergeAvg(String name,
SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i]
|
SDVariable |
SDMath.mergeMax(SDVariable... x)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i]
|
SDVariable |
SDMath.mergeMax(String name,
SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i]
|
SDVariable[] |
SDMath.meshgrid(List<String> names,
boolean cartesian,
SDVariable... inputs) |
SDVariable[] |
SDMath.meshgrid(List<String> names,
SDVariable... inputs)
Broadcast the 1D input variables onto an n-dimensional grid.
The resulting variable can be used, for example, for evaluating functions at all locations on a grid. |
SDVariable[] |
SDMath.meshgrid(SDVariable... inputs) |
SDVariable |
SDBaseOps.min(SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
|
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y,
MMulTranspose transpose)
Matrix multiplication: out = mmul(x,y)
Supports specifying a MMulTranspose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
|
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y,
MMulTranspose transpose)
Matrix multiplication: out = mmul(x,y)
Supports specifying a MMulTranspose argument to perform operation such as mmul(a^T, b), etc. |
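A plain-Java sketch of the mmul semantics described above (illustrative only; the MMulTranspose variant corresponds to transposing an input before this multiplication):

```java
public class MmulDemo {
    // out = mmul(x, y): standard matrix product, out[i][j] = sum_p x[i][p] * y[p][j]
    static double[][] mmul(double[][] x, double[][] y) {
        int n = x.length, k = x[0].length, m = y[0].length;
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int p = 0; p < k; p++)
                    out[i][j] += x[i][p] * y[p][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        // [[19.0, 22.0], [43.0, 50.0]]
        System.out.println(java.util.Arrays.deepToString(mmul(a, b)));
    }
}
```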
SDVariable[] |
SDMath.moments(SDVariable input,
int... axes) |
SDVariable[] |
SDMath.moments(String[] name,
SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
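The moments entry above returns the mean and the population variance (divide by N, not N - 1). A plain-Java sketch over a full 1D array (illustrative names, not the SameDiff API):

```java
public class MomentsDemo {
    // Returns {mean, population variance} of the input array.
    static double[] moments(double[] x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= x.length;  // population variance: divide by N, not N - 1
        return new double[]{mean, var};
    }

    public static void main(String[] args) {
        double[] mv = moments(new double[]{1.0, 2.0, 3.0, 4.0});
        System.out.println(mv[0] + " " + mv[1]);  // 2.5 1.25
    }
}
```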
SDVariable |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
|
SDVariable |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
|
SDVariable |
SDMath.neg(SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDMath.neg(String name,
SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDBaseOps.neq(SDVariable x,
double y)
Not equal to operation: elementwise x != y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.neq(SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
double y)
Not equal to operation: elementwise x != y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDImage.nonMaxSuppression(String name,
SDVariable boxes,
SDVariable scores,
SDVariable maxOutSize,
SDVariable iouThreshold,
SDVariable scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) |
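A plain-Java sketch of the norm1 and norm2 formulas above, applied to a full 1D array (illustrative only):

```java
public class NormDemo {
    // norm1 (L1): out = sum_i abs(x[i])
    static double norm1(double[] x) {
        double s = 0.0;
        for (double v : x) s += Math.abs(v);
        return s;
    }

    // norm2 (L2): out = sqrt(sum_i x[i]^2)
    static double norm2(double[] x) {
        double s = 0.0;
        for (double v : x) s += v * v;
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[] x = {3.0, -4.0};
        System.out.println(norm1(x));  // 7.0
        System.out.println(norm2(x));  // 5.0
    }
}
```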
SDVariable |
SDRandom.normal(double mean,
double stddev,
long... shape) |
SDVariable |
SDRandom.normal(double mean,
double stddev,
SDVariable shape) |
SDVariable |
SDRandom.normal(String name,
double mean,
double stddev,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev)
See SDRandom.normal(String, double, double, SDVariable) for the equivalent function where the shape is
specified as a long[] instead |
SDVariable |
SDRandom.normal(String name,
double mean,
double stddev,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev)
See SDRandom.normal(String, double, double, long...) for the equivalent function where the shape is
specified as a long[] instead |
SDVariable[] |
SDMath.normalizeMoments(SDVariable counts,
SDVariable means,
SDVariable variances,
double shift) |
SDVariable[] |
SDMath.normalizeMoments(String[] name,
SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable |
SDRandom.normalTruncated(double mean,
double stddev,
long... shape) |
SDVariable |
SDRandom.normalTruncated(String name,
double mean,
double stddev,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a truncated Gaussian (normal) distribution,
N(mean, stdev); values falling too far from the mean are discarded and re-sampled.
|
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions:
out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions
|
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType) |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 with other values being set to 0 |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with values
on and off for each entry. If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on with other values being set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
As per
SDBaseOps.oneHot(String, SDVariable, int, int, double, double) but allows configuring the output datatype |
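A plain-Java sketch of the oneHot semantics above, for 1D indices (illustrative only; axis handling and the SDVariable types are omitted):

```java
import java.util.Arrays;

public class OneHotDemo {
    // For 1D indices, output has shape [n, depth] with out[i][indices[i]] = on
    // and every other value set to off.
    static double[][] oneHot(int[] indices, int depth, double on, double off) {
        double[][] out = new double[indices.length][depth];
        for (double[] row : out) Arrays.fill(row, off);
        for (int i = 0; i < indices.length; i++) out[i][indices[i]] = on;
        return out;
    }

    public static void main(String[] args) {
        // [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
        System.out.println(Arrays.deepToString(oneHot(new int[]{0, 2}, 3, 1.0, 0.0)));
    }
}
```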
SDVariable |
SDBaseOps.onesLike(SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input,
DataType dataType)
As per
SDBaseOps.onesLike(String, SDVariable) but the output datatype may be specified |
SDVariable |
SDBitwise.or(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.or(SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.or(String name,
SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(String name,
SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDNN.pad(SDVariable input,
int[][] padding,
double constant)
|
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
double constant)
Perform padding on the given array, where padded values are the specified constant.
Example: Input array: [1, 2] [3, 4] Padding array: [2, 0] [1, 1] Constant = 0 Result: [0, 0, 0, 0] [0, 0, 0, 0] [0, 1, 2, 0] [0, 3, 4, 0] |
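A plain-Java sketch of the constant-pad example above, for a 2D array (illustrative only; padding[d][0] values go before and padding[d][1] after dimension d):

```java
public class PadDemo {
    // Constant-pad a 2D array with the given per-dimension before/after amounts.
    static double[][] pad(double[][] in, int[][] padding, double constant) {
        int rows = in.length + padding[0][0] + padding[0][1];
        int cols = in[0].length + padding[1][0] + padding[1][1];
        double[][] out = new double[rows][cols];
        for (double[] row : out) java.util.Arrays.fill(row, constant);
        // Copy the input into the interior of the padded output.
        for (int i = 0; i < in.length; i++)
            for (int j = 0; j < in[0].length; j++)
                out[i + padding[0][0]][j + padding[1][0]] = in[i][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] out = pad(new double[][]{{1, 2}, {3, 4}}, new int[][]{{2, 0}, {1, 1}}, 0.0);
        // Matches the javadoc example: [[0,0,0,0],[0,0,0,0],[0,1,2,0],[0,3,4,0]]
        System.out.println(java.util.Arrays.deepToString(out));
    }
}
```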
SDVariable |
SDNN.pad(String outputName,
SDVariable input,
SDVariable padding,
Pad.Mode mode,
double constant)
As per
SDNN.pad(SDVariable, SDVariable, double) but also supports multiple Pad.Mode modes. Example: Input array: [1, 2] [3, 4] [5, 6] Padding array: [2, 0] [1, 1] Constant = 0 Result: CONSTANT mode [0, 0, 0, 0] [0, 0, 0, 0] [0, 1, 2, 0] [0, 3, 4, 0] [0, 5, 6, 0] Result: SYMMETRIC mode [3, 3, 4, 4] [1, 1, 2, 2] [1, 1, 2, 2] [3, 3, 4, 4] [5, 5, 6, 6] Result: REFLECT mode [6, 5, 6, 5] [4, 3, 4, 3] [2, 1, 2, 1] [4, 3, 4, 3] [6, 5, 6, 5] |
SDVariable |
SDBaseOps.parallel_stack(SDVariable[] values) |
SDVariable |
SDBaseOps.parallel_stack(String name,
SDVariable[] values) |
SDVariable |
SDBaseOps.permute(SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
SDVariable dimensions)
As per
SDBaseOps.permute(String, SDVariable, int...) but with SDVariable permute dimension |
SDVariable |
SDMath.polygamma(String name,
SDVariable n,
SDVariable x)
Polygamma function
|
SDVariable |
SDMath.pow(SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDNN.prelu(SDVariable input,
SDVariable alpha,
int... sharedAxes)
|
SDVariable |
SDNN.prelu(String name,
SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDBaseOps.prod(SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
|
SDVariable |
SDImage.randomCrop(String name,
SDVariable input,
SDVariable shape)
Randomly crops image
|
SDVariable |
SDBaseOps.range(double from,
double to,
double step,
DataType dataType)
Create a new variable with a 1d array, where the values start at
from and increment by step,
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
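A plain-Java sketch of the range semantics above (illustrative only; the real op returns an SDVariable rather than a list):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeDemo {
    // Values start at from and increment by step, up to but not including to.
    static List<Double> range(double from, double to, double step) {
        List<Double> out = new ArrayList<>();
        for (double v = from; v < to; v += step) out.add(v);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(range(1.0, 3.0, 0.5));  // [1.0, 1.5, 2.0, 2.5]
    }
}
```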
SDVariable |
SDBaseOps.range(String name,
double from,
double to,
double step,
DataType dataType)
Create a new variable with a 1d array, where the values start at
from and increment by step,
up to (but not including) to. For example, range(1.0, 3.0, 0.5) will return [1.0, 1.5, 2.0, 2.5] |
SDVariable |
SDBaseOps.range(String name,
SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
As per
SDBaseOps.range(String, double, double, double, DataType) but with SDVariable arguments |
SDVariable |
SDBaseOps.rank(SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.rank(String name,
SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDMath.reciprocal(SDVariable a)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.reciprocal(String name,
SDVariable a)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDNN.relu(SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu(String name,
SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu6(SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.relu6(String name,
SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.reluLayer(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
SDNN.reluLayer(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDBaseOps.repeat(SDVariable df,
int axis) |
SDVariable |
SDBaseOps.repeat(String name,
SDVariable df,
int axis) |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
Number value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
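A plain-Java sketch of the replaceWhere semantics above, with a hard-coded "x < 0" predicate standing in for ND4J's Condition object (illustrative only):

```java
public class ReplaceWhereDemo {
    // out[i] = value if the condition holds for update[i], else out[i] = update[i].
    // Here the condition is fixed to (x < 0) for illustration.
    static double[] replaceWhere(double[] update, double value) {
        double[] out = new double[update.length];
        for (int i = 0; i < update.length; i++)
            out[i] = update[i] < 0 ? value : update[i];
        return out;
    }

    public static void main(String[] args) {
        double[] out = replaceWhere(new double[]{1.0, -2.0, 3.0, -4.0}, 0.0);
        System.out.println(java.util.Arrays.toString(out));  // [1.0, 0.0, 3.0, 0.0]
    }
}
```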
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
Number value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.reshape(SDVariable x,
int... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (dynamic) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
int... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (dynamic) shape.
|
SDVariable |
SDBaseOps.reverse(SDVariable x,
int... dimensions) |
SDVariable |
SDBaseOps.reverse(String name,
SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 0): [4, 5, 6] [1, 2, 3] and reverse(in, 1): [3, 2, 1] [6, 5, 4] |
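A plain-Java sketch of reversing a 2D array along one dimension, matching the semantics above (illustrative only; dimension 0 flips the row order, dimension 1 flips within each row):

```java
public class ReverseDemo {
    // Reverse a 2D array along the given dimension (0 = rows, 1 = columns).
    static double[][] reverse(double[][] in, int dimension) {
        int r = in.length, c = in[0].length;
        double[][] out = new double[r][c];
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++)
                out[i][j] = dimension == 0 ? in[r - 1 - i][j] : in[i][c - 1 - j];
        return out;
    }

    public static void main(String[] args) {
        double[][] in = {{1, 2, 3}, {4, 5, 6}};
        // [[4.0, 5.0, 6.0], [1.0, 2.0, 3.0]]
        System.out.println(java.util.Arrays.deepToString(reverse(in, 0)));
        // [[3.0, 2.0, 1.0], [6.0, 5.0, 4.0]]
        System.out.println(java.util.Arrays.deepToString(reverse(in, 1)));
    }
}
```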
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths) |
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim) |
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths) |
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBitwise.rightShift(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.rightShift(String name,
SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.rightShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDMath.roll(String name,
SDVariable input,
SDVariable shift)
Rolls the elements of input
|
SDVariable |
SDMath.round(SDVariable x)
Elementwise round function: out = round(x).
|
SDVariable |
SDMath.round(String name,
SDVariable x)
Element-wise round function: out = round(x).
|
SDVariable |
SDMath.rsqrt(SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsqrt(String name,
SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDBaseOps.scalarFloorMod(SDVariable in,
Number value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
|
SDVariable |
SDBaseOps.scalarFloorMod(String name,
SDVariable in,
Number value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
|
SDVariable |
SDBaseOps.scalarMax(SDVariable in,
Number value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMax(String name,
SDVariable in,
Number value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMin(SDVariable in,
Number value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarMin(String name,
SDVariable in,
Number value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarSet(SDVariable in,
Number set)
Return an array with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scalarSet(String name,
SDVariable in,
Number set)
Return a variable with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scatterAdd(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterAdd(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] += updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] += updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] += updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
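A plain-Java sketch of the rank-1-indices case of scatterAdd above (illustrative only; repeated indices accumulate rather than overwrite):

```java
public class ScatterAddDemo {
    // For rank-1 indices: out[indices[i]] += updates[i].
    // Contributions from repeated indices accumulate into the same location.
    static double[] scatterAdd(double[] ref, int[] indices, double[] updates) {
        double[] out = ref.clone();
        for (int i = 0; i < indices.length; i++) out[indices[i]] += updates[i];
        return out;
    }

    public static void main(String[] args) {
        double[] out = scatterAdd(new double[]{10, 20, 30}, new int[]{0, 2, 0}, new double[]{1, 2, 3});
        // Index 0 receives both 1 and 3: [14.0, 20.0, 32.0]
        System.out.println(java.util.Arrays.toString(out));
    }
}
```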
SDVariable |
SDBaseOps.scatterDiv(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterDiv(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] /= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] /= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] /= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMax(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = max(updates[...], in[index,...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = max(updates[i,...], in[indices[i],...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = max(updates[i, ..., k, ...], in[indices[i], ..., indices[k], ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMin(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = min(updates[...], in[index,...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = min(updates[i,...], in[indices[i],...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = min(updates[i, ..., k, ...], in[indices[i], ..., indices[k], ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMul(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] *= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] *= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] *= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterSub(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] -= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] -= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] -= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterUpdate(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the output at those locations is undefined - different updates may occur in different orders |
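The rank-1 scatter semantics above can be sketched in plain Java on 1D arrays. This is an illustration only: the real ops operate on INDArrays inside a SameDiff graph, and the standalone `scatterDiv` helper below is hypothetical.

```java
import java.util.Arrays;

public class ScatterSketch {
    // Rank-1 indices case: out[indices[i]] /= updates[i].
    // Repeated indices are applied cumulatively, matching the
    // "contributions from each are handled correctly" note above.
    static double[] scatterDiv(double[] ref, int[] indices, double[] updates) {
        double[] out = ref.clone();
        for (int i = 0; i < indices.length; i++) {
            out[indices[i]] /= updates[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Index 2 appears twice, so position 2 becomes 4 / 2 / 2 = 1
        double[] out = scatterDiv(new double[]{8, 6, 4},
                                  new int[]{0, 2, 2},
                                  new double[]{2, 2, 2});
        System.out.println(Arrays.toString(out)); // [4.0, 6.0, 1.0]
    }
}
```

The other scatter ops (add, sub, mul, max, min) follow the same pattern with a different accumulation operator; only scatterUpdate is order-sensitive for duplicate indices.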
SDVariable |
SDCNN.sconv2d(SDVariable[] inputs,
Conv2DConfig conv2DConfig)
|
SDVariable |
SDCNN.sconv2d(String name,
SDVariable[] inputs,
Conv2DConfig conv2DConfig)
Separable 2D convolution operation, with or without a bias term
|
SDVariable |
SDBaseOps.segmentMax(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMax(String name,
SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentMean(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMean(String name,
SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.666, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentMin(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMin(String name,
SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentProd(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentProd(String name,
SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentSum(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentSum(String name,
SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
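The sorted-segment reductions above can be sketched in plain Java using the Javadoc's own example data. This is a hypothetical standalone helper, not the real SameDiff op.

```java
import java.util.Arrays;

public class SegmentSketch {
    // Sorted-segment sum: segmentIds must be non-decreasing, and the number of
    // segments is implied by the largest ID (matching the description above).
    static double[] segmentSum(double[] data, int[] segmentIds) {
        double[] out = new double[segmentIds[segmentIds.length - 1] + 1];
        for (int i = 0; i < data.length; i++) {
            out[segmentIds[i]] += data[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // The example data from the Javadoc above
        double[] out = segmentSum(new double[]{3, 6, 1, 4, 9, 2, 8},
                                  new int[]{0, 0, 1, 1, 1, 2, 2});
        System.out.println(Arrays.toString(out)); // [9.0, 14.0, 10.0]
    }
}
```

segmentMax, segmentMin, segmentMean, and segmentProd replace the `+=` accumulation with the corresponding reduction.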
SDVariable |
SDNN.selu(SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i])-1) if in[i] <= 0 Uses default scale and alpha values. |
SDVariable |
SDNN.selu(String name,
SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i])-1) if in[i] <= 0 Uses default scale and alpha values. |
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig config)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
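The sequenceMask formula above can be sketched in plain Java for the rank-1 lengths case. This is an illustration of the documented semantics, not the real op.

```java
import java.util.Arrays;

public class SequenceMaskSketch {
    // out[i][j] = (j < lengths[i]) ? 1 : 0, per the formula above
    static int[][] sequenceMask(int[] lengths, int maxLen) {
        int[][] out = new int[lengths.length][maxLen];
        for (int i = 0; i < lengths.length; i++) {
            for (int j = 0; j < maxLen; j++) {
                out[i][j] = (j < lengths[i]) ? 1 : 0;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.deepToString(sequenceMask(new int[]{1, 3}, 4)));
        // [[1, 0, 0, 0], [1, 1, 1, 0]]
    }
}
```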
SDVariable |
SDMath.setDiag(SDVariable in,
SDVariable diag) |
SDVariable |
SDMath.setDiag(String name,
SDVariable in,
SDVariable diag)
Set the diagonal value to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
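The setDiag behaviour above amounts to copying the input and overwriting its main diagonal, as a plain-Java sketch shows (hypothetical helper, rank-2 case only):

```java
import java.util.Arrays;

public class SetDiagSketch {
    // Copy the input and overwrite the main diagonal with diag,
    // as in the a..i example above
    static double[][] setDiag(double[][] in, double[] diag) {
        double[][] out = new double[in.length][];
        for (int i = 0; i < in.length; i++) {
            out[i] = in[i].clone();
            out[i][i] = diag[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] out = setDiag(new double[][]{{9, 9}, {9, 9}}, new double[]{1, 2});
        System.out.println(Arrays.deepToString(out)); // [[1.0, 9.0], [9.0, 2.0]]
    }
}
```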
SDVariable |
SDMath.shannonEntropy(SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDMath.shannonEntropy(String name,
SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDBaseOps.shape(SDVariable input)
Returns the shape of the specified SDVariable as a 1D SDVariable
|
SDVariable |
SDBaseOps.shape(String name,
SDVariable input)
Returns the shape of the specified SDVariable as a 1D SDVariable
|
SDVariable |
SDNN.sigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDNN.sigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function.
|
SDVariable |
SDNN.sigmoidDerivative(SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDNN.sigmoidDerivative(String name,
SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDMath.sign(SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sign(String name,
SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sin(SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sin(String name,
SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sinh(SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDMath.sinh(String name,
SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDBaseOps.size(SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.size(String name,
SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.sizeAt(SDVariable in,
int dimension) |
SDVariable |
SDBaseOps.sizeAt(String name,
SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
|
SDVariable |
SDBaseOps.slice(SDVariable input,
int[] begin,
int[] size) |
SDVariable |
SDBaseOps.slice(SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
int[] begin,
int[] size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
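The slice semantics above can be sketched in plain Java for the rank-2 case (hypothetical helper; the real op accepts any rank):

```java
import java.util.Arrays;

public class SliceSketch {
    // slice(input, begin, size): out[i][j] = input[begin[0]+i][begin[1]+j],
    // with begin[d] + size[d] <= input.size(d) as noted above
    static double[][] slice(double[][] input, int[] begin, int[] size) {
        double[][] out = new double[size[0]][size[1]];
        for (int i = 0; i < size[0]; i++) {
            for (int j = 0; j < size[1]; j++) {
                out[i][j] = input[begin[0] + i][begin[1] + j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Mirrors the [a,b,c][d,e,f] example: begin=[0,1], size=[2,1] picks [b][e]
        double[][] out = slice(new double[][]{{1, 2, 3}, {4, 5, 6}},
                               new int[]{0, 1}, new int[]{2, 1});
        System.out.println(Arrays.deepToString(out)); // [[2.0], [5.0]]
    }
}
```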
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
SDNN.softmax(SDVariable x)
Softmax activation on dimension 1.
|
SDVariable |
SDNN.softmax(SDVariable x,
int dimension)
Softmax activation
|
SDVariable |
SDNN.softmax(String name,
SDVariable x)
Softmax activation on dimension 1.
|
SDVariable |
SDNN.softmax(String name,
SDVariable x,
int dimension)
Softmax activation
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits) If LossReduce.NONE is used, the returned output has shape [numExamples] for [numExamples, numClasses] predictions/labels;
otherwise, the output is a scalar. |
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt) |
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt,
Integer dimension) |
SDVariable |
SDNN.softplus(SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softplus(String name,
SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softsign(SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsign(String name,
SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsignDerivative(SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function
SDNN.softsign(SDVariable) |
SDVariable |
SDNN.softsignDerivative(String name,
SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function
SDNN.softsign(SDVariable) |
SDVariable |
SDCNN.spaceToBatch(SDVariable x,
int[] blocks,
int[][] padding) |
SDVariable |
SDCNN.spaceToBatch(String name,
SDVariable x,
int[] blocks,
int[][] padding)
Convolution 2d layer space to batch operation on 4d input.
|
SDVariable |
SDCNN.spaceToDepth(SDVariable x,
int blockSize,
String dataFormat) |
SDVariable |
SDCNN.spaceToDepth(String name,
SDVariable x,
int blockSize,
String dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (reducing spatial dimensions) by rearranging data into a larger channels dimension Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*2*2, 4/2, 4/2] = [mb, 8, 2, 2] |
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels)
|
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(String name,
SDVariable logits,
SDVariable labels)
As per
SDLoss.softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array; i.e., if logits are rank N, then labels have rank N-1 |
SDVariable |
SDMath.sqrt(SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.sqrt(String name,
SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.square(SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.square(String name,
SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, boolean, int...) |
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, int...) |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, boolean, int...) |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, int...) |
SDVariable |
SDBaseOps.squeeze(SDVariable x,
int axis) |
SDVariable |
SDBaseOps.squeeze(String name,
SDVariable x,
int axis)
Remove a single dimension of size 1.
|
SDVariable |
SDBaseOps.stack(int axis,
SDVariable... values) |
SDVariable |
SDBaseOps.stack(String name,
int axis,
SDVariable... values)
Stack a set of N SDVariables of rank X into one rank X+1 variable.
|
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
int... dimensions) |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.standardize(SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.standardize(String name,
SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.step(SDVariable in,
double cutoff)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDMath.step(String name,
SDVariable in,
double cutoff)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDBaseOps.stridedSlice(SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable input,
long[] begin,
long[] end,
long[] strides) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable input,
long[] begin,
long[] end,
long[] strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1]) will return: [b, c] [h, i] |
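The basic (mask-free) stridedSlice semantics can be sketched in plain Java for rank 2 (hypothetical helper; the real op supports any rank plus the mask arguments):

```java
import java.util.Arrays;

public class StridedSliceSketch {
    // Take indices begin[d], begin[d]+strides[d], ... up to (exclusive) end[d]
    // along each dimension
    static double[][] stridedSlice(double[][] in, int[] begin, int[] end, int[] strides) {
        int rows = (end[0] - begin[0] + strides[0] - 1) / strides[0];
        int cols = (end[1] - begin[1] + strides[1] - 1) / strides[1];
        double[][] out = new double[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                out[i][j] = in[begin[0] + i * strides[0]][begin[1] + j * strides[1]];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] in = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        // Rows 0 and 2 (stride 2), columns 1 and 2
        double[][] out = stridedSlice(in, new int[]{0, 1}, new int[]{3, 3}, new int[]{2, 1});
        System.out.println(Arrays.deepToString(out)); // [[2.0, 3.0], [8.0, 9.0]]
    }
}
```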
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
Operates as described in SDBaseOps.stridedSlice(SDVariable, long[], long[], long[]) with some extra mask arrays
as described below. |
SDVariable |
SDBaseOps.sum(SDVariable x,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.sum(SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions
|
SDVariable |
SDNN.swish(SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable |
SDNN.swish(String name,
SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable |
SDMath.tan(SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tan(String name,
SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDNN.tanh(SDVariable x) |
SDVariable |
SDMath.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(String name,
SDVariable x) |
SDVariable |
SDMath.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
SDBaseOps.tile(SDVariable x,
int... repeat) |
SDVariable |
SDBaseOps.tile(SDVariable x,
SDVariable repeat) |
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
int... repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
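The tile example above can be reproduced with a plain-Java sketch (hypothetical helper, rank-2 case):

```java
import java.util.Arrays;

public class TileSketch {
    // out has shape [rows*repeat[0], cols*repeat[1]]; each output cell
    // wraps around into the original input
    static int[][] tile(int[][] x, int[] repeat) {
        int r = x.length, c = x[0].length;
        int[][] out = new int[r * repeat[0]][c * repeat[1]];
        for (int i = 0; i < out.length; i++) {
            for (int j = 0; j < out[i].length; j++) {
                out[i][j] = x[i % r][j % c];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // The [1,2][3,4] example with repeat [2,3] from the Javadoc above
        int[][] out = tile(new int[][]{{1, 2}, {3, 4}}, new int[]{2, 3});
        System.out.println(Arrays.deepToString(out));
        // [[1, 2, 1, 2, 1, 2], [3, 4, 3, 4, 3, 4], [1, 2, 1, 2, 1, 2], [3, 4, 3, 4, 3, 4]]
    }
}
```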
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
SDVariable repeat) |
SDVariable |
SDBitwise.toggleBits(String name,
SDVariable x)
Flip bits
|
SDVariable |
SDMath.trace(SDVariable in) |
SDVariable |
SDMath.trace(String name,
SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar with the trace - i.e., sum of the main diagonal.
For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
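For the rank-2 case, the trace is simply the sum of the main diagonal, as a minimal plain-Java sketch shows:

```java
public class TraceSketch {
    // Rank-2 case: sum of the main diagonal
    static double trace(double[][] in) {
        double sum = 0;
        for (int i = 0; i < in.length; i++) {
            sum += in[i][i];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(trace(new double[][]{{1, 2}, {3, 4}})); // 5.0
    }
}
```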
SDVariable |
SDBaseOps.transpose(SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDBaseOps.transpose(String name,
SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDRandom.uniform(double min,
double max,
long... shape) |
SDVariable |
SDRandom.uniform(double min,
double max,
SDVariable shape) |
SDVariable |
SDRandom.uniform(double min,
double max,
SDVariable shape,
DataType dataType) |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
long... shape)
Generate a new random SDVariable, where values are randomly sampled according to a uniform distribution,
U(min,max)
See SDRandom.uniform(double, double, SDVariable) for the equivalent function where the shape is
specified as a SDVariable instead |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
SDVariable shape)
As per
SDRandom.uniform(double, double, SDVariable, DataType) but with Float32 output |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
SDVariable shape,
DataType dataType)
Generate a new random SDVariable, where values are randomly sampled according to a uniform distribution,
U(min,max).
|
SDVariable |
SDBaseOps.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMax(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMean(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMin(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentProd(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentSum(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis) |
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis,
int num) |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis) |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
|
protected SDVariable |
SDOps.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName) |
protected abstract SDVariable |
SDBaseOps.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName) |
protected abstract SDVariable[] |
SDBaseOps.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames) |
SDVariable |
SDCNN.upsampling2d(SDVariable input,
boolean nchw,
int scaleH,
int scaleW)
|
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scale)
See
SDCNN.upsampling2d(String, SDVariable, boolean, int, int) ,
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
boolean nchw,
int scaleH,
int scaleW)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scale)
See
SDCNN.upsampling2d(String, SDVariable, boolean, int, int) ,
scale is used for both height and width dimensions. |
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
int... dimensions) |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights)
TODO
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(String name,
SDVariable targets,
SDVariable inputs,
SDVariable weights)
TODO
|
SDVariable[] |
SDBaseOps.whileLoop(SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
|
SDVariable[] |
SDBaseOps.whileLoop(String[] outputNames,
String loopName,
SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
Constructs a While loop using the TensorFlow-style control flow operations (Switch, Merge, Enter, Exit, and NextIteration)
Repeatedly executes body on the loop variables and updates them with the results, until cond evaluates to false
Note that cond and body lambdas are only called once to construct the graph.
|
SDVariable[] |
SDBaseOps.whileLoop(String loopName,
SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
|
SDVariable |
SDBitwise.xor(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.xor(SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.xor(String name,
SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(String name,
SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.zeroFraction(SDVariable input)
Full array zero fraction array reduction operation, optionally along specified dimensions: out = (count(x == 0) / length(x))
|
SDVariable |
SDMath.zeroFraction(String name,
SDVariable input)
Full array zero fraction array reduction operation, optionally along specified dimensions: out = (count(x == 0) / length(x))
|
SDVariable |
SDBaseOps.zerosLike(SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.zerosLike(String name,
SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights)
This operation performs dot product attention on the given timeseries input with the given queries
|
List<SDVariable> |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i)
similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q
Optionally with normalization step:
similarity(k, q) = softmax(k * q / sqrt(size(q)))
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p.
|
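The scaled dot-product attention formula above can be sketched in plain Java for a single query (illustration only; the real op works on batched timeseries INDArrays and optionally returns the attention weights):

```java
public class AttentionSketch {
    // weights = softmax(k_i . q / sqrt(size(q))), out = sum_i weights[i] * v_i
    static double[] attend(double[][] keys, double[][] values, double[] q) {
        int n = keys.length;
        double[] scores = new double[n];
        double norm = Math.sqrt(q.length);
        for (int i = 0; i < n; i++) {
            double dot = 0;
            for (int d = 0; d < q.length; d++) dot += keys[i][d] * q[d];
            scores[i] = dot / norm;   // scaled similarity
        }
        double max = Double.NEGATIVE_INFINITY, sum = 0;
        for (double s : scores) max = Math.max(max, s);
        for (int i = 0; i < n; i++) {
            scores[i] = Math.exp(scores[i] - max);   // numerically stable softmax
            sum += scores[i];
        }
        double[] out = new double[values[0].length];
        for (int i = 0; i < n; i++) {
            for (int d = 0; d < out.length; d++) {
                out[d] += (scores[i] / sum) * values[i][d];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Identical keys -> uniform weights -> output is the mean of the values
        double[] out = attend(new double[][]{{1, 1}, {1, 1}},
                              new double[][]{{2, 0}, {4, 2}},
                              new double[]{1, 0});
        System.out.println(java.util.Arrays.toString(out)); // [3.0, 1.0]
    }
}
```

The multi-head variant projects q, k, v through Wq, Wk, Wv per head, runs this attention per head, then concatenates and projects through Wo.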
List<SDVariable> |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights)
This performs multi-headed dot product attention on the given timeseries input
|
List<SDVariable> |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo
head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v)
Optionally with normalization when calculating the attention for each head.
|
Modifier and Type | Method and Description |
---|---|
SDVariable |
SDMath.abs(SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDMath.abs(String name,
SDVariable x)
Elementwise absolute value operation: out = abs(x)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss: sum_i abs( label[i] - predictions[i] )
|
SDVariable |
SDMath.acos(SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acos(String name,
SDVariable x)
Elementwise acos (arccosine, inverse cosine) operation: out = arccos(x)
|
SDVariable |
SDMath.acosh(SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDMath.acosh(String name,
SDVariable x)
Elementwise acosh (inverse hyperbolic cosine) function: out = acosh(x)
|
SDVariable |
SDImage.adjustContrast(String name,
SDVariable in,
SDVariable factor)
Adjusts contrast of RGB or grayscale images.
|
SDVariable |
SDImage.adjustHue(String name,
SDVariable in,
SDVariable delta)
Adjust hue of RGB image
|
SDVariable |
SDImage.adjustSaturation(String name,
SDVariable in,
SDVariable factor)
Adjust saturation of RGB images
|
SDVariable |
SDBaseOps.all(SDVariable x,
int... dimensions)
|
SDVariable |
SDBaseOps.all(String name,
SDVariable x,
int... dimensions)
Boolean AND array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.amax(SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amax(String name,
SDVariable in,
int... dimensions)
Absolute max array reduction operation, optionally along specified dimensions: out = max(abs(x))
|
SDVariable |
SDMath.amean(SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amean(String name,
SDVariable in,
int... dimensions)
Absolute mean array reduction operation, optionally along specified dimensions: out = mean(abs(x))
|
SDVariable |
SDMath.amin(SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDMath.amin(String name,
SDVariable in,
int... dimensions)
Absolute min array reduction operation, optionally along specified dimensions: out = min(abs(x))
|
SDVariable |
SDBitwise.and(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.and(SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.and(String name,
SDVariable x,
SDVariable y)
Bitwise AND operation.
|
SDVariable |
SDMath.and(String name,
SDVariable x,
SDVariable y)
Boolean AND operation: elementwise (x != 0) && (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.any(SDVariable x,
int... dimensions)
|
SDVariable |
SDBaseOps.any(String name,
SDVariable x,
int... dimensions)
Boolean OR array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.argmax(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.argmax(SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
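To illustrate the documented argmax semantics, here is a minimal plain-Java sketch (the class and method names are illustrative only, not part of the SameDiff API): argmax along dimension 1 of a 2D array returns one index per row, the position of that row's maximum.

```java
// Illustrative sketch of argmax reduction along dimension 1 of a 2D array.
// With keepDims = true the real op would keep rank 2 with size 1 in the
// reduced dimension; this sketch returns the raw per-row indices.
public class ArgmaxSketch {
    public static int[] argmaxDim1(double[][] in) {
        int[] out = new int[in.length];
        for (int i = 0; i < in.length; i++) {
            int best = 0;
            for (int j = 1; j < in[i].length; j++) {
                if (in[i][j] > in[i][best]) best = j;
            }
            out[i] = best;
        }
        return out;
    }
}
```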
SDVariable |
SDBaseOps.argmax(String name,
SDVariable in,
int... dimensions)
Argmax array reduction operation, optionally along specified dimensions.
Output values are the index of the maximum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmin(SDVariable in,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.argmin(SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension. Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.argmin(String name,
SDVariable in,
int... dimensions)
Argmin array reduction operation, optionally along specified dimensions.
Output values are the index of the minimum value of each slice along the specified dimension |
SDVariable |
SDMath.asin(SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asin(String name,
SDVariable x)
Elementwise asin (arcsin, inverse sine) operation: out = arcsin(x)
|
SDVariable |
SDMath.asinh(SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDMath.asinh(String name,
SDVariable x)
Elementwise asinh (inverse hyperbolic sine) function: out = asinh(x)
|
SDVariable |
SDBaseOps.assign(SDVariable in,
Number value)
Return an array with equal shape to the input, but all elements set to 'value'
|
SDVariable |
SDBaseOps.assign(SDVariable x,
SDVariable y)
Assign/copy op: out = x.assign(y).
|
SDVariable |
SDBaseOps.assign(String name,
SDVariable in,
Number value)
Return an array with equal shape to the input, but all elements set to 'value'
|
SDVariable |
SDBaseOps.assign(String name,
SDVariable x,
SDVariable y)
Assign/copy op: out = x.assign(y).
|
SDVariable |
SDMath.asum(SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.asum(String name,
SDVariable in,
int... dimensions)
Absolute sum array reduction operation, optionally along specified dimensions: out = sum(abs(x))
|
SDVariable |
SDMath.atan(SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan(String name,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = arctangent(x)
|
SDVariable |
SDMath.atan2(SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
|
SDVariable |
SDMath.atan2(String name,
SDVariable y,
SDVariable x)
Elementwise atan (arctangent, inverse tangent) operation: out = atan2(x,y).
|
SDVariable |
SDMath.atanh(SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDMath.atanh(String name,
SDVariable x)
Elementwise atanh (inverse hyperbolic tangent) function: out = atanh(x)
|
SDVariable |
SDCNN.avgPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
|
SDVariable |
SDCNN.avgPooling2d(String name,
SDVariable input,
Pooling2DConfig pooling2DConfig)
2D Convolution layer operation - average pooling 2d
|
SDVariable |
SDCNN.avgPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
|
SDVariable |
SDCNN.avgPooling3d(String name,
SDVariable input,
Pooling3DConfig pooling3DConfig)
3D convolution layer operation - average pooling 3d
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable[] |
SDBaseOps.batchMmul(String[] names,
SDVariable[] matricesA,
SDVariable[] matricesB,
boolean transposeA,
boolean transposeB)
Matrix multiply a batch of matrices.
|
SDVariable |
SDNN.batchNorm(SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
Batch norm operation.
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int... axis)
Batch normalization with optional application of gamma/beta args.
|
SDVariable |
SDNN.batchNorm(String name,
SDVariable input,
SDVariable mean,
SDVariable variance,
SDVariable gamma,
SDVariable beta,
double epsilon,
int... axis)
Neural network batch normalization operation.
For details, see https://arxiv.org/abs/1502.03167 |
SDVariable |
SDCNN.batchToSpace(SDVariable x,
int[] blocks,
int[][] crops) |
SDVariable |
SDCNN.batchToSpace(String name,
SDVariable x,
int[] blocks,
int[][] crops)
Convolution 2d layer batch to space operation on 4d input.
|
SDVariable |
SDRandom.bernoulli(double p,
SDVariable shape) |
SDVariable |
SDRandom.bernoulli(String name,
double p,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to a Bernoulli distribution,
with the specified probability.
|
SDVariable |
SDMath.betainc(String name,
SDVariable a,
SDVariable b,
SDVariable x)
Compute the regularized incomplete beta integral
|
SDVariable |
SDNN.biasAdd(SDVariable input,
SDVariable bias,
boolean nchw) |
SDVariable |
SDNN.biasAdd(String name,
SDVariable input,
SDVariable bias,
boolean nchw)
Bias addition operation: a special case of addition, typically used with CNN 4D activations and a 1D bias vector
|
SDVariable |
SDMath.bitRotl(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the left, i.e.
|
SDVariable |
SDMath.bitRotr(String name,
SDVariable x,
SDVariable shift)
Roll integer bits to the right, i.e.
|
SDVariable |
SDBitwise.bitsHammingDistance(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.bitsHammingDistance(String name,
SDVariable x,
SDVariable y)
Bitwise Hamming distance reduction over all elements of both input arrays.
For example, if x=01100000 and y=10100000 then the bitwise Hamming distance is 2 (due to differences at positions 0 and 1) |
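The bitwise Hamming distance described above can be sketched in plain Java (illustrative names, not the SameDiff API): XOR each pair of elements and sum the population counts.

```java
// Illustrative sketch: sum over elements of popcount(x[i] ^ y[i]).
public class HammingSketch {
    public static long bitsHammingDistance(long[] x, long[] y) {
        long d = 0;
        for (int i = 0; i < x.length; i++) {
            d += Long.bitCount(x[i] ^ y[i]);  // differing bit positions
        }
        return d;
    }
}
```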
SDVariable |
SDMath.bitShift(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the left, i.e.
|
SDVariable |
SDMath.bitShiftRight(String name,
SDVariable x,
SDVariable shift)
Shift integer bits to the right, i.e.
|
SDVariable |
SDBaseOps.castTo(SDVariable toCast,
DataType toType) |
SDVariable |
SDBaseOps.castTo(String name,
SDVariable toCast,
DataType toType) |
SDVariable |
SDMath.ceil(SDVariable x)
Element-wise ceiling function: out = ceil(x).
|
SDVariable |
SDMath.ceil(String name,
SDVariable x)
Element-wise ceiling function: out = ceil(x).
|
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue)
Clipping by L2 norm
if l2Norm(x) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in) |
SDVariable |
SDMath.clipByNorm(SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue)
Clipping by L2 norm
if l2Norm(x) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in) |
SDVariable |
SDMath.clipByNorm(String name,
SDVariable x,
double clipValue,
int... dimensions)
Clipping by L2 norm, optionally along dimension(s)
if l2Norm(x,dimension) < clipValue, then the input is returned unmodified. Otherwise, out[i] = in[i] * clipValue / l2Norm(in, dimensions), where each value is clipped according to the corresponding l2Norm along the specified dimensions |
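A minimal plain-Java sketch of the L2-norm clipping rule above (illustrative names, not the SameDiff API): if the norm exceeds the clip value, rescale so the norm equals it; otherwise return the input unchanged.

```java
// Illustrative sketch of clip-by-L2-norm over a flat array.
public class ClipByNormSketch {
    public static double[] clipByNorm(double[] in, double clipValue) {
        double norm = 0;
        for (double v : in) norm += v * v;
        norm = Math.sqrt(norm);
        if (norm <= clipValue) return in.clone();   // within the limit: unmodified
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = in[i] * clipValue / norm;      // rescale to norm == clipValue
        }
        return out;
    }
}
```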
SDVariable |
SDMath.clipByValue(SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if clipValueMin <= in[i] <= clipValueMax; out[i] = clipValueMin if in[i] < clipValueMin; out[i] = clipValueMax if in[i] > clipValueMax |
SDVariable |
SDMath.clipByValue(String name,
SDVariable x,
double clipValueMin,
double clipValueMax)
Element-wise clipping function:
out[i] = in[i] if clipValueMin <= in[i] <= clipValueMax; out[i] = clipValueMin if in[i] < clipValueMin; out[i] = clipValueMax if in[i] > clipValueMax |
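The element-wise clipping rule can be sketched in plain Java (illustrative names, not the SameDiff API):

```java
// Illustrative sketch: clamp each element into [min, max].
public class ClipByValueSketch {
    public static double[] clipByValue(double[] in, double min, double max) {
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = Math.max(min, Math.min(max, in[i]));
        }
        return out;
    }
}
```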
SDVariable |
SDCNN.col2Im(SDVariable in,
Conv2DConfig config)
|
SDVariable |
SDCNN.col2Im(String name,
SDVariable in,
Conv2DConfig config)
col2im operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.concat(int dimension,
SDVariable... inputs) |
SDVariable |
SDBaseOps.concat(String name,
int dimension,
SDVariable... inputs)
Concatenate a set of inputs along the specified dimension.
Note that inputs must have identical rank and identical dimensions, other than the dimension to stack on. For example, if 2 inputs have shape [a, x, c] and [a, y, c] and dimension = 1, then the output has shape [a, x+y, c] |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable predictions) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights) |
SDVariable |
SDMath.confusionMatrix(SDVariable labels,
SDVariable pred,
SDVariable weights) |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred) |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
DataType dataType)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
|
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
Integer numClasses)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
For example, if labels = [0, 1, 1], predicted = [0, 2, 1], and numClasses=4 then output is: [1, 0, 0, 0] [0, 1, 1, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
For example, if labels = [0, 1, 1], predicted = [0, 2, 1], numClasses = 4, and weights = [1, 2, 3], then output is: [1, 0, 0, 0] [0, 3, 2, 0] [0, 0, 0, 0] [0, 0, 0, 0] |
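The confusion-matrix construction above amounts to out[label][prediction] += weight, with weight = 1 in the unweighted case. A plain-Java sketch (illustrative names, not the SameDiff API):

```java
// Illustrative sketch of a weighted confusion matrix:
// out[labels[i]][pred[i]] accumulates weights[i] (or 1 if unweighted).
public class ConfusionMatrixSketch {
    public static int[][] confusionMatrix(int[] labels, int[] pred,
                                          int numClasses, int[] weights) {
        int[][] out = new int[numClasses][numClasses];
        for (int i = 0; i < labels.length; i++) {
            out[labels[i]][pred[i]] += (weights == null ? 1 : weights[i]);
        }
        return out;
    }
}
```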
SDVariable |
SDMath.confusionMatrix(String name,
SDVariable labels,
SDVariable pred,
SDVariable weights)
Compute the 2d confusion matrix of size [numClasses, numClasses] from a pair of labels and predictions, both of
which are represented as integer values.
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
Conv1DConfig conv1DConfig)
|
SDVariable |
SDCNN.conv1d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv1DConfig conv1DConfig)
Conv1d operation.
|
SDVariable |
SDCNN.conv2d(SDVariable[] inputs,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable[] inputs,
Conv2DConfig config)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
Conv2DConfig config)
|
SDVariable |
SDCNN.conv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
Conv2DConfig config)
2D Convolution operation with optional bias
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
Conv3DConfig conv3DConfig)
|
SDVariable |
SDCNN.conv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
Conv3DConfig conv3DConfig)
Convolution 3D operation with optional bias
|
SDVariable |
SDMath.cos(SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cos(String name,
SDVariable x)
Elementwise cosine operation: out = cos(x)
|
SDVariable |
SDMath.cosh(SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosh(String name,
SDVariable x)
Elementwise cosh (hyperbolic cosine) operation: out = cosh(x)
|
SDVariable |
SDMath.cosineDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.cosineDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine distance reduction operation.
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
int dimension)
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce,
int dimension)
|
SDVariable |
SDLoss.cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is
equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. |
SDVariable |
SDMath.cosineSimilarity(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.cosineSimilarity(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Cosine similarity pairwise reduction operation.
|
SDVariable |
SDMath.countNonZero(SDVariable input,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countNonZero(String name,
SDVariable input,
int... dimensions)
Count non zero array reduction operation, optionally along specified dimensions: out = count(x != 0)
|
SDVariable |
SDMath.countZero(SDVariable input,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDMath.countZero(String name,
SDVariable input,
int... dimensions)
Count zero array reduction operation, optionally along specified dimensions: out = count(x == 0)
|
SDVariable |
SDImage.cropAndResize(String name,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
CropAndResize.Method method,
double extrapolationValue)
Given an input image and some crop boxes, extract out the image subsets and resize them to the specified size.
|
SDVariable |
SDMath.cross(SDVariable a,
SDVariable b) |
SDVariable |
SDMath.cross(String name,
SDVariable a,
SDVariable b)
Returns the pair-wise cross product of equal size arrays a and b, satisfying ||a x b|| = ||a|| ||b|| sin(theta).
|
SDVariable |
SDMath.cube(SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDMath.cube(String name,
SDVariable x)
Element-wise cube function: out = x^3
|
SDVariable |
SDBaseOps.cumprod(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
SDBaseOps.cumprod(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative product operation.
For input [a, b, c], output is: exclusive=false, reverse=false: [a, a*b, a*b*c]; exclusive=true, reverse=false: [1, a, a*b]; exclusive=false, reverse=true: [a*b*c, b*c, c]; exclusive=true, reverse=true: [b*c, c, 1] |
SDVariable |
SDBaseOps.cumsum(SDVariable in,
boolean exclusive,
boolean reverse,
int... axis) |
SDVariable |
SDBaseOps.cumsum(String name,
SDVariable in,
boolean exclusive,
boolean reverse,
int... axis)
Cumulative sum operation.
For input [a, b, c], output is: exclusive=false, reverse=false: [a, a+b, a+b+c]; exclusive=true, reverse=false: [0, a, a+b]; exclusive=false, reverse=true: [a+b+c, b+c, c]; exclusive=true, reverse=true: [b+c, c, 0] |
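The four cumulative-sum modes can be sketched in plain Java (illustrative names, not the SameDiff API): walk the array forwards or backwards, and either write the running sum before or after adding the current element.

```java
// Illustrative sketch of cumsum with exclusive/reverse flags:
// exclusive=false, reverse=false -> [a, a+b, a+b+c]
// exclusive=true,  reverse=false -> [0, a, a+b]
// exclusive=false, reverse=true  -> [a+b+c, b+c, c]
// exclusive=true,  reverse=true  -> [b+c, c, 0]
public class CumsumSketch {
    public static double[] cumsum(double[] in, boolean exclusive, boolean reverse) {
        int n = in.length;
        double[] out = new double[n];
        double run = 0;
        for (int k = 0; k < n; k++) {
            int i = reverse ? n - 1 - k : k;   // traversal direction
            if (exclusive) {
                out[i] = run;                  // write sum of *previous* elements
                run += in[i];
            } else {
                run += in[i];
                out[i] = run;                  // write sum including this element
            }
        }
        return out;
    }
}
```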
SDVariable |
SDCNN.deconv2d(SDVariable[] inputs,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable[] inputs,
DeConv2DConfig deconv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
DeConv2DConfig deconv2DConfig)
|
SDVariable |
SDCNN.deconv2d(String name,
SDVariable layerInput,
SDVariable weights,
SDVariable bias,
DeConv2DConfig deconv2DConfig)
2D deconvolution operation with optional bias
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
DeConv3DConfig config)
|
SDVariable |
SDCNN.deconv3d(String name,
SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config)
3D CNN deconvolution operation with optional bias
|
SDVariable |
SDCNN.depthToSpace(SDVariable x,
int blockSize,
String dataFormat)
|
SDVariable |
SDCNN.depthToSpace(String name,
SDVariable x,
int blockSize,
String dataFormat)
Convolution 2d layer depth to space operation on 4d input.
Reduces the input channels dimension by rearranging data into larger spatial dimensions. Example: if input has shape [mb, 8, 2, 2] and block size is 2, then output size is [mb, 8/(2*2), 2*2, 2*2] = [mb, 2, 4, 4] |
SDVariable |
SDCNN.depthWiseConv2d(SDVariable[] inputs,
Conv2DConfig depthConv2DConfig)
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable[] inputs,
Conv2DConfig depthConv2DConfig)
Depth-wise convolution 2D operation.
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.depthWiseConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable bias,
Conv2DConfig config)
Depth-wise 2D convolution operation with optional bias
|
SDVariable |
SDMath.diag(SDVariable x) |
SDVariable |
SDMath.diag(String name,
SDVariable x)
Returns an output variable with diagonal values equal to the specified values; off-diagonal values will be set to 0
For example, if input = [1,2,3], then output is given by: [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] Higher input ranks are also supported: if input has shape [a,...,R-1] then output[i,...,k,i,...,k] = input[i,...,k]. |
SDVariable |
SDMath.diagPart(SDVariable x) |
SDVariable |
SDMath.diagPart(String name,
SDVariable x)
Extract the diagonal part from the input array.
If input is [ 1, 0, 0] [ 0, 2, 0] [ 0, 0, 3] then output is [1, 2, 3]. Supports higher dimensions: in general, out[i,...,k] = in[i,...,k,i,...,k] |
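For the 1D/2D case, diag and diagPart are inverses of each other, as a plain-Java sketch shows (illustrative names, not the SameDiff API):

```java
// Illustrative sketch of diag / diagPart in the vector/matrix case.
public class DiagSketch {
    // diag: place a vector on the main diagonal of a zero matrix
    public static double[][] diag(double[] x) {
        double[][] out = new double[x.length][x.length];
        for (int i = 0; i < x.length; i++) out[i][i] = x[i];
        return out;
    }
    // diagPart: extract the main diagonal of a square matrix
    public static double[] diagPart(double[][] m) {
        double[] out = new double[m.length];
        for (int i = 0; i < m.length; i++) out[i] = m[i][i];
        return out;
    }
}
```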
SDVariable |
SDCNN.dilation2D(SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
|
SDVariable |
SDCNN.dilation2D(String name,
SDVariable df,
SDVariable weights,
int[] strides,
int[] rates,
boolean isSameMode)
TODO doc string
|
SDVariable |
SDBaseOps.dot(SDVariable x,
SDVariable y,
int... dimensions)
Vector dot product reduction over the specified dimensions: out = sum_i x[i] * y[i]
|
SDVariable |
SDBaseOps.dot(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Vector dot product reduction over the specified dimensions: out = sum_i x[i] * y[i]
|
SDVariable |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
|
List<SDVariable> |
SDNN.dotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights)
This operation performs dot product attention on the given timeseries input with the given queries
|
SDVariable |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled)
This operation performs dot product attention on the given timeseries input with the given queries
|
List<SDVariable> |
SDNN.dotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights)
This operation performs dot product attention on the given timeseries input with the given queries
out = sum(similarity(k_i, q) * v_i)
similarity(k, q) = softmax(k * q) where k * q is the dot product of k and q
Optionally with normalization step:
similarity(k, q) = softmax(k * q / sqrt(size(q)))
See also "Attention is all you need" (https://arxiv.org/abs/1706.03762, p.
|
SDVariable |
SDNN.dropout(SDVariable input,
double inputRetainProbability) |
SDVariable |
SDNN.dropout(String name,
SDVariable input,
double inputRetainProbability) |
SDVariable[] |
SDBaseOps.dynamicPartition(SDVariable x,
SDVariable partitions,
int numPartitions) |
SDVariable[] |
SDBaseOps.dynamicPartition(String[] name,
SDVariable x,
SDVariable partitions,
int numPartitions)
Dynamically partition the input variable values into the specified number of partitions, using the indices.
Example: |
SDVariable |
SDBaseOps.dynamicStitch(SDVariable[] indices,
SDVariable[] x) |
SDVariable |
SDBaseOps.dynamicStitch(String name,
SDVariable[] indices,
SDVariable[] x)
Dynamically merge the specified input arrays into a single array, using the specified indices
|
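Dynamic stitch merges partitioned values back into one array by scattering each value to its target index: out[indices[p][j]] = x[p][j]. A plain-Java sketch (illustrative names, not the SameDiff API):

```java
// Illustrative sketch of dynamicStitch: scatter each partition's values
// into the output at the positions given by the matching indices array.
public class DynamicStitchSketch {
    public static double[] dynamicStitch(int[][] indices, double[][] x) {
        int size = 0;
        for (int[] idx : indices) {
            for (int i : idx) size = Math.max(size, i + 1);  // output length
        }
        double[] out = new double[size];
        for (int p = 0; p < indices.length; p++) {
            for (int j = 0; j < indices[p].length; j++) {
                out[indices[p][j]] = x[p][j];
            }
        }
        return out;
    }
}
```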
SDVariable |
SDNN.elu(SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0; out = a * (exp(x) - 1) if x <= 0, with constant a = 1.0 |
SDVariable |
SDNN.elu(String name,
SDVariable x)
Element-wise exponential linear unit (ELU) function:
out = x if x > 0; out = a * (exp(x) - 1) if x <= 0, with constant a = 1.0 |
SDVariable |
SDMath.entropy(SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDMath.entropy(String name,
SDVariable in,
int... dimensions)
Entropy reduction: -sum(x * log(x))
|
SDVariable |
SDBaseOps.eq(SDVariable x,
double y)
Equals operation: elementwise x == y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.eq(SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
double y)
Equals operation: elementwise x == y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.eq(String name,
SDVariable x,
SDVariable y)
Equal to operation: elementwise x == y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.erf(SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erf(String name,
SDVariable x)
Element-wise Gaussian error function - out = erf(in)
|
SDVariable |
SDMath.erfc(SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.erfc(String name,
SDVariable x)
Element-wise complementary Gaussian error function - out = erfc(in) = 1 - erf(in)
|
SDVariable |
SDMath.euclideanDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.euclideanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Euclidean distance (l2 norm, l2 distance) reduction operation.
|
SDVariable |
SDMath.exp(SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDMath.exp(String name,
SDVariable x)
Elementwise exponent function: out = exp(x) = 2.71828...^x
|
SDVariable |
SDBaseOps.expandDims(SDVariable x,
int axis) |
SDVariable |
SDBaseOps.expandDims(String name,
SDVariable x,
int axis)
Reshape the input by adding a 1 at the specified location.
For example, if input has shape [a, b], then output shape is: axis = 0: [1, a, b] axis = 1: [a, 1, b] axis = 2: [a, b, 1] |
SDVariable |
SDMath.expm1(SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1
|
SDVariable |
SDMath.expm1(String name,
SDVariable x)
Elementwise exponent-minus-one function: out = exp(x) - 1
|
SDVariable |
SDRandom.exponential(double lambda,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x)
|
SDVariable |
SDRandom.exponential(String name,
double lambda,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to an exponential distribution:
P(x) = lambda * exp(-lambda * x)
|
SDVariable |
SDImage.extractImagePatches(String name,
SDVariable image,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode)
Given an input image, extract image patches (of size kSizes: h x w) and place them in the depth dimension.
|
SDVariable |
SDCNN.extractImagePatches(String name,
SDVariable input,
int kH,
int kW,
int sH,
int sW,
int rH,
int rW,
boolean sameMode)
Extract image patches
|
SDVariable |
SDMath.eye(SDVariable rows)
As per
SDMath.eye(int) but with the number of rows specified as a scalar SDVariable |
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols)
As per
SDMath.eye(int, int) but with the number of rows/columns specified as scalar SDVariables |
SDVariable |
SDMath.eye(SDVariable rows,
SDVariable cols,
SDVariable batchDimension)
As per
SDMath.eye(int, int, DataType, int...) but with the number of rows/columns specified as scalar SDVariables,
and the batch dimension specified as a 1D SDVariable |
SDVariable |
SDMath.eye(String name,
SDVariable rows)
As per
SDMath.eye(String, int) but with the number of rows specified as a scalar SDVariable |
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols)
As per
SDMath.eye(String, int, int) but with the number of rows/columns specified as scalar SDVariables |
SDVariable |
SDMath.eye(String name,
SDVariable rows,
SDVariable cols,
SDVariable batchDimension)
As per
#eye(String, int, int, int...) but with the number of rows/columns specified as scalar SDVariables,
and the batch dimension specified as a 1D SDVariable |
SDVariable |
SDBaseOps.fill(SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDBaseOps.fill(String name,
SDVariable shape,
DataType dataType,
double value)
Generate an output variable with the specified (dynamic) shape with all elements set to the specified value
|
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.firstIndex(SDVariable in,
Condition condition,
int... dimensions) |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.firstIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
First index reduction operation.
Returns a variable that contains the index of the first element that matches the specified condition (for each slice along the specified dimensions) |
SDVariable |
SDMath.floor(SDVariable x)
Element-wise floor function: out = floor(x).
|
SDVariable |
SDMath.floor(String name,
SDVariable x)
Element-wise floor function: out = floor(x).
|
SDVariable[] |
SDNN.fusedBatchNorm(String[] names,
SDVariable x,
SDVariable scale,
SDVariable offset,
SDVariable dataFormat,
SDVariable isTraining)
Batch normalization
|
SDVariable |
SDBaseOps.gather(SDVariable df,
int[] indices,
int axis) |
SDVariable |
SDBaseOps.gather(SDVariable df,
SDVariable indices,
int axis) |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
int[] indices,
int axis)
Gather slices from the input variable where the indices are specified as fixed int[] values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
SDVariable |
SDBaseOps.gather(String name,
SDVariable df,
SDVariable indices,
int axis)
Gather slices from the input variable where the indices are specified as dynamic SDVariable values.
Output shape is same as input shape, except for axis dimension, which has size equal to indices.length. |
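Gather along axis 0 of a 2D array simply selects rows in the order given by the indices, so the output has indices.length rows. A plain-Java sketch (illustrative names, not the SameDiff API):

```java
// Illustrative sketch of gather along axis 0: select rows by index.
public class GatherSketch {
    public static double[][] gather(double[][] in, int[] indices) {
        double[][] out = new double[indices.length][];
        for (int i = 0; i < indices.length; i++) {
            out[i] = in[indices[i]].clone();  // row indices[i] becomes output row i
        }
        return out;
    }
}
```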
SDVariable |
SDBaseOps.gatherNd(SDVariable df,
SDVariable indices)
Gather slices from the input variable, where the slices are specified by the rows of the N-dimensional 'indices' array
|
SDVariable |
SDBaseOps.gatherNd(String name,
SDVariable df,
SDVariable indices)
Gather slices from the input variable, where the slices are specified by the rows of the N-dimensional 'indices' array
|
SDVariable |
SDNN.gelu(SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
SDVariable |
SDNN.gelu(String name,
SDVariable x)
GELU activation function - Gaussian Error Linear Units
For more details, see Gaussian Error Linear Units (GELUs) - https://arxiv.org/abs/1606.08415 This method uses the sigmoid approximation |
protected SDVariable |
SDBaseOps.gradientBackwardsMarker(SDVariable x)
Intended for internal/developer use
|
protected SDVariable |
SDBaseOps.gradientBackwardsMarker(String name,
SDVariable x)
Intended for internal/developer use
|
GRUCellOutputs |
SDRNN.gru(SDVariable x,
SDVariable hLast,
GRUWeights weights)
|
GRUCellOutputs |
SDRNN.gru(String baseName,
SDVariable x,
SDVariable hLast,
GRUWeights weights)
The GRU cell.
|
SDVariable |
SDBaseOps.gt(SDVariable x,
double y)
Greater than operation: elementwise x > y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gt(SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
double y)
Greater than operation: elementwise x > y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gt(String name,
SDVariable x,
SDVariable y)
Greater than operation: elementwise x > y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gte(SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gte(SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
double y)
Greater than or equals operation: elementwise x >= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.gte(String name,
SDVariable x,
SDVariable y)
Greater than or equal to operation: elementwise x >= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.hammingDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.hammingDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Hamming distance reduction operation.
|
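The Hamming distance reduction above counts the positions at which two arrays differ. A minimal plain-Java sketch of the semantics (not the SameDiff API):

```java
public class HammingSketch {
    // Hamming distance: number of positions at which x and y differ
    static long hammingDistance(int[] x, int[] y) {
        long count = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] != y[i]) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        assert hammingDistance(new int[]{1, 0, 1, 1}, new int[]{1, 1, 1, 0}) == 2;
    }
}
```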
SDVariable |
SDNN.hardSigmoid(SDVariable in)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5 out[i] = 0.2*in[i]+0.5 if -2.5 < in[i] < 2.5 out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardSigmoid(String name,
SDVariable in)
Element-wise hard sigmoid function:
out[i] = 0 if in[i] <= -2.5 out[i] = 0.2*in[i]+0.5 if -2.5 < in[i] < 2.5 out[i] = 1 if in[i] >= 2.5 |
SDVariable |
SDNN.hardTanh(SDVariable in)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1 out[i] = in[i] if -1 < in[i] < 1 out[i] = 1 if in[i] >= 1 |
SDVariable |
SDNN.hardTanh(String name,
SDVariable in)
Element-wise hard tanh function:
out[i] = -1 if in[i] <= -1 out[i] = in[i] if -1 < in[i] < 1 out[i] = 1 if in[i] >= 1 |
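The piecewise definitions of hardSigmoid and hardTanh above can be sketched directly in plain Java (an illustration of the element-wise semantics, not the SameDiff API):

```java
public class HardActivations {
    // hardSigmoid: clamp the line 0.2*x + 0.5 to [0, 1]
    static double hardSigmoid(double in) {
        if (in <= -2.5) return 0.0;
        if (in >= 2.5) return 1.0;
        return 0.2 * in + 0.5;
    }

    // hardTanh: clamp the identity to [-1, 1]
    static double hardTanh(double in) {
        if (in <= -1.0) return -1.0;
        if (in >= 1.0) return 1.0;
        return in;
    }

    public static void main(String[] args) {
        assert hardSigmoid(0.0) == 0.5;
        assert hardSigmoid(3.0) == 1.0;
        assert hardTanh(-2.0) == -1.0;
        assert hardTanh(0.5) == 0.5;
    }
}
```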
SDVariable |
SDNN.hardTanhDerivative(SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function -
SDNN.hardTanh(SDVariable) |
SDVariable |
SDNN.hardTanhDerivative(String name,
SDVariable x)
Derivative (dOut/dIn) of the element-wise hard Tanh function -
SDNN.hardTanh(SDVariable) |
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
double delta)
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce,
double delta)
|
SDVariable |
SDLoss.huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
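The Huber loss above behaves quadratically for small errors and linearly for large ones, which makes it robust to outliers. A plain-Java sketch of the per-element formula (not the SameDiff API):

```java
public class HuberSketch {
    // Per-element Huber loss:
    //   0.5 * e^2                  if |e| <= delta  (quadratic region)
    //   delta * (|e| - 0.5*delta)  otherwise        (linear region)
    static double huber(double error, double delta) {
        double a = Math.abs(error);
        return a <= delta ? 0.5 * a * a : delta * (a - 0.5 * delta);
    }

    public static void main(String[] args) {
        assert huber(0.5, 1.0) == 0.125; // quadratic: 0.5 * 0.25
        assert huber(2.0, 1.0) == 1.5;   // linear: 1 * (2 - 0.5)
    }
}
```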
SDVariable |
SDMath.iamax(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamax(String name,
SDVariable in,
int... dimensions)
Index of the max absolute value: argmax(abs(in))
|
SDVariable |
SDMath.iamin(SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(String name,
SDVariable in,
boolean keepDims,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
SDVariable |
SDMath.iamin(String name,
SDVariable in,
int... dimensions)
Index of the min absolute value: argmin(abs(in))
|
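The iamax and iamin reductions above return the index of the largest and smallest absolute value respectively. A plain-Java sketch of the full-array case (not the SameDiff API):

```java
public class AbsIndexSketch {
    // iamax: index of the max absolute value, argmax(abs(in))
    static int iamax(double[] in) {
        int idx = 0;
        for (int i = 1; i < in.length; i++) {
            if (Math.abs(in[i]) > Math.abs(in[idx])) idx = i;
        }
        return idx;
    }

    // iamin: index of the min absolute value, argmin(abs(in))
    static int iamin(double[] in) {
        int idx = 0;
        for (int i = 1; i < in.length; i++) {
            if (Math.abs(in[i]) < Math.abs(in[idx])) idx = i;
        }
        return idx;
    }

    public static void main(String[] args) {
        double[] x = {-5.0, 2.0, 3.0};
        assert iamax(x) == 0; // |-5| is the largest absolute value
        assert iamin(x) == 1; // |2| is the smallest absolute value
    }
}
```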
SDVariable |
SDBaseOps.identity(SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDBaseOps.identity(String name,
SDVariable input)
Elementwise identity operation: out = x
|
SDVariable |
SDCNN.im2Col(SDVariable in,
Conv2DConfig config)
|
SDVariable |
SDCNN.im2Col(String name,
SDVariable in,
Conv2DConfig config)
im2col operation for use in 2D convolution operations.
|
SDVariable |
SDBaseOps.invertPermutation(SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
SDVariable |
SDBaseOps.invertPermutation(String name,
SDVariable input)
Compute the inverse permutation indices for a permutation operation
Example: if input is [2, 0, 1] then output is [1, 2, 0] The idea is that x.permute(input).permute(invertPermutation(input)) == x |
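The invertPermutation identity above (x.permute(input).permute(invertPermutation(input)) == x) follows from placing index i at position p[i] of the output. A plain-Java sketch (not the SameDiff API):

```java
import java.util.Arrays;

public class InvertPermutationSketch {
    // inv[p[i]] = i, so permuting by p and then by inv restores the original order
    static int[] invertPermutation(int[] p) {
        int[] inv = new int[p.length];
        for (int i = 0; i < p.length; i++) {
            inv[p[i]] = i;
        }
        return inv;
    }

    public static void main(String[] args) {
        assert Arrays.equals(invertPermutation(new int[]{2, 0, 1}), new int[]{1, 2, 0});
    }
}
```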
SDVariable |
SDMath.isFinite(SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isFinite(String name,
SDVariable x)
Is finite operation: elementwise isFinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isInfinite(String name,
SDVariable x)
Is infinite operation: elementwise isInfinite(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isMax(String name,
SDVariable x)
Is maximum operation: elementwise x == max(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNaN(String name,
SDVariable x)
Is Not a Number operation: elementwise isNaN(x)
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDMath.isNonDecreasing(SDVariable x)
Is the array non decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDMath.isNonDecreasing(String name,
SDVariable x)
Is the array non decreasing?
An array is non-decreasing if for every valid i, x[i] <= x[i+1]. |
SDVariable |
SDBaseOps.isNumericTensor(SDVariable x)
Is the given array a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDBaseOps.isNumericTensor(String name,
SDVariable x)
Is the given array a numeric tensor? In the current version of ND4J/SameDiff, this always returns true/1
|
SDVariable |
SDMath.isStrictlyIncreasing(SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
SDVariable |
SDMath.isStrictlyIncreasing(String name,
SDVariable x)
Is the array strictly increasing?
An array is strictly increasing if for every valid i, x[i] < x[i+1]. |
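The monotonicity checks above reduce to a single pairwise comparison over adjacent elements. A plain-Java sketch of both predicates (not the SameDiff API):

```java
public class MonotonicSketch {
    // Non-decreasing: x[i] <= x[i+1] for every valid i
    static boolean isNonDecreasing(double[] x) {
        for (int i = 0; i < x.length - 1; i++) {
            if (x[i] > x[i + 1]) return false;
        }
        return true;
    }

    // Strictly increasing: x[i] < x[i+1] for every valid i
    static boolean isStrictlyIncreasing(double[] x) {
        for (int i = 0; i < x.length - 1; i++) {
            if (x[i] >= x[i + 1]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        assert isNonDecreasing(new double[]{1, 1, 2});
        assert !isStrictlyIncreasing(new double[]{1, 1, 2});
    }
}
```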
SDVariable |
SDMath.jaccardDistance(SDVariable x,
SDVariable y,
int... dimensions)
Jaccard distance reduction operation.
|
SDVariable |
SDMath.jaccardDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Jaccard distance reduction operation.
|
SDVariable |
SDLoss.l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDLoss.l2Loss(String name,
SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.lastIndex(SDVariable in,
Condition condition,
int... dimensions) |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.lastIndex(String name,
SDVariable in,
Condition condition,
int... dimensions)
Last index reduction operation.
Returns a variable that contains the index of the last element that matches the specified condition (for each slice along the specified dimensions) |
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization without bias
y = gain * standardize(x)
|
SDVariable |
SDNN.layerNorm(SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias
|
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x)
|
SDVariable |
SDNN.layerNorm(String name,
SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions)
Apply Layer Normalization
y = gain * standardize(x) + bias
|
SDVariable |
SDNN.leakyRelu(SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0 out = alpha * x if x < 0.0 Alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyRelu(String name,
SDVariable x,
double alpha)
Element-wise leaky ReLU function:
out = x if x >= 0.0 out = alpha * x if x < 0.0 Alpha value is most commonly set to 0.01 |
SDVariable |
SDNN.leakyReluDerivative(String name,
SDVariable x,
double alpha)
Leaky ReLU derivative: dOut/dIn given input.
See SDNN.leakyRelu(String, SDVariable, double) |
SDVariable |
SDBitwise.leftShift(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.leftShift(String name,
SDVariable x,
SDVariable y)
Bitwise left shift operation.
|
SDVariable |
SDBitwise.leftShiftCyclic(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.leftShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise left cyclical shift operation.
|
SDVariable |
SDNN.linear(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
SDNN.linear(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
Linear layer operation: out = mmul(in,w) + bias
Note that bias array is optional |
SDVariable |
SDBaseOps.linspace(String name,
SDVariable from,
SDVariable to,
SDVariable length,
DataType dt)
Create a new 1d array with values evenly spaced between 'from' and 'to' (inclusive)
For example, linspace(from=3.0, to=4.0, length=3) will generate [3.0, 3.5, 4.0]
|
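The linspace semantics above can be sketched in plain Java: the i-th element is from + i * step, where step divides the interval into length-1 equal parts. This illustrates the operation only, not the SameDiff API:

```java
import java.util.Arrays;

public class LinspaceSketch {
    // length evenly spaced values from 'from' to 'to' inclusive (assumes length >= 2)
    static double[] linspace(double from, double to, int length) {
        double[] out = new double[length];
        double step = (to - from) / (length - 1);
        for (int i = 0; i < length; i++) {
            out[i] = from + i * step;
        }
        return out;
    }

    public static void main(String[] args) {
        assert Arrays.equals(linspace(3.0, 4.0, 3), new double[]{3.0, 3.5, 4.0});
    }
}
```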
SDVariable[] |
SDMath.listDiff(SDVariable x,
SDVariable y)
List diff operation computes the difference between two 1d arrays, and also returns the indices - i.e., the positions
at which the difference elements appear in the input X.
For inputs X and Y, listDiff returns everything in X but not in Y. For example, if X=[1,10,3,7,6] and Y=[10,6], then:
output 0 (difference) = [1,3,7]; output 1 (indices) = [0, 2, 3] |
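The listDiff behavior above (values of X absent from Y, plus their positions in X) can be sketched in plain Java; this illustrates the semantics only, not the SameDiff API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ListDiffSketch {
    // Returns {values, indices}: everything in x but not in y, with positions in x
    static int[][] listDiff(int[] x, int[] y) {
        Set<Integer> exclude = new HashSet<>();
        for (int v : y) exclude.add(v);
        List<Integer> vals = new ArrayList<>();
        List<Integer> idxs = new ArrayList<>();
        for (int i = 0; i < x.length; i++) {
            if (!exclude.contains(x[i])) {
                vals.add(x[i]);
                idxs.add(i);
            }
        }
        int[][] out = new int[2][vals.size()];
        for (int i = 0; i < vals.size(); i++) {
            out[0][i] = vals.get(i);
            out[1][i] = idxs.get(i);
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] r = listDiff(new int[]{1, 10, 3, 7, 6}, new int[]{10, 6});
        assert Arrays.equals(r[0], new int[]{1, 3, 7});
        assert Arrays.equals(r[1], new int[]{0, 2, 3});
    }
}
```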
SDVariable |
SDCNN.localResponseNormalization(SDVariable inputs,
LocalResponseNormalizationConfig lrnConfig)
|
SDVariable |
SDCNN.localResponseNormalization(String name,
SDVariable input,
LocalResponseNormalizationConfig lrnConfig)
2D convolution layer operation - local response normalization
|
SDVariable |
SDMath.log(SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(SDVariable in,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log(String name,
SDVariable x)
Element-wise logarithm function (base e - natural logarithm): out = log(x)
|
SDVariable |
SDMath.log(String name,
SDVariable in,
double base)
Element-wise logarithm function (with specified base): out = log_{base}(x)
|
SDVariable |
SDMath.log1p(SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.log1p(String name,
SDVariable x)
Elementwise natural logarithm function: out = log_e (1 + x)
|
SDVariable |
SDMath.logEntropy(SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDMath.logEntropy(String name,
SDVariable in,
int... dimensions)
Log entropy reduction: log(-sum(x * log(x)))
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Log poisson loss: a loss function used for training classifiers.
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.logPoissonFull(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Log poisson loss: a loss function used for training classifiers.
|
SDVariable |
SDNN.logSigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = log(sigmoid(in[i]))
|
SDVariable |
SDNN.logSoftmax(SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x)
Log softmax activation
|
SDVariable |
SDNN.logSoftmax(String name,
SDVariable x,
int dimension)
Log softmax activation
|
SDVariable |
SDMath.logSumExp(SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
|
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
boolean keepDims,
int... dimensions) |
SDVariable |
SDMath.logSumExp(String name,
SDVariable input,
int... dimensions)
Log-sum-exp reduction (optionally along dimension).
|
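Log-sum-exp is typically computed in the numerically stable shifted form, max(x) + log(sum exp(x - max(x))), so that large inputs do not overflow. A plain-Java sketch of the full-array reduction (not the SameDiff API):

```java
public class LogSumExpSketch {
    // Stable log-sum-exp: shift by the max before exponentiating
    static double logSumExp(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);
        double sum = 0;
        for (double v : x) sum += Math.exp(v - max);
        return max + Math.log(sum);
    }

    public static void main(String[] args) {
        // logSumExp([0, 0]) = log(e^0 + e^0) = log(2)
        assert Math.abs(logSumExp(new double[]{0, 0}) - Math.log(2)) < 1e-12;
    }
}
```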
LSTMCellOutputs |
SDRNN.lstmCell(SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
|
LSTMCellOutputs |
SDRNN.lstmCell(String baseName,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
The LSTM cell.
|
LSTMLayerOutputs |
SDRNN.lstmLayer(int maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
|
LSTMLayerOutputs |
SDRNN.lstmLayer(SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
|
LSTMLayerOutputs |
SDRNN.lstmLayer(String baseName,
int maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
|
LSTMLayerOutputs |
SDRNN.lstmLayer(String baseName,
SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration config)
The LSTM layer.
|
SDVariable |
SDBaseOps.lt(SDVariable x,
double y)
Less than operation: elementwise x < y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lt(SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
double y)
Less than operation: elementwise x < y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lt(String name,
SDVariable x,
SDVariable y)
Less than operation: elementwise x < y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lte(SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lte(SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
double y)
Less than or equals operation: elementwise x <= y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.lte(String name,
SDVariable x,
SDVariable y)
Less than or equal to operation: elementwise x <= y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.manhattanDistance(SDVariable x,
SDVariable y,
int... dimensions) |
SDVariable |
SDMath.manhattanDistance(String name,
SDVariable x,
SDVariable y,
int... dimensions)
Manhattan distance (l1 norm, l1 distance) reduction operation.
|
SDVariable |
SDBaseOps.matchCondition(SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchCondition(String name,
SDVariable in,
Condition condition)
Returns a boolean mask of equal shape to the input, where the condition is satisfied - value 1 where satisfied, 0 otherwise
|
SDVariable |
SDBaseOps.matchConditionCount(SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition)
Returns a count of the number of elements that satisfy the condition
|
SDVariable |
SDBaseOps.matchConditionCount(String name,
SDVariable in,
Condition condition,
boolean keepDim,
int... dimensions)
Returns a count of the number of elements that satisfy the condition (for each slice along the specified dimensions)
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDMath.matrixBandPart(String name,
SDVariable input,
SDVariable minLower,
SDVariable maxUpper)
Copy a tensor, setting everything outside a central band in each innermost matrix to zero.
|
SDVariable |
SDMath.matrixDeterminant(SDVariable in) |
SDVariable |
SDMath.matrixDeterminant(String name,
SDVariable in)
Matrix determinant op.
|
SDVariable |
SDMath.matrixInverse(SDVariable in) |
SDVariable |
SDMath.matrixInverse(String name,
SDVariable in)
Matrix inverse op.
|
SDVariable |
SDBaseOps.max(SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.max(SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.max(String name,
SDVariable x,
int... dimensions)
Max array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.max(String name,
SDVariable first,
SDVariable second)
Element-wise maximum operation: out[i] = max(first[i], second[i])
Supports broadcasting |
SDVariable |
SDCNN.maxPooling2d(SDVariable input,
Pooling2DConfig pooling2DConfig)
|
SDVariable |
SDCNN.maxPooling2d(String name,
SDVariable input,
Pooling2DConfig pooling2DConfig)
2D Convolution layer operation - max pooling 2d
|
SDVariable |
SDCNN.maxPooling3d(SDVariable input,
Pooling3DConfig pooling3DConfig)
|
SDVariable |
SDCNN.maxPooling3d(String name,
SDVariable input,
Pooling3DConfig pooling3DConfig)
3D convolution layer operation - max pooling 3d operation.
|
SDVariable[] |
SDNN.maxPoolWithArgmax(String[] names,
SDVariable x,
Pooling2DConfig pooling2DConfig)
Max pooling on the input and outputs both max values and indices
|
SDVariable |
SDBaseOps.mean(SDVariable x)
Full array mean reduction operation
|
SDVariable |
SDBaseOps.mean(SDVariable x,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
boolean keepDims,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.mean(String name,
SDVariable x,
int... dimension)
Mean (average) array reduction operation, optionally along specified dimensions
|
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of consecutive elements in the predictions and labels arrays. |
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
SDMath.mergeAdd(SDVariable... x)
Merge add function: merges an arbitrary number of equal shaped arrays using elementwise addition:
out = sum_i in[i]
|
SDVariable |
SDMath.mergeAdd(String name,
SDVariable... inputs)
Merge add function: merges an arbitrary number of equal shaped arrays using element-wise addition:
out = sum_i in[i]
|
SDVariable |
SDMath.mergeAvg(SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i]
|
SDVariable |
SDMath.mergeAvg(String name,
SDVariable... inputs)
Merge average function: merges an arbitrary number of equal shaped arrays using element-wise mean operation:
out = mean_i in[i]
|
SDVariable |
SDMath.mergeMax(SDVariable... x)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i]
|
SDVariable |
SDMath.mergeMax(String name,
SDVariable... inputs)
Merge max function: merges an arbitrary number of equal shaped arrays using element-wise maximum operation:
out = max_i in[i]
|
SDVariable[] |
SDMath.meshgrid(List<String> names,
boolean cartesian,
SDVariable... inputs) |
SDVariable[] |
SDMath.meshgrid(List<String> names,
SDVariable... inputs)
Broadcast the 1D input variables onto an n-dimensional grid.
The resulting variable can be used for example for evaluating functions at all locations on a grid. Example: |
SDVariable[] |
SDMath.meshgrid(SDVariable... inputs) |
SDVariable |
SDBaseOps.min(SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable x,
int... dimensions)
Minimum array reduction operation, optionally along specified dimensions.
|
SDVariable |
SDBaseOps.min(String name,
SDVariable first,
SDVariable second)
Element-wise minimum operation: out[i] = min(first[i], second[i])
Supports broadcasting |
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
|
SDVariable |
SDBaseOps.mmul(SDVariable x,
SDVariable y,
MMulTranspose transpose)
Matrix multiplication: out = mmul(x,y)
Supports specifying a MMulTranspose argument to perform operation such as mmul(a^T, b), etc. |
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y)
Matrix multiplication: out = mmul(x,y)
|
SDVariable |
SDBaseOps.mmul(String name,
SDVariable x,
SDVariable y,
MMulTranspose transpose)
Matrix multiplication: out = mmul(x,y)
Supports specifying a MMulTranspose argument to perform operation such as mmul(a^T, b), etc. |
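The mmul operation above is standard matrix multiplication, out[i][j] = sum_k x[i][k] * y[k][j]. A naive plain-Java sketch of the 2D case (not the SameDiff API, which also handles transposes via MMulTranspose):

```java
import java.util.Arrays;

public class MmulSketch {
    // Naive matrix multiplication: out[i][j] = sum_k x[i][k] * y[k][j]
    static double[][] mmul(double[][] x, double[][] y) {
        double[][] out = new double[x.length][y[0].length];
        for (int i = 0; i < x.length; i++) {
            for (int k = 0; k < y.length; k++) {
                for (int j = 0; j < y[0].length; j++) {
                    out[i][j] += x[i][k] * y[k][j];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] r = mmul(new double[][]{{1, 2}, {3, 4}}, new double[][]{{5, 6}, {7, 8}});
        assert Arrays.deepEquals(r, new double[][]{{19, 22}, {43, 50}});
    }
}
```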
SDVariable[] |
SDMath.moments(SDVariable input,
int... axes) |
SDVariable[] |
SDMath.moments(String[] name,
SDVariable input,
int... axes)
Calculate the mean and (population) variance for the input variable, for the specified axis
|
SDVariable |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
|
List<SDVariable> |
SDNN.multiHeadDotProductAttention(SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights)
This performs multi-headed dot product attention on the given timeseries input
|
SDVariable |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled)
This performs multi-headed dot product attention on the given timeseries input
|
List<SDVariable> |
SDNN.multiHeadDotProductAttention(String name,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights)
This performs multi-headed dot product attention on the given timeseries input
out = concat(head_1, head_2, ..., head_n) * Wo
head_i = dot_product_attention(Wq_i*q, Wk_i*k, Wv_i*v)
Optionally with normalization when calculating the attention for each head.
|
SDVariable |
SDMath.neg(SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDMath.neg(String name,
SDVariable x)
Elementwise negative operation: out = -x
|
SDVariable |
SDBaseOps.neq(SDVariable x,
double y)
Not equals operation: elementwise x != y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.neq(SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
double y)
Not equals operation: elementwise x != y
Returns an array with the same shape/size as the input, with values 1 where condition is satisfied, or value 0 otherwise |
SDVariable |
SDBaseOps.neq(String name,
SDVariable x,
SDVariable y)
Not equal to operation: elementwise x != y
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDImage.nonMaxSuppression(String name,
SDVariable boxes,
SDVariable scores,
SDVariable maxOutSize,
SDVariable iouThreshold,
SDVariable scoreThreshold)
Greedily selects a subset of bounding boxes in descending order of score
|
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm1(String name,
SDVariable x,
int... dimensions)
Norm1 (L1 norm) reduction operation: The output contains the L1 norm for each tensor/subset along the specified dimensions:
out = sum_i abs(x[i]) |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.norm2(String name,
SDVariable x,
int... dimensions)
Norm2 (L2 norm) reduction operation: The output contains the L2 norm for each tensor/subset along the specified dimensions:
out = sqrt(sum_i x[i]^2) |
SDVariable |
SDRandom.normal(double mean,
double stddev,
SDVariable shape) |
SDVariable |
SDRandom.normal(String name,
double mean,
double stddev,
SDVariable shape)
Generate a new random SDVariable, where values are randomly sampled according to a Gaussian (normal) distribution,
N(mean, stdev)
See SDRandom.normal(String, double, double, long...) for the equivalent function where the shape is
specified as a long[] instead |
SDVariable[] |
SDMath.normalizeMoments(SDVariable counts,
SDVariable means,
SDVariable variances,
double shift) |
SDVariable[] |
SDMath.normalizeMoments(String[] name,
SDVariable counts,
SDVariable means,
SDVariable variances,
double shift)
Calculate the mean and variance from the sufficient statistics
|
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions:
out = max(abs(x[i])) Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.normmax(String name,
SDVariable x,
int... dimensions)
Max norm (infinity norm) reduction operation: The output contains the max norm for each tensor/subset along the
specified dimensions
|
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off) |
SDVariable |
SDBaseOps.oneHot(SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType) |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth)
Convert the array to a one-hot array with values 0 and 1 for each entry
If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = 1 with other values being set to 0 |
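The one-hot expansion described above can be sketched for the simplest case (1D integer indices, default on=1/off=0) in plain Java; this is illustrative only, not the SameDiff API:

```java
// Plain-Java illustration of one-hot expansion for a 1D index array:
// out[i][indices[i]] = 1, all other entries 0.
public class OneHot {
    public static double[][] oneHot(int[] indices, int depth) {
        double[][] out = new double[indices.length][depth];
        for (int i = 0; i < indices.length; i++) {
            out[i][indices[i]] = 1.0;  // remaining entries stay 0.0
        }
        return out;
    }
}
```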
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off)
Convert the array to a one-hot array with values
on and off for each entry. If input has shape [ a, ..., n] then output has shape [ a, ..., n, depth], with out[i, ..., j, in[i,...,j]] = on with other values being set to off |
SDVariable |
SDBaseOps.oneHot(String name,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType)
As per
SDBaseOps.oneHot(String, SDVariable, int, int, double, double) but allows configuring the output datatype |
SDVariable |
SDBaseOps.onesLike(SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input)
Return a variable of all 1s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.onesLike(String name,
SDVariable input,
DataType dataType)
As per
SDBaseOps.onesLike(String, SDVariable) but the output datatype may be specified |
SDVariable |
SDBitwise.or(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.or(SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.or(String name,
SDVariable x,
SDVariable y)
Bitwise OR operation.
|
SDVariable |
SDMath.or(String name,
SDVariable x,
SDVariable y)
Boolean OR operation: elementwise (x != 0) || (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDNN.pad(SDVariable input,
int[][] padding,
double constant)
|
SDVariable |
SDNN.pad(SDVariable input,
SDVariable padding,
double constant)
Perform padding on the given array, where padded values are the specified constant.
Example: Input array: [1, 2] [3, 4] Padding array: [2, 0] [1, 1] Constant = 0 Result: [0, 0, 0, 0] [0, 0, 0, 0] [0, 1, 2, 0] [0, 3, 4, 0] |
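The constant-mode padding example above can be reproduced with a short plain-Java sketch (illustrative only; `padding[d]` holds the number of values prepended/appended along dimension d, as in the example):

```java
// Constant-mode padding of a 2D array, mirroring the pad() example above.
public class Pad {
    public static double[][] pad(double[][] in, int[][] padding, double constant) {
        int rows = in.length + padding[0][0] + padding[0][1];
        int cols = in[0].length + padding[1][0] + padding[1][1];
        double[][] out = new double[rows][cols];
        for (double[] row : out) java.util.Arrays.fill(row, constant);
        // Copy the input into the interior, offset by the leading pad amounts
        for (int i = 0; i < in.length; i++)
            for (int j = 0; j < in[0].length; j++)
                out[i + padding[0][0]][j + padding[1][0]] = in[i][j];
        return out;
    }
}
```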
SDVariable |
SDNN.pad(String outputName,
SDVariable input,
SDVariable padding,
Pad.Mode mode,
double constant)
As per
SDNN.pad(SDVariable, SDVariable, double) but also supports multiple Pad.Mode modes. Example: Input array: [1, 2] [3, 4] [5, 6] Padding array: [2, 0] [1, 1] Constant = 0 Result: CONSTANT mode [0, 0, 0, 0] [0, 0, 0, 0] [0, 1, 2, 0] [0, 3, 4, 0] [0, 5, 6, 0] Result: SYMMETRIC mode [3, 3, 4, 4] [1, 1, 2, 2] [1, 1, 2, 2] [3, 3, 4, 4] [5, 5, 6, 6] Result: REFLECT: [6, 5, 6, 0] [2, 3, 4, 3] [2, 1, 2, 1] [4, 3, 4, 3] [6, 5, 6, 5] |
SDVariable |
SDBaseOps.parallel_stack(SDVariable[] values) |
SDVariable |
SDBaseOps.parallel_stack(String name,
SDVariable[] values) |
SDVariable |
SDBaseOps.permute(SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
int... dimensions)
Array permutation operation: permute the dimensions according to the specified permutation indices.
Example: if input has shape [a,b,c] and dimensions = [2,0,1] the output has shape [c,a,b] |
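The shape rule in the permute example above (output dimension d takes the size of input dimension dimensions[d]) can be checked with a small plain-Java sketch (illustrative only, not the SameDiff API):

```java
// Shape permutation rule: outShape[d] = shape[dims[d]].
public class Permute {
    public static int[] permuteShape(int[] shape, int[] dims) {
        int[] out = new int[shape.length];
        for (int d = 0; d < dims.length; d++) out[d] = shape[dims[d]];
        return out;
    }
}
```

So for shape [a,b,c] = [2,3,4] and dimensions = [2,0,1], the result is [c,a,b] = [4,2,3], matching the example.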
SDVariable |
SDBaseOps.permute(String name,
SDVariable x,
SDVariable dimensions)
As per
SDBaseOps.permute(String, SDVariable, int...) but with SDVariable permute dimension |
SDVariable |
SDMath.polygamma(String name,
SDVariable n,
SDVariable x)
Polygamma function
|
SDVariable |
SDMath.pow(SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
double value)
Element-wise power function: out = x^value
|
SDVariable |
SDMath.pow(String name,
SDVariable x,
SDVariable y)
Element-wise (broadcastable) power function: out = x[i]^y[i]
|
SDVariable |
SDNN.prelu(SDVariable input,
SDVariable alpha,
int... sharedAxes)
|
SDVariable |
SDNN.prelu(String name,
SDVariable input,
SDVariable alpha,
int... sharedAxes)
PReLU (Parameterized Rectified Linear Unit) operation.
|
SDVariable |
SDBaseOps.prod(SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.prod(String name,
SDVariable x,
int... dimensions)
Product array reduction operation, optionally along specified dimensions
|
SDVariable |
SDImage.randomCrop(String name,
SDVariable input,
SDVariable shape)
Randomly crops image
|
SDVariable |
SDBaseOps.range(String name,
SDVariable from,
SDVariable to,
SDVariable step,
DataType dataType)
As per
SDBaseOps.range(String, double, double, double, DataType) but with SDVariable arguments |
SDVariable |
SDBaseOps.rank(SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.rank(String name,
SDVariable in)
Returns the rank (number of dimensions, i.e., length(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDMath.reciprocal(SDVariable a)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDMath.reciprocal(String name,
SDVariable a)
Element-wise reciprocal (inverse) function: out[i] = 1 / in[i]
|
SDVariable |
SDNN.relu(SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu(String name,
SDVariable x,
double cutoff)
Element-wise rectified linear function with specified cutoff:
out[i] = in[i] if in[i] >= cutoff out[i] = 0 otherwise |
SDVariable |
SDNN.relu6(SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.relu6(String name,
SDVariable x,
double cutoff)
Element-wise "rectified linear 6" function with specified cutoff:
out[i] = min(max(in, cutoff), 6) |
SDVariable |
SDNN.reluLayer(SDVariable input,
SDVariable weights,
SDVariable bias) |
SDVariable |
SDNN.reluLayer(String name,
SDVariable input,
SDVariable weights,
SDVariable bias)
ReLU (Rectified Linear Unit) layer operation: out = relu(mmul(in,w) + bias)
Note that bias array is optional |
SDVariable |
SDBaseOps.repeat(SDVariable df,
int axis) |
SDVariable |
SDBaseOps.repeat(String name,
SDVariable df,
int axis) |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
Number value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
Number value,
Condition condition)
Element-wise replace where condition:
out[i] = value if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.replaceWhere(String name,
SDVariable update,
SDVariable from,
Condition condition)
Element-wise replace where condition:
out[i] = from[i] if condition(update[i]) is satisfied, or out[i] = update[i] if condition(update[i]) is NOT satisfied |
SDVariable |
SDBaseOps.reshape(SDVariable x,
int... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (dynamic) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
int... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
long... shape)
Reshape the input variable to the specified (fixed) shape.
|
SDVariable |
SDBaseOps.reshape(String name,
SDVariable x,
SDVariable shape)
Reshape the input variable to the specified (dynamic) shape.
|
SDVariable |
SDBaseOps.reverse(SDVariable x,
int... dimensions) |
SDVariable |
SDBaseOps.reverse(String name,
SDVariable x,
int... dimensions)
Reverse the values of an array for the specified dimensions
If input is: [ 1, 2, 3] [ 4, 5, 6] then reverse(in, 1): [3, 2, 1] [6, 5, 4] reverse(in, 0): [4, 5, 6] [1, 2, 3] |
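For a single 1D slice, the reversal is straightforward; a plain-Java sketch of what happens to each slice along the reversed dimension (illustrative only):

```java
// Reverse a 1D slice: out[i] = x[n - 1 - i].
public class Reverse {
    public static double[] reverse(double[] x) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = x[x.length - 1 - i];
        return out;
    }
}
```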
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths) |
SDVariable |
SDBaseOps.reverseSequence(SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim) |
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths) |
SDVariable |
SDBaseOps.reverseSequence(String name,
SDVariable x,
SDVariable seq_lengths,
int seqDim,
int batchDim)
Reverse sequence op: for each slice along dimension seqDimension, the first seqLength values are reversed
|
SDVariable |
SDBitwise.rightShift(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.rightShift(String name,
SDVariable x,
SDVariable y)
Bitwise right shift operation.
|
SDVariable |
SDBitwise.rightShiftCyclic(SDVariable x,
SDVariable y)
|
SDVariable |
SDBitwise.rightShiftCyclic(String name,
SDVariable x,
SDVariable y)
Bitwise right cyclical shift operation.
|
SDVariable |
SDMath.roll(String name,
SDVariable input,
SDVariable shift)
Rolls the elements of input
|
SDVariable |
SDMath.round(SDVariable x)
Elementwise round function: out = round(x).
|
SDVariable |
SDMath.round(String name,
SDVariable x)
Element-wise round function: out = round(x).
|
SDVariable |
SDMath.rsqrt(SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDMath.rsqrt(String name,
SDVariable x)
Element-wise reciprocal (inverse) of square root: out = 1.0 / sqrt(x)
|
SDVariable |
SDBaseOps.scalarFloorMod(SDVariable in,
Number value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
|
SDVariable |
SDBaseOps.scalarFloorMod(String name,
SDVariable in,
Number value)
Element-wise scalar floor modulus operation: out = floorMod(in, value).
|
SDVariable |
SDBaseOps.scalarMax(SDVariable in,
Number value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMax(String name,
SDVariable in,
Number value)
Element-wise scalar maximum operation: out = max(in, value)
|
SDVariable |
SDBaseOps.scalarMin(SDVariable in,
Number value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarMin(String name,
SDVariable in,
Number value)
Element-wise scalar minimum operation: out = min(in, value)
|
SDVariable |
SDBaseOps.scalarSet(SDVariable in,
Number set)
Return an array with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scalarSet(String name,
SDVariable in,
Number set)
Return a variable with equal shape to the input, but all elements set to value 'set'
|
SDVariable |
SDBaseOps.scatterAdd(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterAdd(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter addition operation.
If indices is rank 0 (a scalar), then out[index, ...] += updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] += updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] += updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
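The rank-1 indices case of scatter addition can be sketched in plain Java, including the accumulation behavior when multiple indices refer to the same location (illustrative only, not the SameDiff API):

```java
// Rank-1 indices case of scatter addition: out[indices[i]] += updates[i].
// Repeated indices accumulate, matching the note above.
public class ScatterAdd {
    public static double[] scatterAdd(double[] ref, int[] indices, double[] updates) {
        double[] out = ref.clone();
        for (int i = 0; i < indices.length; i++) out[indices[i]] += updates[i];
        return out;
    }
}
```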
SDVariable |
SDBaseOps.scatterDiv(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterDiv(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter division operation.
If indices is rank 0 (a scalar), then out[index, ...] /= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] /= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] /= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMax(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMax(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter max operation.
If indices is rank 0 (a scalar), then out[index, ...] = max(updates[...], in[index,...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = max(updates[i,...], in[indices[i],...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = max(updates[i, ..., k, ...], in[indices[i], ..., indices[k], ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMin(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMin(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter min operation.
If indices is rank 0 (a scalar), then out[index, ...] = min(updates[...], in[index,...]) If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = min(updates[i,...], in[indices[i],...]) If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = min(updates[i, ..., k, ...], in[indices[i], ..., indices[k], ...]) Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterMul(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterMul(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter multiplication operation.
If indices is rank 0 (a scalar), then out[index, ...] *= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] *= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] *= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterSub(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterSub(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter subtraction operation.
If indices is rank 0 (a scalar), then out[index, ...] -= updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] -= updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] -= updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the contributions from each are handled correctly. |
SDVariable |
SDBaseOps.scatterUpdate(SDVariable ref,
SDVariable indices,
SDVariable updates) |
SDVariable |
SDBaseOps.scatterUpdate(String name,
SDVariable ref,
SDVariable indices,
SDVariable updates)
Scatter update operation.
If indices is rank 0 (a scalar), then out[index, ...] = updates[...] If indices is rank 1 (a vector), then for each position i, out[indices[i], ...] = updates[i, ...] If indices is rank 2+, then for each position (i,...,k), out[indices[i], ..., indices[k], ...] = updates[i, ..., k, ...] Note that if multiple indices refer to the same location, the output at those locations is undefined - different updates may occur in different orders |
SDVariable |
SDCNN.sconv2d(SDVariable[] inputs,
Conv2DConfig conv2DConfig)
|
SDVariable |
SDCNN.sconv2d(String name,
SDVariable[] inputs,
Conv2DConfig conv2DConfig)
Separable 2D convolution operation with/without optional bias
|
SDVariable |
SDBaseOps.segmentMax(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMax(String name,
SDVariable data,
SDVariable segmentIds)
Segment max operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [6, 9, 8] = [max(3,6), max(1,4,9), max(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentMean(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMean(String name,
SDVariable data,
SDVariable segmentIds)
Segment mean operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [4.5, 4.666, 5] = [mean(3,6), mean(1,4,9), mean(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentMin(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentMin(String name,
SDVariable data,
SDVariable segmentIds)
Segment min operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [3, 1, 2] = [min(3,6), min(1,4,9), min(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentProd(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentProd(String name,
SDVariable data,
SDVariable segmentIds)
Segment product operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [18, 36, 16] = [prod(3,6), prod(1,4,9), prod(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
SDVariable |
SDBaseOps.segmentSum(SDVariable data,
SDVariable segmentIds) |
SDVariable |
SDBaseOps.segmentSum(String name,
SDVariable data,
SDVariable segmentIds)
Segment sum operation.
If data = [3, 6, 1, 4, 9, 2, 8] segmentIds = [0, 0, 1, 1, 1, 2, 2] then output = [9, 14, 10] = [sum(3,6), sum(1,4,9), sum(2,8)] Note that the segment IDs must be sorted from smallest to largest segment. |
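The segment reductions above (max, mean, min, prod, sum) all follow the same pattern: reduce the data values that share a segment ID. A plain-Java sketch of the sum case, using the worked example from the description (illustrative only; it assumes the sorted segment IDs the docs require):

```java
// Segment sum with sorted segment IDs: out[segmentIds[i]] += data[i].
public class SegmentSum {
    public static double[] segmentSum(double[] data, int[] segmentIds) {
        // IDs are sorted, so the last ID gives the number of segments
        int numSegments = segmentIds[segmentIds.length - 1] + 1;
        double[] out = new double[numSegments];
        for (int i = 0; i < data.length; i++) out[segmentIds[i]] += data[i];
        return out;
    }
}
```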
SDVariable |
SDNN.selu(SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i]) - 1) if in[i] <= 0. Uses default scale and alpha values. |
SDVariable |
SDNN.selu(String name,
SDVariable x)
Element-wise SELU function - Scaled Exponential Linear Unit: see Self-Normalizing Neural Networks
out[i] = scale * in[i] if in[i] > 0, or scale * alpha * (exp(in[i]) - 1) if in[i] <= 0. Uses default scale and alpha values. |
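A plain-Java sketch of SELU using the fixed-point constants from the Self-Normalizing Neural Networks paper; it is assumed (not confirmed by this index) that these match the library's default scale and alpha values:

```java
// SELU: scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise.
// Constants are from the SNN paper; assumed to match the library defaults.
public class Selu {
    static final double SCALE = 1.0507009873554805;
    static final double ALPHA = 1.6732632423543772;

    public static double selu(double x) {
        return x > 0 ? SCALE * x : SCALE * ALPHA * Math.expm1(x);  // expm1 = exp(x) - 1
    }
}
```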
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
Conv2DConfig config)
|
SDVariable |
SDCNN.separableConv2d(String name,
SDVariable layerInput,
SDVariable depthWeights,
SDVariable pointWeights,
SDVariable bias,
Conv2DConfig config)
Separable 2D convolution operation with optional bias
|
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(SDVariable lengths,
SDVariable maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
int maxLen,
DataType dataType) |
SDVariable |
SDBaseOps.sequenceMask(String name,
SDVariable lengths,
SDVariable maxLen,
DataType dataType)
Generate a sequence mask (with values 0 or 1) based on the specified lengths
Specifically, out[i, ..., k, j] = (j < lengths[i, ..., k] ? 1.0 : 0.0) |
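For 1D lengths the mask rule above reduces to a 2D array; a plain-Java sketch (illustrative only, not the SameDiff API):

```java
// 1D lengths -> 2D mask: out[i][j] = 1 if j < lengths[i], else 0.
public class SequenceMask {
    public static int[][] sequenceMask(int[] lengths, int maxLen) {
        int[][] out = new int[lengths.length][maxLen];
        for (int i = 0; i < lengths.length; i++)
            for (int j = 0; j < maxLen; j++)
                out[i][j] = j < lengths[i] ? 1 : 0;
        return out;
    }
}
```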
SDVariable |
SDMath.setDiag(SDVariable in,
SDVariable diag) |
SDVariable |
SDMath.setDiag(String name,
SDVariable in,
SDVariable diag)
Set the diagonal value to the specified values
If input is [ a, b, c] [ d, e, f] [ g, h, i] and diag = [ 1, 2, 3] then output is [ 1, b, c] [ d, 2, f] [ g, h, 3] |
SDVariable |
SDMath.shannonEntropy(SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDMath.shannonEntropy(String name,
SDVariable in,
int... dimensions)
Shannon Entropy reduction: -sum(x * log2(x))
|
SDVariable |
SDBaseOps.shape(SDVariable input)
Returns the shape of the specified SDVariable as a 1D SDVariable
|
SDVariable |
SDBaseOps.shape(String name,
SDVariable input)
Returns the shape of the specified SDVariable as a 1D SDVariable
|
SDVariable |
SDNN.sigmoid(SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDNN.sigmoid(String name,
SDVariable x)
Element-wise sigmoid function: out[i] = 1.0/(1+exp(-in[i]))
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function on the input logits (input "pre-sigmoid predictions")
and implements the binary cross entropy loss function.
|
SDVariable |
SDNN.sigmoidDerivative(SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDNN.sigmoidDerivative(String name,
SDVariable x,
SDVariable wrt)
Element-wise sigmoid function derivative: dL/dIn given input and dL/dOut
|
SDVariable |
SDMath.sign(SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sign(String name,
SDVariable x)
Element-wise sign (signum) function:
out = -1 if in < 0 out = 0 if in = 0 out = 1 if in > 0 |
SDVariable |
SDMath.sin(SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sin(String name,
SDVariable x)
Elementwise sine operation: out = sin(x)
|
SDVariable |
SDMath.sinh(SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDMath.sinh(String name,
SDVariable x)
Elementwise sinh (hyperbolic sine) operation: out = sinh(x)
|
SDVariable |
SDBaseOps.size(SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.size(String name,
SDVariable in)
Returns the size (number of elements, i.e., prod(shape)) of the specified SDVariable as a 0D scalar variable
|
SDVariable |
SDBaseOps.sizeAt(SDVariable in,
int dimension) |
SDVariable |
SDBaseOps.sizeAt(String name,
SDVariable in,
int dimension)
Returns a rank 0 (scalar) variable for the size of the specified dimension.
|
SDVariable |
SDBaseOps.slice(SDVariable input,
int[] begin,
int[] size) |
SDVariable |
SDBaseOps.slice(SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
int[] begin,
int[] size)
Get a subset of the specified input, by specifying the first element and the size of the array.
For example, if input is: [a, b, c] [d, e, f] then slice(input, begin=[0,1], size=[2,1]) will return: [b] [e] Note that for each dimension i, begin[i] + size[i] <= input.size(i) |
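The 2D case of the slice example above can be sketched in plain Java (illustrative only; the real op works on arbitrary-rank INDArrays):

```java
// 2D slice: take size[d] elements starting at begin[d] along each dimension.
public class Slice {
    public static double[][] slice(double[][] in, int[] begin, int[] size) {
        double[][] out = new double[size[0]][size[1]];
        for (int i = 0; i < size[0]; i++)
            for (int j = 0; j < size[1]; j++)
                out[i][j] = in[begin[0] + i][begin[1] + j];
        return out;
    }
}
```

With input [[1,2,3],[4,5,6]], begin=[0,1], size=[2,1] this yields [[2],[5]], matching the [b] [e] example.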
SDVariable |
SDBaseOps.slice(String name,
SDVariable input,
SDVariable begin,
SDVariable size) |
SDVariable |
SDNN.softmax(SDVariable x)
Softmax activation on dimension 1.
|
SDVariable |
SDNN.softmax(SDVariable x,
int dimension)
Softmax activation
|
SDVariable |
SDNN.softmax(String name,
SDVariable x)
Softmax activation on dimension 1.
|
SDVariable |
SDNN.softmax(String name,
SDVariable x,
int dimension)
Softmax activation
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable label,
SDVariable predictions)
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable label,
SDVariable predictions,
LossReduce lossReduce)
|
SDVariable |
SDLoss.softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits) If LossReduce.NONE is used, the output has shape [numExamples] for [numExamples, numClasses] predictions/labels;
otherwise, the output is a scalar. |
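A plain-Java sketch of the per-example computation (softmax followed by -sum_c label[c] * log(p[c])); this illustrates the formula only and is not the SameDiff implementation, which also handles weights and label smoothing:

```java
// Softmax + multi-class cross entropy for a single example.
public class SoftmaxXent {
    public static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);  // subtract max for stability
        double sum = 0;
        double[] p = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] - max);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    // loss = -sum_c label[c] * log(p[c]) with p = softmax(logits)
    public static double crossEntropy(double[] oneHotLabel, double[] logits) {
        double[] p = softmax(logits);
        double loss = 0;
        for (int c = 0; c < p.length; c++) loss -= oneHotLabel[c] * Math.log(p[c]);
        return loss;
    }
}
```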
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt) |
SDVariable |
SDNN.softmaxDerivative(String name,
SDVariable x,
SDVariable wrt,
Integer dimension) |
SDVariable |
SDNN.softplus(SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softplus(String name,
SDVariable x)
Element-wise softplus function: out = log(exp(x) + 1)
|
SDVariable |
SDNN.softsign(SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsign(String name,
SDVariable x)
Element-wise softsign function: out = x / (abs(x) + 1)
|
SDVariable |
SDNN.softsignDerivative(SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function
SDNN.softsign(SDVariable) |
SDVariable |
SDNN.softsignDerivative(String name,
SDVariable x)
Element-wise derivative (dOut/dIn) of the softsign function
SDNN.softsign(SDVariable) |
SDVariable |
SDCNN.spaceToBatch(SDVariable x,
int[] blocks,
int[][] padding) |
SDVariable |
SDCNN.spaceToBatch(String name,
SDVariable x,
int[] blocks,
int[][] padding)
Convolution 2d layer space to batch operation on 4d input.
|
SDVariable |
SDCNN.spaceToDepth(SDVariable x,
int blockSize,
String dataFormat) |
SDVariable |
SDCNN.spaceToDepth(String name,
SDVariable x,
int blockSize,
String dataFormat)
Convolution 2d layer space to depth operation on 4d input.
Increases input channels (and reduces spatial dimensions) by rearranging data into a larger channels dimension. Example: if input has shape [mb, 2, 4, 4] and block size is 2, then output shape is [mb, 2*(2*2), 4/2, 4/2] = [mb, 8, 2, 2] |
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels)
|
SDVariable |
SDLoss.sparseSoftmaxCrossEntropy(String name,
SDVariable logits,
SDVariable labels)
As per
SDLoss.softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce) but the labels variable
is represented as an integer array instead of the equivalent one-hot array; i.e., if logits are rank N, then labels have rank N-1 |
SDVariable |
SDMath.sqrt(SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.sqrt(String name,
SDVariable x)
Element-wise square root function: out = sqrt(x)
|
SDVariable |
SDMath.square(SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDMath.square(String name,
SDVariable x)
Element-wise square function: out = x^2
|
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, boolean, int...) |
SDVariable |
SDBaseOps.squaredNorm(SDVariable x,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, int...) |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, boolean, int...) |
SDVariable |
SDBaseOps.squaredNorm(String name,
SDVariable x,
int... dimensions)
Squared L2 norm: see
SDBaseOps.norm2(String, SDVariable, int...) |
SDVariable |
SDBaseOps.squeeze(SDVariable x,
int axis) |
SDVariable |
SDBaseOps.squeeze(String name,
SDVariable x,
int axis)
Remove a single dimension of size 1.
|
SRULayerOutputs |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights weights)
|
SRULayerOutputs |
SDRNN.sru(SDVariable x,
SDVariable initialC,
SRUWeights weights)
|
SRULayerOutputs |
SDRNN.sru(String baseName,
SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights weights)
The SRU layer.
|
SRULayerOutputs |
SDRNN.sru(String baseName,
SDVariable x,
SDVariable initialC,
SRUWeights weights)
|
SRUCellOutputs |
SDRNN.sruCell(SDVariable x,
SDVariable cLast,
SRUWeights weights)
|
SRUCellOutputs |
SDRNN.sruCell(String baseName,
SDVariable x,
SDVariable cLast,
SRUWeights weights)
The SRU cell.
|
SDVariable |
SDBaseOps.stack(int axis,
SDVariable... values) |
SDVariable |
SDBaseOps.stack(String name,
int axis,
SDVariable... values)
Stack a set of N SDVariables of rank X into one rank X+1 variable.
|
SDVariable |
SDBaseOps.standardDeviation(SDVariable x,
boolean biasCorrected,
int... dimensions) |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.standardDeviation(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Standard deviation array reduction operation, optionally along specified dimensions
|
SDVariable |
SDMath.standardize(SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.standardize(String name,
SDVariable x,
int... dimensions)
Standardize input variable along given axis
|
SDVariable |
SDMath.step(SDVariable in,
double cutoff)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDMath.step(String name,
SDVariable in,
double cutoff)
Elementwise step function:
out(x) = 1 if x >= cutoff out(x) = 0 otherwise |
SDVariable |
SDBaseOps.stridedSlice(SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable input,
long[] begin,
long[] end,
long[] strides) |
SDVariable |
SDBaseOps.stridedSlice(SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable input,
int[] begin,
int[] end,
int[] strides) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable input,
long[] begin,
long[] end,
long[] strides)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
For example, if input is: [a, b, c] [d, e, f] [g, h, i] then stridedSlice(input, begin=[0,1], end=[3,3], strides=[2,1]) will return: [b, c] [h, i] |
SDVariable |
SDBaseOps.stridedSlice(String name,
SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask)
Get a subset of the specified input, by specifying the first element, last element, and the strides.
Operates as described in SDBaseOps.stridedSlice(SDVariable, long[], long[], long[]) with some extra mask arrays
as described below. |
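The basic begin/end/strides semantics (without the mask arguments) can be sketched in plain Java for the rank-2 case, with `end` exclusive (an illustrative sketch, not the ND4J implementation):

```java
public class StridedSliceDemo {
    // 2-D strided slice: rows begin[0]..end[0]-1 step strides[0],
    // columns begin[1]..end[1]-1 step strides[1] (end is exclusive).
    public static double[][] stridedSlice(double[][] in, int[] begin, int[] end, int[] strides) {
        int rows = 0, cols = 0;
        for (int r = begin[0]; r < end[0]; r += strides[0]) rows++;
        for (int c = begin[1]; c < end[1]; c += strides[1]) cols++;
        double[][] out = new double[rows][cols];
        int ri = 0;
        for (int r = begin[0]; r < end[0]; r += strides[0], ri++) {
            int ci = 0;
            for (int c = begin[1]; c < end[1]; c += strides[1], ci++) {
                out[ri][ci] = in[r][c];
            }
        }
        return out;
    }
}
```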
SDVariable |
SDBaseOps.sum(SDVariable x,
boolean keepDims,
int... dimensions) |
SDVariable |
SDBaseOps.sum(SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions
|
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
boolean keepDims,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions.
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.sum(String name,
SDVariable x,
int... dimensions)
Sum array reduction operation, optionally along specified dimensions
|
SDVariable |
SDNN.swish(SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
SDVariable |
SDNN.swish(String name,
SDVariable x)
Element-wise "swish" function: out = x * sigmoid(b*x) with b=1.0
See: https://arxiv.org/abs/1710.05941 |
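With b = 1.0 the swish formula reduces to x * sigmoid(x); a scalar plain-Java sketch (illustrative only):

```java
public class SwishDemo {
    // swish(x) = x * sigmoid(x), i.e. the documented formula with b = 1.0.
    public static double swish(double x) {
        double sigmoid = 1.0 / (1.0 + Math.exp(-x));
        return x * sigmoid;
    }
}
```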
SDVariable |
SDMath.tan(SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDMath.tan(String name,
SDVariable x)
Elementwise tangent operation: out = tan(x)
|
SDVariable |
SDNN.tanh(SDVariable x) |
SDVariable |
SDMath.tanh(SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDNN.tanh(String name,
SDVariable x) |
SDVariable |
SDMath.tanh(String name,
SDVariable x)
Elementwise tanh (hyperbolic tangent) operation: out = tanh(x)
|
SDVariable |
SDBaseOps.tensorMmul(SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
SDBaseOps.tensorMmul(String name,
SDVariable x,
SDVariable y,
int[][] dimensions) |
SDVariable |
SDBaseOps.tile(SDVariable x,
int... repeat) |
SDVariable |
SDBaseOps.tile(SDVariable x,
SDVariable repeat) |
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
int... repeat)
Repeat (tile) the input tensor the specified number of times.
For example, if input is [1, 2] [3, 4] and repeat is [2, 3] then output is [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] [1, 2, 1, 2, 1, 2] [3, 4, 3, 4, 3, 4] |
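The tiling semantics for the rank-2 case can be sketched in plain Java (illustrative only; modular indexing reproduces the repeated copies):

```java
public class TileDemo {
    // Tile a 2-D array: repeat[0] copies along rows, repeat[1] along columns.
    public static double[][] tile(double[][] in, int[] repeat) {
        int r = in.length, c = in[0].length;
        double[][] out = new double[r * repeat[0]][c * repeat[1]];
        for (int i = 0; i < out.length; i++)
            for (int j = 0; j < out[0].length; j++)
                out[i][j] = in[i % r][j % c];   // wrap back into the source tile
        return out;
    }
}
```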
SDVariable |
SDBaseOps.tile(String name,
SDVariable x,
SDVariable repeat) |
SDVariable |
SDBitwise.toggleBits(String name,
SDVariable x)
Flip bits
|
SDVariable |
SDMath.trace(SDVariable in) |
SDVariable |
SDMath.trace(String name,
SDVariable in)
Matrix trace operation
For rank 2 matrices, the output is a scalar with the trace - i.e., sum of the main diagonal.
For higher rank inputs, output[a,b,c] = trace(in[a,b,c,:,:]) |
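For the rank-2 case the trace is just the sum of the main diagonal, as in this plain-Java sketch (illustrative only):

```java
public class TraceDemo {
    // Trace of a rank-2 matrix: sum of the main diagonal entries.
    public static double trace(double[][] in) {
        double sum = 0.0;
        int n = Math.min(in.length, in[0].length);
        for (int i = 0; i < n; i++) sum += in[i][i];
        return sum;
    }
}
```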
SDVariable |
SDBaseOps.transpose(SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDBaseOps.transpose(String name,
SDVariable x)
Matrix transpose operation: If input has shape [a,b] output has shape [b,a]
|
SDVariable |
SDRandom.uniform(double min,
double max,
SDVariable shape) |
SDVariable |
SDRandom.uniform(double min,
double max,
SDVariable shape,
DataType dataType) |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
SDVariable shape)
As per
SDRandom.uniform(double, double, SDVariable, DataType) but with Float32 output |
SDVariable |
SDRandom.uniform(String name,
double min,
double max,
SDVariable shape,
DataType dataType)
Generate a new random SDVariable, where values are randomly sampled according to a uniform distribution,
U(min,max).
|
SDVariable |
SDBaseOps.unsortedSegmentMax(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMax(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment max operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMean(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMean(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment mean operation.
|
SDVariable |
SDBaseOps.unsortedSegmentMin(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentMin(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment min operation.
|
SDVariable |
SDBaseOps.unsortedSegmentProd(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentProd(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment product operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentSqrtN(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sqrtN operation.
|
SDVariable |
SDBaseOps.unsortedSegmentSum(SDVariable data,
SDVariable segmentIds,
int numSegments)
|
SDVariable |
SDBaseOps.unsortedSegmentSum(String name,
SDVariable data,
SDVariable segmentIds,
int numSegments)
Unsorted segment sum operation.
|
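Unsorted segment reductions group elements by their segment id and reduce each group; for the sum case, the semantics can be sketched in plain Java (illustrative only, for 1-D data):

```java
public class UnsortedSegmentSumDemo {
    // out[s] = sum of data[i] over all i with segmentIds[i] == s.
    // Segment ids need not be sorted; empty segments stay 0.
    public static double[] unsortedSegmentSum(double[] data, int[] segmentIds, int numSegments) {
        double[] out = new double[numSegments];
        for (int i = 0; i < data.length; i++) {
            out[segmentIds[i]] += data[i];
        }
        return out;
    }
}
```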
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis) |
SDVariable[] |
SDBaseOps.unstack(SDVariable value,
int axis,
int num) |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis) |
SDVariable[] |
SDBaseOps.unstack(String[] names,
SDVariable value,
int axis,
int num)
Unstack a variable of rank X into N rank X-1 variables by taking slices along the specified axis.
|
protected SDVariable |
SDOps.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName) |
protected abstract SDVariable |
SDBaseOps.updateVariableNameAndReference(SDVariable varToUpdate,
String newVarName) |
protected abstract SDVariable[] |
SDBaseOps.updateVariableNamesAndReferences(SDVariable[] variablesToUpdate,
String[] newVariableNames) |
SDVariable |
SDCNN.upsampling2d(SDVariable input,
boolean nchw,
int scaleH,
int scaleW)
|
SDVariable |
SDCNN.upsampling2d(SDVariable input,
int scale)
See
SDCNN.upsampling2d(String, SDVariable, boolean, int, int);
scale is used for both height and width dimensions. |
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
boolean nchw,
int scaleH,
int scaleW)
2D Convolution layer operation - Upsampling 2d
|
SDVariable |
SDCNN.upsampling2d(String name,
SDVariable input,
int scale)
See
SDCNN.upsampling2d(String, SDVariable, boolean, int, int);
scale is used for both height and width dimensions. |
protected static void |
SDValidation.validateBool(String opName,
SDVariable v)
Validate that the operation is being applied on a boolean type SDVariable
|
protected static void |
SDValidation.validateBool(String opName,
SDVariable v1,
SDVariable v2)
Validate that the operation is being applied on boolean SDVariables
|
protected static void |
SDValidation.validateBool(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a boolean type SDVariable
|
protected static void |
SDValidation.validateFloatingPoint(String opName,
SDVariable v)
Validate that the operation is being applied on a floating point type SDVariable
|
protected static void |
SDValidation.validateFloatingPoint(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a floating point type SDVariable
|
protected static void |
SDValidation.validateInteger(String opName,
SDVariable v)
Validate that the operation is being applied on an integer type SDVariable
|
protected static void |
SDValidation.validateInteger(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on an integer type SDVariable
|
protected static void |
SDValidation.validateNumerical(String opName,
SDVariable v)
Validate that the operation is being applied on a numerical SDVariable (not boolean or utf8).
|
protected static void |
SDValidation.validateNumerical(String opName,
SDVariable v1,
SDVariable v2)
Validate that the operation is being applied on numerical SDVariables (not boolean or utf8).
|
protected static void |
SDValidation.validateNumerical(String opName,
String inputName,
SDVariable v)
Validate that the operation is being applied on a numerical SDVariable (not boolean or utf8).
|
protected static void |
SDValidation.validateSameType(String opName,
boolean numericalOnly,
SDVariable... vars)
Validate that the operation is being applied on arrays with exactly the same data types (which may optionally be
restricted to numerical SDVariables only, i.e. not boolean or utf8)
|
SDVariable |
SDBaseOps.variance(SDVariable x,
boolean biasCorrected,
int... dimensions) |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
boolean keepDims,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
Note that if keepDims = true, the output variable has the same rank as the input variable, with the reduced dimensions having size 1. |
SDVariable |
SDBaseOps.variance(String name,
SDVariable x,
boolean biasCorrected,
int... dimensions)
Variance array reduction operation, optionally along specified dimensions
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights)
TODO
|
SDVariable |
SDLoss.weightedCrossEntropyWithLogits(String name,
SDVariable targets,
SDVariable inputs,
SDVariable weights)
TODO
|
SDVariable[] |
SDBaseOps.whileLoop(SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
|
SDVariable[] |
SDBaseOps.whileLoop(String[] outputNames,
String loopName,
SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
Constructs a While loop using the TensorFlow-style control flow operations (Switch, Merge, Enter, Exit, and NextIteration)
Repeatedly executes body on the loop variables and updates them with the results, until cond evaluates to false
Note that cond and body lambdas are only called once to construct the graph.
|
SDVariable[] |
SDBaseOps.whileLoop(String loopName,
SDVariable[] loopVars,
SameDiffSingleLambda cond,
SameDiffLambda body)
|
SDVariable |
SDBitwise.xor(SDVariable x,
SDVariable y)
|
SDVariable |
SDMath.xor(SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDBitwise.xor(String name,
SDVariable x,
SDVariable y)
Bitwise XOR operation (exclusive OR).
|
SDVariable |
SDMath.xor(String name,
SDVariable x,
SDVariable y)
Boolean XOR (exclusive OR) operation: elementwise (x != 0) XOR (y != 0)
If x and y arrays have equal shape, the output shape is the same as these inputs. Note: supports broadcasting if x and y have different shapes and are broadcastable. Returns an array with values 1 where condition is satisfied, or value 0 otherwise. |
SDVariable |
SDMath.zeroFraction(SDVariable input)
Full array zero fraction reduction operation: out = count(x == 0) / length(x)
|
SDVariable |
SDMath.zeroFraction(String name,
SDVariable input)
Full array zero fraction reduction operation: out = count(x == 0) / length(x)
|
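The zero-fraction formula is straightforward to sketch in plain Java (illustrative only, over a 1-D array):

```java
public class ZeroFractionDemo {
    // out = count(x == 0) / length(x)
    public static double zeroFraction(double[] x) {
        int zeros = 0;
        for (double v : x) {
            if (v == 0.0) zeros++;
        }
        return (double) zeros / x.length;
    }
}
```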
SDVariable |
SDBaseOps.zerosLike(SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
SDVariable |
SDBaseOps.zerosLike(String name,
SDVariable input)
Return a variable of all 0s, with the same shape as the input variable.
|
Modifier and Type | Method and Description |
---|---|
static int |
FlatBuffersMapper.asFlatNode(SameDiff sameDiff,
DifferentialFunction node,
com.google.flatbuffers.FlatBufferBuilder bufferBuilder,
List<SDVariable> variables,
Map<String,Integer> reverseMap,
Map<String,Integer> forwardMap,
Map<String,Integer> framesMap,
AtomicInteger idCounter,
Integer id) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SubGraph.inputs() |
List<SDVariable> |
SubGraph.outputs() |
List<SDVariable> |
SubGraphProcessor.processSubgraph(SameDiff sd,
SubGraph subGraph)
Replace the subgraph, and return the new outputs that should replace the old outputs.
Note that the order of the outputs you return matters! If the original outputs are [A,B,C] and you return output variables [X,Y,Z], then anywhere "A" was used as input will now use "X"; similarly Y replaces B, and Z replaces C. |
Modifier and Type | Method and Description |
---|---|
TestCase |
TestCase.expected(SDVariable var,
org.nd4j.linalg.function.Function<INDArray,String> validationFn) |
TestCase |
TestCase.expected(SDVariable var,
INDArray output)
Validate the output (forward pass) for a single variable using INDArray.equals(INDArray)
|
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
OpImportOverride.initFromTensorFlow(List<SDVariable> inputs,
List<SDVariable> controlDepInputs,
NODE_TYPE nodeDef,
SameDiff initWith,
Map<String,ATTR_TYPE> attributesForNode,
GRAPH_TYPE graph)
Initialize the operation and return its output variables
|
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
OpImportOverride.initFromTensorFlow(List<SDVariable> inputs,
List<SDVariable> controlDepInputs,
NODE_TYPE nodeDef,
SameDiff initWith,
Map<String,ATTR_TYPE> attributesForNode,
GRAPH_TYPE graph)
Initialize the operation and return its output variables
|
Modifier and Type | Method and Description |
---|---|
SDVariable |
Activation.asSameDiff(SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
SDVariable |
Activation.asSameDiff(String variableName,
SameDiff sd,
SDVariable input)
Get the Activation as a SameDiff variable
|
Modifier and Type | Field and Description |
---|---|
protected SDVariable[] |
DynamicCustomOp.outputVariables |
Modifier and Type | Method and Description |
---|---|
SDVariable[] |
DynamicCustomOp.outputVariables() |
SDVariable[] |
BaseOp.outputVariables(String baseName) |
SDVariable[] |
DynamicCustomOp.outputVariables(String baseName) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
DynamicCustomOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NoOp.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BaseBroadcastBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BaseBroadcastOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BaseIndexAccumulation(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
BaseIndexAccumulation(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
boolean keepDims,
int[] dimensions) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceBoolOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceFloatOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceLongOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions,
boolean keepDims) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseReduceOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions,
boolean keepDims) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable input,
int[] dimensions,
boolean keepDims) |
BaseReduceSameOp(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
BaseScalarBoolOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
BaseScalarOp(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformAnyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformBoolOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformFloatOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BaseTransformSameOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BaseTransformStrictOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable arg) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable[] args) |
DynamicCustomOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
DynamicCustomOp(String opName,
SameDiff sameDiff,
SDVariable[] args) |
DynamicCustomOp(String opName,
SameDiff sameDiff,
SDVariable[] args,
boolean inPlace)
Initialize this for
SameDiff execution.
Any extra int or float arguments for operations
must be added to the respective TArguments
or IArguments lists upon construction. |
NoOp(SameDiff sd,
SDVariable in) |
Constructor and Description |
---|
BiasAdd(SameDiff sameDiff,
SDVariable input,
SDVariable bias,
boolean nchw) |
BiasAddGrad(SameDiff sameDiff,
SDVariable input,
SDVariable bias,
SDVariable gradient,
boolean nchw) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAddOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastAMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastCopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGradientArgs(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMax(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMin(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastMulOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastRDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastRSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastTo(SameDiff sameDiff,
SDVariable input,
SDVariable shape) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
BroadcastGreaterThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastEqualTo.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastGreaterThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastLessThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastLessThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BroadcastNotEqual.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastEqualTo(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastGreaterThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastLessThan(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastLessThanOrEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
boolean inPlace) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v,
long[] shape,
boolean inPlace,
int[] dimension,
Object[] extraArgs) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
int[] dimension) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension) |
BroadcastNotEqual(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
int[] dimension,
Object[] extraArgs) |
Constructor and Description |
---|
Select(SameDiff sameDiff,
SDVariable[] args) |
Select(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Where(SameDiff sameDiff,
SDVariable[] args) |
Where(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
WhereNumpy(SameDiff sameDiff,
SDVariable[] args) |
WhereNumpy(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Modifier and Type | Method and Description |
---|---|
SDVariable[] |
Exit.outputVariables() |
SDVariable[] |
Switch.outputVariables() |
SDVariable[] |
LoopCond.outputVariables() |
SDVariable[] |
NextIteration.outputVariables() |
SDVariable[] |
Enter.outputVariables() |
SDVariable[] |
Merge.outputVariables() |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
StopGradient.doDiff(List<SDVariable> gradients) |
Constructor and Description |
---|
BaseCompatOp(SameDiff sameDiff,
SDVariable[] inputs) |
Enter(SameDiff sameDiff,
SDVariable[] inputs) |
Enter(SameDiff sameDiff,
String frameName,
SDVariable input) |
Enter(SameDiff sameDiff,
String frameName,
SDVariable input,
boolean isConstant) |
Exit(SameDiff sameDiff,
SDVariable x) |
Merge(SameDiff sd,
SDVariable[] inputs) |
Merge(SameDiff sd,
SDVariable a,
SDVariable b) |
NextIteration(SameDiff sameDiff,
SDVariable x) |
StopGradient(SameDiff sd,
SDVariable in) |
Switch(SameDiff sameDiff,
SDVariable input,
SDVariable predicate) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
FreeGridOp.doDiff(List<SDVariable> f1) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
CropAndResize.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ResizeNearestNeighbor.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ResizeBilinear.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NonMaxSuppressionV3.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
NonMaxSuppression.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
ExtractImagePatches.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
CropAndResize(SameDiff sameDiff,
SDVariable image,
SDVariable cropBoxes,
SDVariable boxIndices,
SDVariable cropOutSize,
CropAndResize.Method method,
double extrapolationValue) |
ExtractImagePatches(SameDiff samediff,
SDVariable input,
int[] kSizes,
int[] strides,
int[] rates,
boolean sameMode) |
NonMaxSuppression(SameDiff sameDiff,
SDVariable boxes,
SDVariable scores,
SDVariable maxOutSize,
SDVariable iouThreshold,
SDVariable scoreThreshold) |
NonMaxSuppressionV3(SameDiff sameDiff,
SDVariable boxes,
SDVariable scores,
SDVariable maxOutSize,
SDVariable iouThreshold,
SDVariable scoreThreshold) |
ResizeBicubic(SameDiff sameDiff,
SDVariable image,
SDVariable size,
boolean alignCorners,
boolean alignPixelCenters) |
ResizeBilinear(SameDiff sd,
SDVariable input,
int height,
int width,
boolean alignCorners,
boolean halfPixelCenters) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
LastIndex.doDiff(List<SDVariable> f1) |
List<SDVariable> |
FirstIndex.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IAMax.doDiff(List<SDVariable> grad) |
List<SDVariable> |
IMin.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IMax.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IAMin.doDiff(List<SDVariable> grad) |
Constructor and Description |
---|
FirstIndex(SameDiff sameDiff,
SDVariable i_v,
Condition condition,
boolean keepDims,
int... dimensions) |
IAMax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
IAMin(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
IMax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
IMin(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
LastIndex(SameDiff sameDiff,
SDVariable i_v,
Condition condition,
boolean keepDims,
int... dimensions) |
Modifier and Type | Method and Description |
---|---|
SDVariable[] |
ExternalErrorsFunction.outputVariables(String baseName) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
ExternalErrorsFunction.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
ExternalErrorsFunction(SameDiff sd,
List<SDVariable> inputs,
Map<String,INDArray> gradients) |
Constructor and Description |
---|
AvgPooling2D(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
AvgPooling3D(SameDiff sameDiff,
SDVariable input,
Pooling3DConfig config) |
BatchNorm(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputArrays,
boolean inPlace,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int[] axis) |
BatchNormDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputArrays,
boolean inPlace,
boolean applyGamma,
boolean applyBeta,
double epsilon,
int[] axis) |
Col2Im(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputs,
Conv2DConfig conv2DConfig) |
Col2Im(SameDiff sd,
SDVariable input,
Conv2DConfig config) |
Conv1D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv1DConfig config) |
Conv1DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
Conv1DConfig config) |
Conv1DDerivative(SameDiff sd,
SDVariable input,
SDVariable weights,
SDVariable bias,
SDVariable gradOut,
Conv1DConfig config) |
Conv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
Conv2DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
Conv3D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv3DConfig config) |
Conv3DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv3DConfig conv3DConfig) |
DeConv2D(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv2DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv2DTF(SameDiff sameDiff,
SDVariable[] inputs,
DeConv2DConfig config) |
DeConv3D(SameDiff sameDiff,
SDVariable input,
SDVariable weights,
SDVariable bias,
DeConv3DConfig config) |
DeConv3DDerivative(SameDiff sameDiff,
SDVariable input,
SDVariable weights,
SDVariable bias,
SDVariable grad,
DeConv3DConfig config) |
DeConv3DTF(SameDiff sameDiff,
SDVariable shape,
SDVariable weights,
SDVariable input,
DeConv3DConfig config) |
DepthToSpace(SameDiff sameDiff,
SDVariable[] args,
int blockSize,
String dataFormat) |
DepthwiseConv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig config) |
Im2col(SameDiff sameDiff,
SDVariable[] inputFunctions,
INDArray[] inputArrays,
INDArray[] outputs,
Conv2DConfig conv2DConfig) |
Im2col(SameDiff sd,
SDVariable input,
Conv2DConfig config) |
Im2colBp(SameDiff sd,
SDVariable input,
Conv2DConfig config) |
Im2colBp(SameDiff sameDiff,
SDVariable i2cInput,
SDVariable gradAtOutput,
Conv2DConfig conv2DConfig) |
LocalResponseNormalization(SameDiff sameDiff,
SDVariable[] inputFunctions,
boolean inPlace,
LocalResponseNormalizationConfig config) |
LocalResponseNormalizationDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
boolean inPlace,
LocalResponseNormalizationConfig config) |
MaxPooling2D(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
MaxPooling3D(SameDiff sameDiff,
SDVariable input,
Pooling3DConfig config) |
MaxPoolWithArgmax(SameDiff sameDiff,
SDVariable input,
Pooling2DConfig config) |
Pooling2D(SameDiff sameDiff,
SDVariable[] inputs,
Pooling2DConfig config) |
Pooling2DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
Pooling2DConfig config) |
Pooling3D(SameDiff sameDiff,
SDVariable[] inputs,
INDArray[] inputArrays,
INDArray[] outputs,
boolean inPlace,
Pooling3DConfig pooling3DConfig,
Pooling3D.Pooling3DType type) |
Pooling3DDerivative(SameDiff sameDiff,
SDVariable[] inputs,
INDArray[] inputArrays,
INDArray[] outputs,
boolean inPlace,
Pooling3DConfig pooling3DConfig,
Pooling3D.Pooling3DType type) |
SConv2D(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig conv2DConfig) |
SConv2DDerivative(SameDiff sameDiff,
SDVariable[] inputFunctions,
Conv2DConfig conv2DConfig) |
SpaceToDepth(SameDiff sameDiff,
SDVariable[] args,
int blockSize,
String dataFormat) |
Upsampling2d(SameDiff sameDiff,
SDVariable input,
boolean nchw,
int scaleH,
int scaleW) |
Upsampling2dDerivative(SameDiff sameDiff,
SDVariable input,
SDVariable gradient,
boolean nchw,
int scaleH,
int scaleW) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
LSTMBlockCell.doDiff(List<SDVariable> grads) |
List<SDVariable> |
LSTMLayer.doDiff(List<SDVariable> grads) |
List<SDVariable> |
GRUCell.doDiff(List<SDVariable> grads) |
Constructor and Description |
---|
GRUCell(SameDiff sameDiff,
SDVariable x,
SDVariable hLast,
GRUWeights weights) |
LSTMBlockCell(SameDiff sameDiff,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration configuration) |
LSTMLayer(SameDiff sameDiff,
SDVariable maxTSLength,
SDVariable x,
SDVariable cLast,
SDVariable yLast,
LSTMWeights weights,
LSTMConfiguration configuration) |
SRU(SameDiff sameDiff,
SDVariable x,
SDVariable initialC,
SDVariable mask,
SRUWeights weights) |
SRUCell(SameDiff sameDiff,
SDVariable x,
SDVariable cLast,
SRUWeights weights) |
Modifier and Type | Method and Description |
---|---|
SDVariable[] |
LSTMCellConfiguration.args() |
SDVariable[] |
GRUCellConfiguration.args() |
Modifier and Type | Method and Description |
---|---|
SDVariable |
LSTMLayerOutputs.getLastOutput()
Get y, the output of the cell, for the last time step.
|
SDVariable |
SRULayerOutputs.getLastOutput()
Get y, the output of the cell, for the last time step.
|
SDVariable |
LSTMLayerOutputs.getLastState()
Get c, the state of the cell, for the last time step.
|
SDVariable |
SRULayerOutputs.getLastState()
Get c, the state of the cell, for the last time step.
|
SDVariable |
GRUCellOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
LSTMLayerOutputs.getOutput()
Get y, the output of the cell for all time steps.
|
SDVariable |
SRULayerOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
LSTMCellOutputs.getOutput()
Get y, the output of the cell.
|
SDVariable |
SRUCellOutputs.getOutput()
Get h, the output of the cell.
|
SDVariable |
LSTMLayerOutputs.getState()
Get c, the cell's state for all time steps.
|
SDVariable |
SRULayerOutputs.getState()
Get c, the state of the cell.
|
SDVariable |
LSTMCellOutputs.getState()
Get c, the cell's state.
|
SDVariable |
SRUCellOutputs.getState()
Get c, the state of the cell.
|
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
GRUCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
LSTMLayerOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
SRULayerOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
LSTMCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
List<SDVariable> |
SRUCellOutputs.getAllOutputs()
Get all outputs returned by the cell.
|
Constructor and Description |
---|
GRUCellOutputs(SDVariable[] outputs) |
LSTMCellOutputs(SDVariable[] outputs) |
LSTMLayerOutputs(SDVariable[] outputs,
RnnDataFormat dataFormat) |
SRUCellOutputs(SDVariable[] outputs) |
SRULayerOutputs(SDVariable[] outputs) |
Modifier and Type | Method and Description |
---|---|
abstract SDVariable[] |
RNNWeights.args() |
SDVariable[] |
GRUWeights.args() |
SDVariable[] |
LSTMWeights.args() |
SDVariable[] |
SRUWeights.args() |
SDVariable[] |
RNNWeights.argsWithInputs(SDVariable... inputs) |
protected static SDVariable[] |
RNNWeights.filterNonNull(SDVariable... args) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
LogLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SigmoidCrossEntropyLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
HingeLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
LogPoissonLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
AbsoluteDifferenceLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanPairwiseSquaredErrorLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanSquaredErrorLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CosineDistanceLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyWithLogitsLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SparseSoftmaxCrossEntropyLossWithLogits.doDiff(List<SDVariable> grad) |
List<SDVariable> |
HuberLoss.doDiff(List<SDVariable> grad) |
List<SDVariable> |
L2Loss.doDiff(List<SDVariable> grad) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SparseSoftmaxCrossEntropyLossWithLogitsBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
BaseLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
MeanPairwiseSquaredErrorLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
AbsoluteDifferenceLossBp.doDiff(List<SDVariable> grad) |
List<SDVariable> |
SoftmaxCrossEntropyWithLogitsLossBp.doDiff(List<SDVariable> grad) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
PredicateMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ReduceMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
InvertedPredicateMetaOp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
PostulateMetaOp.doDiff(List<SDVariable> f1) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
TensorMmul.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
MmulBp.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Mmul.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SufficientStatistics.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ZeroFraction.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Moments.doDiff(List<SDVariable> grad) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
IsNaN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Any.doDiff(List<SDVariable> f1) |
List<SDVariable> |
All.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IsInf.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
All(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
Any(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
int[] dims) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
int[] dims,
boolean keepDims) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
int[] dims) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
int[] dims,
boolean keepDims) |
Constructor and Description |
---|
BaseReductionBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
BaseReductionBp(SameDiff sameDiff,
SDVariable origInput1,
SDVariable origInput2,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
CumProdBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean exclusive,
boolean reverse,
int... axis) |
CumSumBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean exclusive,
boolean reverse,
int... axis) |
DotBp(SameDiff sameDiff,
SDVariable origInput1,
SDVariable origInput2,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MaxBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MeanBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
MinBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
Norm1Bp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
Norm2Bp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
NormMaxBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
ProdBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
SquaredNormBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
StandardDeviationBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
SumBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean keepDims,
int... dimensions) |
VarianceBp(SameDiff sameDiff,
SDVariable origInput,
SDVariable gradAtOutput,
boolean biasCorrected,
boolean keepDims,
int... dimensions) |
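The `*Bp` constructors above are the backprop counterparts of the forward reductions; user code rarely instantiates them directly, since SameDiff adds them to the graph automatically when a reduction is differentiated. A minimal sketch, assuming the standard ND4J/SameDiff API (`SameDiff.create`, `SDVariable.sum`, `setLossVariables`, `calculateGradients`):

```java
import java.util.Map;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ReductionGradExample {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable in = sd.var("in", DataType.FLOAT, 2, 3);
        // Forward op: Sum over dimension 1 with keepDims = true.
        // Differentiating it adds a SumBp(origInput, gradAtOutput, keepDims, dimensions)
        // node to the graph; the same pattern applies to MeanBp, MaxBp, Norm2Bp, etc.
        SDVariable rowSum = in.sum(true, 1);
        SDVariable loss = rowSum.sum();          // scalar loss
        sd.setLossVariables(loss.name());

        in.setArray(Nd4j.ones(DataType.FLOAT, 2, 3));
        Map<String, INDArray> grads = sd.calculateGradients(null, "in");
        // For a plain sum, d(loss)/d(in) is 1 for every element
        System.out.println(grads.get("in"));
    }
}
```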
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
LogSumExp.doDiff(List<SDVariable> f1) |
List<SDVariable> |
BatchMmul.doDiff(List<SDVariable> grads) |
Constructor and Description |
---|
BatchMmul(SameDiff sameDiff,
SDVariable[] matrices,
boolean transposeA,
boolean transposeB) |
LogSumExp(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
ShannonEntropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Entropy.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Bias.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NormMax.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Norm1.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Norm2.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Mean.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
SquaredNorm.doDiff(List<SDVariable> grad) |
List<SDVariable> |
AMean.doDiff(List<SDVariable> f1) |
List<SDVariable> |
LogEntropy.doDiff(List<SDVariable> f1) |
static List<SDVariable> |
Entropy.grad(DifferentialFunctionFactory f,
SDVariable arg,
SDVariable grad,
int[] dimensions) |
Constructor and Description |
---|
AMean(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMean(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Bias(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions,
double mean) |
Bias(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions,
double mean) |
Entropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
LogEntropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
Mean(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Norm1(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Norm2(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
NormMax(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
NormMax(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
ShannonEntropy(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ShannonEntropy(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
SquaredNorm(SameDiff sameDiff,
SDVariable input,
boolean keepDims,
int... dimensions) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
CountZero.doDiff(List<SDVariable> f1) |
List<SDVariable> |
MatchCondition.doDiff(List<SDVariable> f1) |
List<SDVariable> |
CountNonZero.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
CountNonZero(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
CountZero(SameDiff sameDiff,
SDVariable input,
int... dimensions) |
MatchCondition(SameDiff sameDiff,
SDVariable in,
Condition condition,
boolean keepDims,
int... dimensions) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
AMin.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Min.doDiff(List<SDVariable> grad) |
List<SDVariable> |
AMax.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Max.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ASum.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Sum.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Prod.doDiff(List<SDVariable> grad) |
Constructor and Description |
---|
AMax(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMax(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
AMin(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
AMin(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
ASum(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ASum(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Max(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Max(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Min(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Prod(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Prod(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Sum(SameDiff sameDiff,
SDVariable i_v,
boolean keepDims,
int[] dimensions) |
Sum(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
JaccardDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
EuclideanDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Dot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
CosineDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
EqualsWithEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
HammingDistance.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ManhattanDistance.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
CosineSimilarity.doDiff(List<SDVariable> i_v1) |
static List<SDVariable> |
CosineSimilarity.doDiff(SameDiff sameDiff,
DifferentialFunctionFactory f,
SDVariable x,
SDVariable y,
SDVariable gradOut,
boolean keepDims,
int... dimensions) |
Constructor and Description |
---|
BaseReduce3Op(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
BaseReduce3Op(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
CosineDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
CosineSimilarity(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
CosineSimilarity(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
Dot(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
EqualsWithEps(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions,
double eps) |
EqualsWithEps(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions,
double eps) |
EuclideanDistance(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
EuclideanDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int[] dimensions) |
HammingDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
JaccardDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
ManhattanDistance(SameDiff sameDiff,
SDVariable i_v,
int[] dimensions) |
ManhattanDistance(SameDiff sameDiff,
SDVariable i_v,
SDVariable i_v2,
int... dimensions) |
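The Reduce3 constructors above back the pairwise-distance helpers on `SDMath`, which take both inputs plus the dimensions to reduce over. A minimal sketch, assuming `sd.math().cosineSimilarity` and `sd.math().euclideanDistance` delegate to the CosineSimilarity and EuclideanDistance ops listed here:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class Reduce3Example {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Two orthogonal 2-d vectors
        SDVariable a = sd.var("a", Nd4j.createFromArray(1.0f, 0.0f));
        SDVariable b = sd.var("b", Nd4j.createFromArray(0.0f, 1.0f));
        // Reduce3 ops consume two inputs and reduce along the given dimensions
        SDVariable cos  = sd.math().cosineSimilarity(a, b, 0);
        SDVariable dist = sd.math().euclideanDistance(a, b, 0);
        System.out.println(cos.eval());   // 0 for orthogonal vectors
        System.out.println(dist.eval());  // sqrt(2)
    }
}
```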
Constructor and Description |
---|
LeakyReLU(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double alpha) |
LeakyReLU(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double alpha) |
LogX(SameDiff sameDiff,
SDVariable i_v,
double base) |
Pow(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double pow) |
Pow(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double pow) |
PowDerivative(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double pow) |
PRelu(SameDiff sameDiff,
SDVariable x,
SDVariable alpha,
int... sharedAxes) |
RectifiedLinear(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
RectifiedLinearDerivative(SameDiff sd,
SDVariable input,
SDVariable gradient) |
Relu6(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
ReplaceNans(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double set) |
ReplaceNans(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
double set) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace,
Object[] extraArgs) |
ScalarAdd(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
Object[] extraArgs) |
ScalarDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarFMod(SameDiff sd,
SDVariable in,
Number number) |
ScalarMax(SameDiff sd,
SDVariable in,
Number number) |
ScalarMin(SameDiff sd,
SDVariable in,
Number number) |
ScalarMultiplication(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarMultiplication(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarRemainder(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarRemainder(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarReverseDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarReverseDivision(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarReverseSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarReverseSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarSet(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarSet(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
ScalarSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar) |
ScalarSubtraction(SameDiff sameDiff,
SDVariable i_v,
Number scalar,
boolean inPlace) |
Step(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double cutoff) |
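Scalar ops such as `ScalarAdd` and activations such as `LeakyReLU` are likewise wrapped by convenience methods. A sketch assuming the `SDVariable.add(double)` and `sd.nn().leakyRelu(SDVariable, double)` helpers:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ScalarOpSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.createFromArray(-1.0, 2.0));
        // ScalarAdd: x + 10, via the SDVariable convenience method
        INDArray plusTen = x.add(10.0).eval();
        // LeakyReLU with alpha = 0.1: negatives scaled by alpha
        INDArray leaky = sd.nn().leakyRelu(x, 0.1).eval();
        System.out.println(plusTen);
        System.out.println(leaky);
    }
}
```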
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
ScalarLessThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarSetValue.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarEps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarLessThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThanOrEqual.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNotEquals.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarOr.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarAnd.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarGreaterThan.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ScalarXor.doDiff(List<SDVariable> f1) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
ScatterNd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdAdd.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterDiv.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMin.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterUpdate.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMul.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterMax.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdSub.doDiff(List<SDVariable> gradOut) |
List<SDVariable> |
ScatterNdUpdate.doDiff(List<SDVariable> gradOut) |
Constructor and Description |
---|
BroadcastDynamicShape(SameDiff sameDiff,
SDVariable in,
SDVariable shape) |
Concat(SameDiff sameDiff,
int concatDimension,
SDVariable... inputs) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
DataType dataType) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
Integer numClasses) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
Integer numClasses,
SDVariable weights) |
ConfusionMatrix(SameDiff sameDiff,
SDVariable labels,
SDVariable pred,
SDVariable weights) |
Create(String name,
SameDiff sameDiff,
SDVariable input,
boolean initialize) |
Create(String name,
SameDiff sameDiff,
SDVariable input,
char order,
boolean initialize,
DataType dataType) |
Cross(SameDiff sameDiff,
SDVariable[] args) |
Diag(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
DiagPart(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ExpandDims(SameDiff sameDiff,
SDVariable[] args,
int axis) |
Eye(SameDiff sameDiff,
SDVariable numRows) |
Eye(SameDiff sameDiff,
SDVariable numRows,
SDVariable numCols) |
Eye(SameDiff sameDiff,
SDVariable numRows,
SDVariable numCols,
SDVariable batch_shape) |
Gather(SameDiff sameDiff,
SDVariable input,
int[] indices,
int axis,
boolean inPlace) |
Gather(SameDiff sameDiff,
SDVariable input,
SDVariable indices,
int axis,
boolean inPlace) |
GatherNd(SameDiff sameDiff,
SDVariable input,
SDVariable indices,
boolean inPlace) |
Linspace(SameDiff sameDiff,
SDVariable from,
SDVariable to,
SDVariable length,
DataType dataType) |
MergeAvg(SameDiff sameDiff,
SDVariable... inputs) |
MergeMax(SameDiff sameDiff,
SDVariable... inputs) |
MergeSum(SameDiff sameDiff,
SDVariable... inputs) |
MeshGrid(SameDiff sd,
boolean cartesian,
SDVariable... inputs) |
OneHot(SameDiff sameDiff,
SDVariable indices,
int depth) |
OneHot(SameDiff sameDiff,
SDVariable indices,
int depth,
int axis,
double on,
double off,
DataType dataType) |
OnesLike(String name,
SameDiff sameDiff,
SDVariable input) |
OnesLike(String name,
SameDiff sameDiff,
SDVariable input,
DataType dataType) |
ParallelStack(SameDiff sameDiff,
SDVariable[] values) |
Permute(SameDiff sameDiff,
SDVariable i_v,
int... permuteDims) |
Permute(SameDiff sd,
SDVariable input,
SDVariable permuteDims) |
Rank(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
ReductionShape(SameDiff sameDiff,
SDVariable shape,
SDVariable axis,
boolean keepDims) |
Repeat(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace,
int axis) |
Repeat(SameDiff sameDiff,
SDVariable[] args,
int axis) |
Reshape(SameDiff sameDiff,
SDVariable i_v,
long[] shape) |
Reshape(SameDiff sameDiff,
SDVariable i_v,
SDVariable shape) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
DataType dataType) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
int maxLen,
DataType dataType) |
SequenceMask(SameDiff sameDiff,
SDVariable input,
SDVariable maxLen,
DataType dataType) |
Shape(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
ShapeN(SameDiff sameDiff,
SDVariable[] inputs,
boolean inPlace) |
Size(SameDiff sameDiff,
SDVariable input) |
SizeAt(SameDiff sameDiff,
SDVariable input,
int dimension) |
Slice(SameDiff sameDiff,
SDVariable input,
int[] begin,
int[] size) |
Slice(SameDiff sameDiff,
SDVariable input,
SDVariable begin,
SDVariable end) |
Squeeze(SameDiff sameDiff,
SDVariable arg,
int[] squeezeDims) |
Stack(SameDiff sameDiff,
SDVariable[] values,
int axis) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
int[] begin,
int[] end,
int[] strides) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
int[] begin,
int[] end,
int[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
long[] begin,
long[] end,
long[] strides) |
StridedSlice(SameDiff sameDiff,
SDVariable in,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
Tile(SameDiff sameDiff,
SDVariable i_v,
int[] axis) |
Tile(SameDiff sameDiff,
SDVariable i_v,
SDVariable axis) |
Transpose(SameDiff sameDiff,
SDVariable i_v) |
Transpose(SameDiff sameDiff,
SDVariable in,
int[] permuteDims) |
Transpose(SameDiff sameDiff,
SDVariable in,
SDVariable permuteDims) |
Unstack(SameDiff sameDiff,
SDVariable value,
int axis) |
Unstack(SameDiff sameDiff,
SDVariable value,
int axis,
int num) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input) |
ZerosLike(String name,
SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
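The shape ops in the table above (Transpose, Reshape, Concat, and friends) have top-level `SameDiff` and `SDVariable` wrappers. A sketch assuming `sd.transpose(...)`, `SDVariable.reshape(...)`, and `sd.concat(int, SDVariable...)`:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;
import java.util.Arrays;

public class ShapeOpSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.linspace(1, 6, 6).reshape(2, 3));
        long[] t = sd.transpose(x).eval().shape();    // Transpose: [3, 2]
        long[] r = x.reshape(3, 2).eval().shape();    // Reshape:   [3, 2]
        long[] c = sd.concat(0, x, x).eval().shape(); // Concat:    [4, 3]
        System.out.println(Arrays.toString(t) + " " + Arrays.toString(r) + " " + Arrays.toString(c));
    }
}
```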
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
StridedSliceBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
TileBp.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
SliceBp.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
ConcatBp(SameDiff sameDiff,
int concatDimension,
SDVariable... inputsAndGrad) |
ConcatBp(SameDiff sameDiff,
SDVariable... inputsGradAxis) |
SliceBp(SameDiff sameDiff,
SDVariable input,
SDVariable gradient,
int[] begin,
int[] size) |
SliceBp(SameDiff sameDiff,
SDVariable input,
SDVariable gradient,
SDVariable begin,
SDVariable size) |
StridedSliceBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad,
long[] begin,
long[] end,
long[] strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
StridedSliceBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad,
SDVariable begin,
SDVariable end,
SDVariable strides,
int beginMask,
int endMask,
int ellipsisMask,
int newAxisMask,
int shrinkAxisMask) |
TileBp(SameDiff sameDiff,
SDVariable in,
SDVariable grad,
int[] repeat) |
TileBp(SameDiff sameDiff,
SDVariable in,
SDVariable repeat,
SDVariable grad) |
Modifier and Type | Method and Description |
---|---|
SDVariable |
TensorArray.concat(SDVariable flow) |
SDVariable |
TensorArray.gather(SDVariable flow,
int... indices) |
SDVariable |
TensorArray.gather(SDVariable flow,
SDVariable indices) |
SDVariable |
TensorArray.read(int index) |
SDVariable |
TensorArray.read(SDVariable index) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
int... indices) |
SDVariable |
TensorArray.scatter(SDVariable flow,
SDVariable value,
SDVariable indices) |
SDVariable |
TensorArray.stack(SDVariable flow) |
SDVariable |
TensorArray.unstack(SDVariable flow,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
int index,
SDVariable value) |
SDVariable |
TensorArray.write(SDVariable flow,
SDVariable index,
SDVariable value) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
BaseTensorOp.doDiff(List<SDVariable> f1) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
StandardDeviation.doDiff(List<SDVariable> grad) |
List<SDVariable> |
Variance.doDiff(List<SDVariable> grad) |
Constructor and Description |
---|
StandardDeviation(SameDiff sameDiff,
SDVariable i_v,
boolean biasCorrected,
boolean keepDims,
int[] dimensions) |
Variance(SameDiff sameDiff,
SDVariable i_v,
boolean biasCorrected,
boolean keepDims,
int[] dimensions) |
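`StandardDeviation` and `Variance` are exposed through reduction helpers on `SDVariable`. A sketch assuming the `std(boolean biasCorrected, int... dimensions)` convenience method (with `biasCorrected = false` giving the population standard deviation):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class StdDevSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // mean is 5; population variance is 4, so the std is 2
        SDVariable x = sd.var("x",
                Nd4j.createFromArray(2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0));
        double std = x.std(false).eval().getDouble(0);
        System.out.println(std);
    }
}
```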
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
Cholesky.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Pad.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
BinCount.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
HistogramFixedWidth.doDiff(List<SDVariable> f1) |
List<SDVariable> |
Angle.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
CheckNumerics.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IdentityN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Assert.doDiff(List<SDVariable> f1) |
List<SDVariable> |
NthElement.doDiff(List<SDVariable> f1) |
List<SDVariable> |
ReluLayer.doDiff(List<SDVariable> gradient) |
List<SDVariable> |
MaxOut.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
Angle(SameDiff sameDiff,
SDVariable input) |
BaseDynamicTransformOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
BinCount(SameDiff sd,
SDVariable in,
SDVariable weights,
Integer minLength,
Integer maxLength,
DataType outputType) |
CheckNumerics(SameDiff sd,
SDVariable input,
SDVariable message) |
HistogramFixedWidth(SameDiff sameDiff,
SDVariable values,
SDVariable valuesRange,
SDVariable numBins) |
IdentityN(SameDiff sameDiff,
SDVariable input) |
IdentityN(SameDiff sameDiff,
SDVariable[] inputs) |
MaxOut(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
Number max) |
MaxOut(SameDiff sameDiff,
SDVariable i_v,
Object[] extraArgs,
Number max) |
Pad(SameDiff sd,
SDVariable in,
SDVariable padding,
Pad.Mode mode,
double padValue) |
ReluLayer(SameDiff sameDiff,
SDVariable input,
SDVariable weights,
SDVariable bias) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
Assign.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsMax.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
Assign(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsMax(SameDiff sameDiff,
SDVariable i_v) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
IsNaN.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
BooleanNot.doDiff(List<SDVariable> f1) |
List<SDVariable> |
MatchConditionTransform.doDiff(List<SDVariable> f1) |
List<SDVariable> |
IsInf.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
IsFinite.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
BooleanNot(SameDiff sameDiff,
SDVariable i_v) |
IsFinite(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsInf(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
IsNaN(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
MatchConditionTransform(SameDiff sameDiff,
SDVariable in,
Condition condition) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
ClipByNorm.doDiff(List<SDVariable> grad) |
List<SDVariable> |
ClipByValue.doDiff(List<SDVariable> grad) |
Constructor and Description |
---|
ClipByNorm(SameDiff sameDiff,
SDVariable x,
double clipValue,
int... dimensions) |
ClipByNormBp(SameDiff sameDiff,
SDVariable x,
SDVariable eps,
double clipValue,
int... dimensions) |
ClipByValue(SameDiff sameDiff,
SDVariable x,
double clipValueMin,
double clipValueMax) |
ClipByValue(SameDiff sameDiff,
SDVariable x,
double clipValueMin,
double clipValueMax,
boolean inPlace) |
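`ClipByValue` and `ClipByNorm` are reachable through the math namespace. A sketch assuming `sd.math().clipByValue(SDVariable, double, double)`:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ClipSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.createFromArray(-2.0, 0.5, 3.0));
        // values outside [-1, 1] are clamped to the nearest bound
        INDArray clipped = sd.math().clipByValue(x, -1.0, 1.0).eval();
        System.out.println(clipped);
    }
}
```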
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
Eps.doDiff(List<SDVariable> f1) |
List<SDVariable> |
CompareAndReplace.doDiff(List<SDVariable> grad) |
List<SDVariable> |
CompareAndSet.doDiff(List<SDVariable> gradient) |
Constructor and Description |
---|
CompareAndReplace(SameDiff sameDiff,
SDVariable to,
SDVariable from,
Condition condition) |
CompareAndSet(SameDiff sameDiff,
SDVariable to,
Number set,
Condition condition) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Eps(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
Object[] extraArgs) |
Constructor and Description |
---|
Assign(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
ATan2(SameDiff sameDiff,
SDVariable y,
SDVariable x) |
BatchToSpace(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] crops,
boolean inPlace) |
BatchToSpaceND(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] crops,
boolean inPlace) |
BitsHammingDistance(SameDiff sd,
SDVariable x,
SDVariable y) |
BitwiseAnd(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
BitwiseOr(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
BitwiseXor(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
Choose(SameDiff sameDiff,
SDVariable[] args,
Condition condition) |
Choose(String opName,
SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
CumProd(SameDiff sameDiff,
SDVariable x,
boolean exclusive,
boolean reverse,
int... axis) |
CumProd(SameDiff sameDiff,
SDVariable x,
int... axis) |
CumSum(SameDiff sameDiff,
SDVariable x,
boolean exclusive,
boolean reverse,
int... axis) |
CumSum(SameDiff sameDiff,
SDVariable x,
int... axis) |
CyclicRShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable shift) |
CyclicShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable shift) |
Dilation2D(SameDiff sameDiff,
SDVariable[] inputAndWeights,
int[] strides,
int[] rates,
boolean isSameMode,
boolean inPlace) |
DotProductAttention(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable mask,
boolean scaled,
boolean withWeights) |
DotProductAttentionBp(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable eps,
SDVariable mask,
boolean scaled) |
DynamicPartition(SameDiff sameDiff,
SDVariable input,
SDVariable partitions,
int numPartitions) |
DynamicStitch(SameDiff sameDiff,
SDVariable[] indices,
SDVariable[] inputs) |
EqualTo(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
FakeQuantWithMinMaxArgs(SameDiff sd,
SDVariable input,
float min,
float max,
boolean narrowRange,
int numBits) |
FakeQuantWithMinMaxVars(SameDiff sd,
SDVariable input,
SDVariable min,
SDVariable max,
boolean narrowRange,
int numBits) |
Fill(SameDiff sameDiff,
SDVariable shape,
DataType outputDataType,
double value) |
GreaterThan(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
GreaterThanOrEqual(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
InTopK(SameDiff sd,
SDVariable predictions,
SDVariable targets,
int k) |
InvertPermutation(SameDiff sameDiff,
SDVariable input,
boolean inPlace) |
IsNonDecreasing(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
IsNumericTensor(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
IsStrictlyIncreasing(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
LayerNorm(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
boolean channelsFirst,
int... dimensions) |
LayerNorm(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
SDVariable bias,
boolean channelsFirst,
int... dimensions) |
LayerNormBp(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
LayerNormBp(SameDiff sameDiff,
SDVariable input,
SDVariable gain,
SDVariable bias,
SDVariable gradient,
boolean channelsFirst,
int... dimensions) |
LessThan(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
LessThanOrEqual(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ListDiff(SameDiff sd,
SDVariable x,
SDVariable y) |
LogicalAnd(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalNot(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalOr(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogicalXor(SameDiff sd,
SDVariable in1,
SDVariable in2) |
LogMatrixDeterminant(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
LogSoftMax(SameDiff sameDiff,
SDVariable i_v) |
LogSoftMax(SameDiff sameDiff,
SDVariable i_v,
int dimension) |
MatrixDeterminant(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixDiag(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixDiagPart(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixInverse(SameDiff sameDiff,
SDVariable in,
boolean inPlace) |
MatrixSetDiag(SameDiff sameDiff,
SDVariable in,
SDVariable diag,
boolean inPlace) |
Max(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Max(SameDiff sameDiff,
SDVariable first,
SDVariable second) |
Min(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Min(SameDiff sameDiff,
SDVariable first,
SDVariable second) |
MultiHeadDotProductAttention(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable mask,
boolean scaled,
boolean withWeights) |
MultiHeadDotProductAttentionBp(SameDiff sameDiff,
SDVariable queries,
SDVariable keys,
SDVariable values,
SDVariable Wq,
SDVariable Wk,
SDVariable Wv,
SDVariable Wo,
SDVariable eps,
SDVariable mask,
boolean scaled) |
NotEqualTo(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Pow(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
Reverse(SameDiff sameDiff,
SDVariable i_v,
int... dimensions) |
ReverseSequence(SameDiff sameDiff,
SDVariable i_v,
SDVariable seqLengths) |
ReverseSequence(SameDiff sameDiff,
SDVariable i_v,
SDVariable seqLengths,
int seqDim,
int batchDim) |
RShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
ShiftBits(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
SoftMax(SameDiff sameDiff,
SDVariable[] args) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
int dimension) |
SoftMax(SameDiff sameDiff,
SDVariable[] args,
int dimension,
boolean inPlace) |
SpaceToBatch(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] padding,
boolean inPlace) |
SpaceToBatchND(SameDiff sameDiff,
SDVariable[] args,
int[] blocks,
int[][] padding,
boolean inPlace) |
Standardize(SameDiff sameDiff,
SDVariable i_v,
int... dimensions) |
StandardizeBp(SameDiff sameDiff,
SDVariable i_v,
SDVariable grad,
int... dimensions) |
Svd(SameDiff sd,
SDVariable input,
boolean fullUV,
boolean computeUv) |
Svd(SameDiff sd,
SDVariable input,
boolean fullUV,
boolean computeUv,
int switchNum) |
ThresholdRelu(SameDiff sd,
SDVariable input,
boolean inPlace,
double cutoff) |
ThresholdRelu(SameDiff sd,
SDVariable input,
double cutoff) |
TopK(SameDiff sd,
SDVariable in,
int k,
boolean sorted) |
Trace(SameDiff sd,
SDVariable in) |
Unique(SameDiff sd,
SDVariable in) |
UniqueWithCounts(SameDiff sd,
SDVariable in) |
XwPlusB(SameDiff sameDiff,
SDVariable input,
SDVariable weights,
SDVariable bias) |
Zeta(SameDiff sameDiff,
SDVariable x,
SDVariable q) |
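Most of the custom transforms above (SoftMax, pairwise Max/Min, Pow, and so on) are wrapped by the `sd.nn()` and `sd.math()` namespaces. A sketch assuming `sd.nn().softmax(SDVariable)`:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SoftmaxSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // equal logits give a uniform distribution regardless of dimension
        SDVariable logits = sd.var("logits", Nd4j.createFromArray(0.0, 0.0));
        INDArray p = sd.nn().softmax(logits).eval();
        System.out.println(p);
    }
}
```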
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SegmentMax.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentSum.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentProd.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentMean.doDiff(List<SDVariable> gradients) |
List<SDVariable> |
SegmentMin.doDiff(List<SDVariable> gradients) |
Constructor and Description |
---|
SegmentMax(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentMean(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentMin(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentProd(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
SegmentSum(SameDiff sameDiff,
SDVariable data,
SDVariable segmentIds) |
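Segment reductions take a data variable and a vector of sorted segment ids of the same length, and reduce each run of equal ids. A sketch assuming an `sd.math().segmentSum(data, segmentIds)` helper wrapping the `SegmentSum` op listed above:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SegmentSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable data = sd.var("data", Nd4j.createFromArray(3.0, 1.0, 7.0, 2.0));
        // segment 0 covers the first two entries, segment 1 the last two
        SDVariable ids = sd.constant(Nd4j.createFromArray(0, 0, 1, 1));
        INDArray sums = sd.math().segmentSum(data, ids).eval();
        System.out.println(sums);
    }
}
```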
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
Cast.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
Cast(SameDiff sameDiff,
SDVariable arg,
DataType dst) |
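The `Cast` op corresponds to `SDVariable.castTo(DataType)`. A minimal sketch (floating-point to integer casts truncate toward zero):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class CastSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.createFromArray(1.7, 2.9));
        // Cast to INT truncates: 1.7 -> 1, 2.9 -> 2
        INDArray asInt = x.castTo(DataType.INT).eval();
        System.out.println(asInt);
    }
}
```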
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
RSqrt.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
Sqrt.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
RSqrt(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Sqrt(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
RelativeError.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
Set.doDiff(List<SDVariable> i_v) |
List<SDVariable> |
BinaryRelativeError.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
BinaryMinimalRelativeError.doDiff(List<SDVariable> i_v1) |
Constructor and Description |
---|
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BinaryMinimalRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
BinaryRelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RelativeError(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
RelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RelativeError(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
Set(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
Constructor and Description |
---|
AddOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
Axpy(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace,
double p) |
Axpy(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace,
double p) |
Axpy(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
double p) |
CopyOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
CopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
CopyOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
DivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
FloorDivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
FloorDivOp(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
FloorModOp(SameDiff sameDiff,
SDVariable x,
SDVariable y) |
FModOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
FModOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
FModOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
MergeAddOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
ModOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
MulOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
PowPairwise(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
PowPairwise(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RDivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
RealDivOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v,
boolean inPlace) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RemainderOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
RSubOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
RSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
RSubOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
SquaredDifferenceOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
SubOp(SameDiff sameDiff,
SDVariable[] args,
boolean inPlace) |
TruncateDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2) |
TruncateDivOp(SameDiff sameDiff,
SDVariable i_v1,
SDVariable i_v2,
boolean inPlace) |
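The pairwise arithmetic constructors above (AddOp, SubOp, SquaredDifferenceOp, and so on) back the everyday operators on `SDVariable`. A sketch assuming `SDVariable.add(SDVariable)` and `sd.math().squaredDifference(SDVariable, SDVariable)`:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class PairwiseSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable x = sd.var("x", Nd4j.createFromArray(5.0, 8.0));
        SDVariable y = sd.var("y", Nd4j.createFromArray(2.0, 3.0));
        INDArray sum = x.add(y).eval();                           // AddOp: element-wise x + y
        INDArray sqd = sd.math().squaredDifference(x, y).eval();  // SquaredDifferenceOp: (x - y)^2
        System.out.println(sum);
        System.out.println(sqd);
    }
}
```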
Modifier and Type | Method and Description |
---|---|
List<SDVariable> |
SquaredDifferenceBpOp.doDiff(List<SDVariable> i_v1) |
List<SDVariable> |
BaseArithmeticBackpropOp.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
AddBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
BaseArithmeticBackpropOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
DivBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
FloorDivBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
FloorModBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
ModBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
MulBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
RDivBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
RSubBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
SquaredDifferenceBpOp(SameDiff sameDiff, SDVariable[] args) |
SubBpOp(SameDiff sameDiff, SDVariable x, SDVariable y, SDVariable eps) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> | Or.doDiff(List<SDVariable> i_v) |
List<SDVariable> | And.doDiff(List<SDVariable> f1) |
List<SDVariable> | Not.doDiff(List<SDVariable> f1) |
List<SDVariable> | Xor.doDiff(List<SDVariable> f1) |
Constructor and Description |
---|
And(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
And(SameDiff sameDiff, SDVariable i_v, boolean inPlace, double comparable) |
And(SameDiff sameDiff, SDVariable ix, SDVariable iy) |
Not(SameDiff sameDiff, SDVariable i_v) |
Or(SameDiff sameDiff, SDVariable i_v, boolean inPlace, double comparable) |
Or(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2) |
Or(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2, boolean inPlace) |
Xor(SameDiff sameDiff, SDVariable i_v, boolean inPlace, double comparable) |
Xor(SameDiff sameDiff, SDVariable ix, SDVariable iy) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> | Identity.doDiff(List<SDVariable> i_v) |
List<SDVariable> | AMin.doDiff(List<SDVariable> f1) |
List<SDVariable> | Floor.doDiff(List<SDVariable> i_v) |
List<SDVariable> | Negative.doDiff(List<SDVariable> i_v) |
List<SDVariable> | Ceil.doDiff(List<SDVariable> f1) |
List<SDVariable> | Min.doDiff(List<SDVariable> f1) |
List<SDVariable> | AMax.doDiff(List<SDVariable> f1) |
List<SDVariable> | TimesOneMinus.doDiff(List<SDVariable> f1) |
List<SDVariable> | Reciprocal.doDiff(List<SDVariable> i_v1) |
List<SDVariable> | Cube.doDiff(List<SDVariable> f1) |
List<SDVariable> | Sign.doDiff(List<SDVariable> i_v) |
List<SDVariable> | Max.doDiff(List<SDVariable> f1) |
List<SDVariable> | Round.doDiff(List<SDVariable> f1) |
List<SDVariable> | OneMinus.doDiff(List<SDVariable> f1) |
List<SDVariable> | Square.doDiff(List<SDVariable> i_v) |
List<SDVariable> | Abs.doDiff(List<SDVariable> i_v) |
Constructor and Description |
---|
Abs(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
AMax(SameDiff sameDiff, SDVariable i_v, SDVariable i_v2) |
AMin(SameDiff sameDiff, SDVariable i_v, SDVariable i_v2) |
Ceil(SameDiff sameDiff, SDVariable i_v) |
Ceil(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Cube(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Floor(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Identity(SameDiff sd, SDVariable input) |
Max(SameDiff sameDiff, SDVariable i_v, SDVariable i_v2) |
Min(SameDiff sameDiff, SDVariable i_v, SDVariable i_v2) |
Negative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
OneMinus(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Reciprocal(SameDiff sameDiff, SDVariable in, boolean inPlace) |
Round(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Sign(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Square(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
TimesOneMinus(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> | UnsortedSegmentSqrtN.doDiff(List<SDVariable> gradients) |
List<SDVariable> | UnsortedSegmentMean.doDiff(List<SDVariable> gradients) |
List<SDVariable> | UnsortedSegmentSum.doDiff(List<SDVariable> gradients) |
List<SDVariable> | UnsortedSegmentProd.doDiff(List<SDVariable> gradients) |
List<SDVariable> | UnsortedSegmentMin.doDiff(List<SDVariable> gradients) |
List<SDVariable> | UnsortedSegmentMax.doDiff(List<SDVariable> gradients) |
Constructor and Description |
---|
UnsortedSegmentMax(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
UnsortedSegmentMean(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
UnsortedSegmentMin(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
UnsortedSegmentProd(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
UnsortedSegmentSqrtN(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
UnsortedSegmentSum(SameDiff sameDiff, SDVariable data, SDVariable segmentIds, int numSegments) |
Constructor and Description |
---|
ACos(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ACosh(SameDiff sameDiff, SDVariable i_v) |
ACosh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ASin(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ASinh(SameDiff sameDiff, SDVariable i_v) |
ASinh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ATan(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ATanh(SameDiff sameDiff, SDVariable i_v) |
ATanh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Cos(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Cosh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
ELU(SameDiff sameDiff, SDVariable i_v) |
Erf(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Erfc(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Exp(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Expm1(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
GELU(SameDiff sameDiff, SDVariable i_v, boolean inPlace, boolean precise) |
GELUDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
HardSigmoid(SameDiff sameDiff, SDVariable in, boolean inPlace) |
HardTanh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Log(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Log1p(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
LogSigmoid(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Mish(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
MishDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
MishDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2) |
MishDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2, boolean inPlace) |
PreciseGELU(SameDiff sameDiff, SDVariable i_v, boolean inPlace, boolean precise) |
PreciseGELUDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace, boolean precise) |
RationalTanh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
RectifiedTanh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Rint(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SELU(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SetRange(SameDiff sameDiff, SDVariable i_v, boolean inPlace, double min, double max) |
Sigmoid(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SigmoidDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) Deprecated. |
SigmoidDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2) Deprecated. |
SigmoidDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2, boolean inPlace) Deprecated. |
Sin(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Sinh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SoftPlus(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SoftSign(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Stabilize(SameDiff sameDiff, SDVariable i_v, boolean inPlace, double realMin, double cutOff, double k) |
Swish(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SwishDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
SwishDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2) |
SwishDerivative(SameDiff sameDiff, SDVariable i_v1, SDVariable i_v2, boolean inPlace) |
Tan(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
Tanh(SameDiff sameDiff, SDVariable i_v, boolean inPlace) |
TanhDerivative(SameDiff sameDiff, SDVariable i_v, boolean inPlace) Deprecated. |
Constructor and Description |
---|
BaseRandomOp(SameDiff sameDiff, SDVariable i_v) |
Constructor and Description |
---|
RandomStandardNormal(SameDiff sameDiff, SDVariable[] args) |
Modifier and Type | Method and Description |
---|---|
List<SDVariable> | RandomBernoulli.doDiff(List<SDVariable> gradients) |
List<SDVariable> | RandomNormal.doDiff(List<SDVariable> grad) |
List<SDVariable> | DistributionUniform.doDiff(List<SDVariable> gradients) |
List<SDVariable> | RandomExponential.doDiff(List<SDVariable> gradients) |
Constructor and Description |
---|
DistributionUniform(SameDiff sd, SDVariable shape, double min, double max) |
DistributionUniform(SameDiff sd, SDVariable shape, double min, double max, DataType dataType) |
RandomBernoulli(SameDiff sd, SDVariable shape, double p) |
RandomExponential(SameDiff sd, SDVariable shape, double lambda) |
RandomNormal(SameDiff sameDiff, SDVariable shape, double mean, double stdev) |
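The random-op constructors above take the output shape as an `SDVariable` plus scalar distribution parameters. An untested sketch using the `RandomNormal(SameDiff, SDVariable shape, double mean, double stdev)` signature listed above; the package path and the `outputVariable()` accessor are assumptions, and the `sd.constant(...)` call used to build the shape variable is the conventional way to create such a constant in the SameDiff API:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ops.random.custom.RandomNormal;
import org.nd4j.linalg.factory.Nd4j;

public class RandomOpSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Shape is passed as a variable holding [2, 3], not as a long[] array.
        SDVariable shape = sd.constant("shape", Nd4j.createFromArray(2L, 3L));

        // Draw samples from N(mean = 0.0, stdev = 1.0) with the listed constructor.
        RandomNormal op = new RandomNormal(sd, shape, 0.0, 1.0);
        SDVariable samples = op.outputVariable(); // assumed accessor for the op's output
    }
}
```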
Constructor and Description |
---|
DropOut(SameDiff sameDiff, SDVariable input, double p) |
DropOutInverted(SameDiff sameDiff, SDVariable input, double p) |
Range(SameDiff sd, SDVariable from, SDVariable to, SDVariable step, DataType dataType) |
Copyright © 2019. All rights reserved.