public class BiasAddGrad extends DynamicCustomOp

Nested classes inherited from class DynamicCustomOp: DynamicCustomOp.DynamicCustomOpsBuilder
Modifier and Type | Field and Description
---|---
protected boolean | nchw

Fields inherited from class DynamicCustomOp: axis, bArguments, dArguments, iArguments, inplaceCall, inputArguments, outputArguments, outputVariables, tArguments

Fields inherited from class DifferentialFunction: dimensions, extraArgs, inPlace, ownName, ownNameSetWithDefault, sameDiff, scalarValue
Constructor and Description
---
BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient)
BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, boolean nchw)
BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, INDArray output)
BiasAddGrad(SameDiff sameDiff, SDVariable input, SDVariable bias, SDVariable gradient, boolean nchw)
Modifier and Type | Method and Description
---|---
List<DataType> | calculateOutputDataTypes(List<DataType> inputDataTypes): Calculate the data types for the output arrays.
List<SDVariable> | doDiff(List<SDVariable> f1): The actual implementation for automatic differentiation.
String | onnxName(): The opName of this function in ONNX.
String | opName(): Returns the op name as a string.
int | opNum(): The number of the op (mainly for old legacy XYZ ops like Op).
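The summary above lists calculateOutputDataTypes, which infers output types without the input arrays being populated. As a minimal plain-Java sketch of how such shape-free inference can look for a bias-add gradient op: the 3-inputs/2-outputs contract (input, bias, upstream gradient in; input gradient and bias gradient out) is an assumption for illustration, and DataType here is a stand-in enum rather than ND4J's.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of shape-free datatype inference for a bias-add
// gradient op. The real ND4J implementation may differ; DataType is a
// stand-in enum and the 3-in/2-out contract is an assumption.
public class DataTypeInferenceSketch {
    enum DataType { FLOAT, DOUBLE, HALF }

    // Inputs: [input, bias, upstream gradient]; outputs: [dL/dInput, dL/dBias].
    static List<DataType> calculateOutputDataTypes(List<DataType> in) {
        if (in == null || in.size() != 3)
            throw new IllegalArgumentException("Expected 3 input datatypes, got " + in);
        // Each output gradient takes the datatype of the array it matches:
        // no shapes or values are needed, only the input type list.
        return Arrays.asList(in.get(0), in.get(1));
    }

    public static void main(String[] args) {
        System.out.println(calculateOutputDataTypes(
                Arrays.asList(DataType.FLOAT, DataType.HALF, DataType.FLOAT)));
        // prints [FLOAT, HALF]
    }
}
```

Because only a type list is consumed, this kind of method can run over a whole graph before any arrays exist, which is what enables the greedy datatype inference described in the method detail below.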
Methods inherited from class DynamicCustomOp: addBArgument, addDArgument, addIArgument, addIArgument, addInputArgument, addOutputArgument, addTArgument, assertValidForExecution, bArgs, builder, calculateOutputShape, calculateOutputShape, clearArrays, dArgs, getBArgument, getDescriptor, getIArgument, getInputArgument, getOutputArgument, getTArgument, iArgs, initFromOnnx, initFromTensorFlow, inputArguments, numBArguments, numDArguments, numIArguments, numInputArguments, numOutputArguments, numTArguments, opHash, opType, outputArguments, outputVariables, outputVariables, removeIArgument, removeInputArgument, removeOutputArgument, removeTArgument, setInputArgument, setInputArguments, setOutputArgument, tArgs, tensorflowName, toString, wrapFilterNull, wrapOrNull, wrapOrNull

Methods inherited from class DifferentialFunction: arg, arg, argNames, args, attributeAdaptersForFunction, configFieldName, diff, dup, equals, getNumOutputs, getValue, hashCode, isConfigProperties, larg, mappingsForFunction, onnxNames, outputs, outputVariable, outputVariablesNames, propertiesForFunction, rarg, replaceArg, setInstanceId, setPropertiesForFunction, setValueFor, tensorflowNames

Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface CustomOp: isInplaceCall
public BiasAddGrad(SameDiff sameDiff, SDVariable input, SDVariable bias, SDVariable gradient, boolean nchw)

public BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, INDArray output)

public BiasAddGrad(@NonNull INDArray input, @NonNull INDArray bias, @NonNull INDArray gradient, boolean nchw)
public int opNum()

Description copied from class: DifferentialFunction
The number of the op (mainly for old legacy XYZ ops like Op).

Overrides:
opNum in class DynamicCustomOp
public String opName()

Description copied from class: DynamicCustomOp
Returns the op name as a string.

Specified by:
opName in interface CustomOp
Overrides:
opName in class DynamicCustomOp
public List<SDVariable> doDiff(List<SDVariable> f1)

Description copied from class: DifferentialFunction
The actual implementation for automatic differentiation.

Overrides:
doDiff in class DynamicCustomOp
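doDiff wires this op into automatic differentiation; the underlying math of the bias-add backward pass itself is simple. A plain-Java sketch (illustrative only, not the op's actual implementation), assuming the NCHW layout indicated by the nchw flag, where the channel axis is dimension 1: the gradient with respect to the input is the upstream gradient unchanged, and the gradient with respect to the bias is the upstream gradient summed over every dimension except the channel dimension.

```java
// Illustrative sketch (not the op's actual implementation) of the
// backward pass of biasAdd for an NCHW tensor: dL/dBias is the upstream
// gradient reduced over batch, height, and width, leaving one value per
// channel; dL/dInput is simply the upstream gradient passed through.
public class BiasAddGradSketch {
    // grad has shape [n][c][h][w]; returns the per-channel bias gradient [c].
    static double[] biasGrad(double[][][][] grad) {
        int n = grad.length, c = grad[0].length,
            h = grad[0][0].length, w = grad[0][0][0].length;
        double[] out = new double[c];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < c; j++)
                for (int k = 0; k < h; k++)
                    for (int l = 0; l < w; l++)
                        out[j] += grad[i][j][k][l];
        return out;
    }

    public static void main(String[] args) {
        double[][][][] grad = new double[2][3][2][2]; // NCHW: 2 x 3 x 2 x 2
        for (double[][][] sample : grad)
            for (int j = 0; j < 3; j++)
                for (double[] row : sample[j])
                    java.util.Arrays.fill(row, j + 1.0); // channel j holds j+1
        System.out.println(java.util.Arrays.toString(biasGrad(grad)));
        // each channel sums (j+1) over 2*2*2 = 8 elements: [8.0, 16.0, 24.0]
    }
}
```

With an NHWC layout the reduction would instead keep the last dimension, which is presumably why the constructors above accept the nchw flag.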
public String onnxName()

Description copied from class: DifferentialFunction
The opName of this function in ONNX.

Overrides:
onnxName in class DynamicCustomOp
public List<DataType> calculateOutputDataTypes(List<DataType> inputDataTypes)

Description copied from class: DifferentialFunction
Calculate the data types for the output arrays. Unlike DifferentialFunction.calculateOutputShape(), this method does not require the input arrays to be populated. This is important, as it allows greedy datatype inference for the entire net even when the arrays are not available.

Overrides:
calculateOutputDataTypes in class DifferentialFunction

Parameters:
inputDataTypes - The data types of the inputs

Copyright © 2021. All rights reserved.