Padding mode.
$OpDocNNAddBias
$OpDocNNAddBias
Value tensor.
Bias tensor that must be one-dimensional (i.e., it must have rank 1).
Data format of the input and output tensors. With the default format `NWCFormat`, the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be `NCWFormat`, and the bias tensor would be added to the third-to-last dimension.
Name for the created op.
Created op output.
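The bias-add semantics above can be sketched in plain Python (an illustrative sketch of the op's math in NWC format, not the library's Scala implementation; all names are hypothetical):

```python
def add_bias(value, bias):
    # value: nested list of shape [batch, width, channels] (NWC format);
    # bias: rank-1 list of length `channels`, added to the last dimension.
    return [[[v + b for v, b in zip(row, bias)] for row in sample]
            for sample in value]

value = [[[1.0, 2.0], [3.0, 4.0]]]   # shape [1, 2, 2]
bias = [10.0, 20.0]
print(add_bias(value, bias))          # [[[11.0, 22.0], [13.0, 24.0]]]
```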
$OpDocNNBatchNormalization
$OpDocNNBatchNormalization
Input tensor of arbitrary dimensionality.
Mean tensor.
Variance tensor.
Optional offset tensor, often denoted `beta` in equations.
Optional scale tensor, often denoted `gamma` in equations.
Small floating point number added to the variance to avoid division by zero.
Name for the created ops.
Batch-normalized tensor `x`.
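Element-wise, batch normalization computes `gamma * (x - mean) / sqrt(variance + epsilon) + beta`. A minimal plain-Python sketch over a rank-1 input (illustrative only; names are hypothetical):

```python
import math

def batch_norm(x, mean, variance, offset=None, scale=None, epsilon=0.001):
    # Per element: y = gamma * (x - mean) / sqrt(variance + epsilon) + beta,
    # with gamma/beta defaulting to 1/0 when scale/offset are omitted.
    out = []
    for i, v in enumerate(x):
        g = scale[i] if scale is not None else 1.0
        b = offset[i] if offset is not None else 0.0
        out.append(g * (v - mean[i]) / math.sqrt(variance[i] + epsilon) + b)
    return out

# Zero mean and unit variance leave the input unchanged (epsilon = 0).
print(batch_norm([1.0, 2.0], mean=[0.0, 0.0], variance=[1.0, 1.0], epsilon=0.0))
```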
$OpDocNNConv2D
$OpDocNNConv2D
4-D tensor whose dimension order is interpreted according to the value of `dataFormat`.
4-D tensor with shape `[filterHeight, filterWidth, inChannels, outChannels]`.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
The dilation factor for each dimension of `input`. If set to `k > 1`, there will be `k - 1` skipped cells between each filter element on that dimension. The dimension order is determined by the value of `dataFormat`. Dilations in the batch and depth dimensions must be set to `1`.
Boolean value indicating whether to use cuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
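The sliding-window computation can be sketched for a single-channel input with "valid" padding in plain Python (an illustrative sketch, not the library's implementation; names are hypothetical):

```python
def conv2d_valid(inp, filt, stride_h=1, stride_w=1):
    # inp: [height][width] single-channel image; filt: [fh][fw] kernel.
    # "Valid" padding: output height is (h - fh)//stride_h + 1, similarly
    # for the width; each output cell is the window/kernel dot product.
    h, w = len(inp), len(inp[0])
    fh, fw = len(filt), len(filt[0])
    out = []
    for i in range(0, h - fh + 1, stride_h):
        row = []
        for j in range(0, w - fw + 1, stride_w):
            row.append(sum(inp[i + di][j + dj] * filt[di][dj]
                           for di in range(fh) for dj in range(fw)))
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d_valid(image, kernel))   # [[6, 8], [12, 14]]
```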
$OpDocNNConv2DBackpropFilter
$OpDocNNConv2DBackpropFilter
4-D tensor whose dimension order is interpreted according to the value of `dataFormat`.
Integer vector representing the shape of the original filter, which is a 4-D tensor.
4-D tensor containing the gradients w.r.t. the output of the convolution, whose shape depends on the value of `dataFormat`.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
The dilation factor for each dimension of `input`. If set to `k > 1`, there will be `k - 1` skipped cells between each filter element on that dimension. The dimension order is determined by the value of `dataFormat`. Dilations in the batch and depth dimensions must be set to `1`.
Boolean value indicating whether to use cuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
$OpDocNNConv2DBackpropInput
$OpDocNNConv2DBackpropInput
Integer vector representing the shape of the original input, which is a 4-D tensor.
4-D tensor with shape `[filterHeight, filterWidth, inChannels, outChannels]`.
4-D tensor containing the gradients w.r.t. the output of the convolution, whose shape depends on the value of `dataFormat`.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
The dilation factor for each dimension of `input`. If set to `k > 1`, there will be `k - 1` skipped cells between each filter element on that dimension. The dimension order is determined by the value of `dataFormat`. Dilations in the batch and depth dimensions must be set to `1`.
Boolean value indicating whether to use cuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
$OpDocNNCrelu
$OpDocNNCrelu
Input tensor.
Axis along which the output values are concatenated.
Name for the created op.
Created op output.
$OpDocNNDropout
$OpDocNNDropout
Input tensor.
Probability (i.e., a number in the interval `(0, 1]`) that each element is kept.
If `true`, the outputs will be divided by the keep probability.
INT32 rank-1 tensor representing the shape for the randomly generated keep/drop flags.
Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.
Name for the created op.
Created op output that has the same shape as `input`.
$OpDocNNDropout
$OpDocNNDropout
Input tensor.
Probability (i.e., a scalar in the interval `(0, 1]`) that each element is kept.
If `true`, the outputs will be divided by the keep probability.
INT32 rank-1 tensor representing the shape for the randomly generated keep/drop flags.
Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.
Name for the created op.
Created op output that has the same shape as `input`.
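The dropout semantics above, including the optional scaling by the keep probability, can be sketched in plain Python (illustrative only; names are hypothetical):

```python
import random

def dropout(x, keep_prob, scale_output=True, seed=None):
    # Keeps each element with probability keep_prob; when scale_output is
    # true, kept elements are divided by keep_prob so that the expected
    # value of each output element matches the corresponding input.
    rng = random.Random(seed)
    scale = 1.0 / keep_prob if scale_output else 1.0
    return [v * scale if rng.random() < keep_prob else 0.0 for v in x]

out = dropout([1.0] * 8, keep_prob=0.5, seed=0)
print(out)  # roughly half the entries are 2.0, the rest 0.0
```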
$OpDocNNElu
$OpDocNNElu
Name for the created op.
Created op output.
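The ELU activation is the identity for positive inputs and `exp(x) - 1` for negative ones; a plain-Python sketch (illustrative only):

```python
import math

def elu(x):
    # ELU: identity for positive inputs, exp(x) - 1 for non-positive ones.
    return [v if v > 0 else math.exp(v) - 1.0 for v in x]

print(elu([-1.0, 0.0, 2.0]))  # [-0.632..., 0.0, 2.0]
```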
$OpDocNNFusedBatchNormalization
$OpDocNNFusedBatchNormalization
Input tensor with 4 dimensions.
Vector used for scaling.
Vector used as an added offset.
Optional population mean vector, used for inference only.
Optional population variance vector, used for inference only.
Small floating point number added to the variance to avoid division by zero.
Data format for `x`.
Boolean value indicating whether the operation is used for training or inference.
Name for the created ops.
Batch-normalized tensor `x`, along with a batch mean vector and a batch variance vector.
$OpDocNNInTopK
$OpDocNNL2Loss
$OpDocNNL2Normalize
$OpDocNNL2Normalize
Input tensor.
Tensor containing the axes along which to normalize.
Lower bound value for the norm. The created op will use `sqrt(epsilon)` as the divisor, if `norm < sqrt(epsilon)`.
Name for the created op.
Created op output.
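The epsilon-guarded normalization can be sketched for a rank-1 input in plain Python (illustrative only; names are hypothetical):

```python
import math

def l2_normalize(x, epsilon=1e-12):
    # Divides by the L2 norm, falling back to sqrt(epsilon) when the
    # squared norm is smaller than epsilon (avoids division by zero).
    norm = math.sqrt(max(sum(v * v for v in x), epsilon))
    return [v / norm for v in x]

print(l2_normalize([3.0, 4.0]))  # [0.6, 0.8]
```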
$OpDocNNLinear
$OpDocNNLinear
Input tensor.
Weights tensor.
Bias tensor.
Name for the created op.
Created op output.
$OpDocNNLocalResponseNormalization
$OpDocNNLocalResponseNormalization
Input tensor with data type `FLOAT16`, `BFLOAT16`, or `FLOAT32`.
Half-width of the 1-D normalization window.
Offset (usually positive to avoid dividing by 0).
Scale factor (usually positive).
Exponent.
Name for the created op.
Created op output.
$OpDocNNLogPoissonLoss
$OpDocNNLogPoissonLoss
Tensor containing the log-predictions.
Tensor with the same shape as `logPredictions`, containing the target values.
If `true`, Stirling's approximation is used to approximate the full loss. Defaults to `false`, meaning that the constant term is ignored.
Name for the created op.
Created op output.
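For a log-prediction `c` and a target `z`, the log Poisson loss is `exp(c) - z * c`, optionally plus a Stirling term approximating `log(z!)`. A plain-Python sketch (illustrative only; the exact handling of the Stirling term for small targets is an assumption):

```python
import math

def log_poisson_loss(log_predictions, targets, compute_full_loss=False):
    # Per element: exp(c) - z * c. With compute_full_loss, adds Stirling's
    # approximation z*log(z) - z + 0.5*log(2*pi*z) (applied here only for
    # z > 1, an assumption for illustration).
    out = []
    for c, z in zip(log_predictions, targets):
        loss = math.exp(c) - z * c
        if compute_full_loss and z > 1:
            loss += z * math.log(z) - z + 0.5 * math.log(2 * math.pi * z)
        out.append(loss)
    return out

print(log_poisson_loss([0.0], [1.0]))  # [1.0]  (exp(0) - 1*0)
```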
$OpDocNNLogSoftmax
$OpDocNNMaxPool
$OpDocNNMaxPool
4-D tensor whose dimension order is interpreted according to the value of `dataFormat`.
The size of the pooling window for each dimension of the input tensor.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
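Max pooling takes the maximum over each sliding window. A plain-Python sketch for a single-channel input with "valid" padding (illustrative only; names are hypothetical):

```python
def max_pool(inp, window=2, stride=2):
    # 2-D max pooling with "valid" padding over a single-channel input:
    # each output cell is the maximum of a window x window patch.
    h, w = len(inp), len(inp[0])
    out = []
    for i in range(0, h - window + 1, stride):
        row = []
        for j in range(0, w - window + 1, stride):
            row.append(max(inp[i + di][j + dj]
                           for di in range(window) for dj in range(window)))
        out.append(row)
    return out

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(max_pool(image))  # [[6, 8], [14, 16]]
```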
$OpDocNNMaxPoolGrad
$OpDocNNMaxPoolGrad
Original input tensor.
Original output tensor.
4-D tensor containing the gradients w.r.t. the output of the max pooling, whose shape depends on the value of `dataFormat`.
The size of the pooling window for each dimension of the input tensor.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
$OpDocNNMaxPoolGradGrad
$OpDocNNMaxPoolGradGrad
Original input tensor.
Original output tensor.
4-D tensor containing the gradients w.r.t. the output of the max pooling, whose shape depends on the value of `dataFormat`.
The size of the pooling window for each dimension of the input tensor.
Stride of the sliding window along the second dimension of `input`.
Stride of the sliding window along the third dimension of `input`.
Padding mode to use.
Format of the input and output data.
Name for the created op.
Created op output, which is a 4-D tensor whose dimension order depends on the value of `dataFormat`.
$OpDocNNRelu
$OpDocNNRelu
Input tensor.
Slope of the negative section, also known as the leakage parameter. If other than `0.0f`, the negative part will be equal to `alpha * x` instead of `0`. Defaults to `0`.
Name for the created op.
Created op output.
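The (leaky) ReLU semantics above can be sketched in plain Python (illustrative only; names are hypothetical):

```python
def relu(x, alpha=0.0):
    # max(x, 0); with alpha != 0 the negative part becomes alpha * x
    # instead of 0 (leaky ReLU).
    return [v if v > 0 else alpha * v for v in x]

print(relu([-2.0, 0.0, 3.0], alpha=0.1))  # [-0.2, 0.0, 3.0]
```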
$OpDocNNRelu6
$OpDocNNRelu6
Name for the created op.
Created op output.
$OpDocNNSelu
$OpDocNNSelu
Name for the created op.
Created op output.
$OpDocNNSequenceLoss
$OpDocNNSequenceLoss
Tensor of shape `[batchSize, sequenceLength, numClasses]` containing unscaled log probabilities.
Tensor of shape `[batchSize, sequenceLength]` containing the true label at each time step.
Optionally, a tensor of shape `[batchSize, sequenceLength]` containing weights to use for each prediction. When using `weights` as masking, set all valid time steps to `1` and all padded time steps to `0` (e.g., a mask returned by `tf.sequenceMask`).
If `true`, the loss is summed across the sequence dimension and divided by the total label weight across all time steps.
If `true`, the loss is summed across the batch dimension and divided by the batch size.
Loss function to use that takes the predicted logits and the true labels as inputs and returns the loss value. Defaults to `sparseSoftmaxCrossEntropy`.
Name prefix to use for the created ops.
Created op output.
`InvalidShapeException`
If any of `logits`, `labels`, or `weights` has an invalid shape.
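The weighting and per-sequence averaging can be sketched over pre-computed per-step losses in plain Python (illustrative only; names are hypothetical, and the per-step loss computation itself is omitted):

```python
def sequence_loss(step_losses, weights, average_across_timesteps=True):
    # step_losses, weights: [batch][time] per-step losses and weights.
    # When averaging across time steps, each sequence's weighted loss is
    # divided by its total label weight, so padded steps (weight 0) are
    # excluded from both the sum and the normalizer.
    out = []
    for losses, ws in zip(step_losses, weights):
        total = sum(l * w for l, w in zip(losses, ws))
        if average_across_timesteps:
            total /= max(sum(ws), 1e-12)
        out.append(total)
    return out

# A mask of [1, 1, 0] ignores the padded third step.
print(sequence_loss([[2.0, 4.0, 9.0]], [[1.0, 1.0, 0.0]]))  # [3.0]
```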
$OpDocNNSigmoidCrossEntropy
$OpDocNNSigmoidCrossEntropy
Tensor of shape `[D0, D1, ..., Dr-1, numClasses]` and data type `FLOAT16`, `FLOAT32`, or `FLOAT64`, containing unscaled log probabilities.
Tensor of shape `[D0, D1, ..., Dr-1, numClasses]` and data type `FLOAT16`, `FLOAT32`, or `FLOAT64`, where each element must be in the interval `[0, 1]`.
Optionally, a coefficient to use for the positive examples.
Name for the created op.
Created op output, with rank one less than that of `logits` and the same data type as `logits`, containing the sigmoid cross entropy loss.
$OpDocNNSoftmax
$OpDocNNSoftmaxCrossEntropy
$OpDocNNSoftmaxCrossEntropy
Tensor of shape `[D0, D1, ..., Dr-1, numClasses]` and data type `FLOAT16`, `FLOAT32`, or `FLOAT64`, containing unscaled log probabilities.
Tensor of shape `[D0, D1, ..., Dr-1, numClasses]` and data type `FLOAT16`, `FLOAT32`, or `FLOAT64`, where each row must be a valid probability distribution.
The class axis, along which the softmax is computed. Defaults to `-1`, which is the last axis.
Name for the created op.
Created op output, with rank one less than that of `logits` and the same data type as `logits`, containing the softmax cross entropy loss.
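For one example, the loss is `-sum_i labels[i] * log(softmax(logits)[i])`. A plain-Python sketch using the numerically stable log-sum-exp trick (illustrative only; names are hypothetical):

```python
import math

def softmax_cross_entropy(logits, labels):
    # -sum_i labels[i] * log(softmax(logits)[i]) for a single example.
    # log(softmax(v)) = v - log_z, with log_z computed stably by
    # subtracting the maximum logit before exponentiating.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return -sum(l * (v - log_z) for l, v in zip(labels, logits))

# Uniform logits with a one-hot label give a loss of log(numClasses).
print(softmax_cross_entropy([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # 1.0986...
```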
$OpDocNNSoftplus
$OpDocNNSoftplus
Name for the created op.
Created op output.
$OpDocNNSoftsign
$OpDocNNSoftsign
Name for the created op.
Created op output.
$OpDocNNSparseSoftmaxCrossEntropy
$OpDocNNSparseSoftmaxCrossEntropy
Tensor of shape `[D0, D1, ..., Dr-1, numClasses]` (where `r` is the rank of `labels` and of the result) and data type `FLOAT16`, `FLOAT32`, or `FLOAT64`, containing unscaled log probabilities.
Tensor of shape `[D0, D1, ..., Dr-1]` (where `r` is the rank of `labels` and of the result) and data type `INT32` or `INT64`. Each entry in `labels` must be an index in `[0, numClasses)`. Other values will raise an exception when this op is run on a CPU, and return `NaN` values for the corresponding loss and gradient rows when this op is run on a GPU.
The class axis, along which the softmax is computed. Defaults to `-1`, which is the last axis.
Name for the created op.
Created op output, with the same shape as `labels` and the same data type as `logits`, containing the softmax cross entropy loss.
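The sparse variant is equivalent to the dense softmax cross entropy with a one-hot label at the given index, which simplifies to `log_z - logits[label]`. A plain-Python sketch (illustrative only; names are hypothetical):

```python
import math

def sparse_softmax_cross_entropy(logits, label):
    # Softmax cross entropy with an implicit one-hot label at index
    # `label`: -log(softmax(logits)[label]) = log_z - logits[label].
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[label]

print(sparse_softmax_cross_entropy([2.0, 1.0, 0.0], 0))
```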
$OpDocNNTopK
$OpDocNNTopK
Input tensor whose last axis has size at least `k`.
Scalar `INT32` tensor containing the number of top elements to look for along the last axis of `input`.
If `true`, the resulting `k` elements will be sorted by their values in descending order.
Name for the created op.
Tuple containing the created op outputs: (i) `values`: the `k` largest elements along each last-dimensional slice, and (ii) `indices`: the indices of `values` within the last axis of `input`.
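The values/indices pairing can be sketched for a 1-D input in plain Python (illustrative only; names are hypothetical, and tie-breaking simply follows the stability of Python's sort):

```python
def top_k(values, k, sorted_output=True):
    # Returns (values, indices) of the k largest entries of a 1-D input,
    # sorted by value in descending order when sorted_output is true.
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)[:k]
    if not sorted_output:
        order = sorted(order)  # keep the original index order instead
    return [values[i] for i in order], order

print(top_k([1.0, 4.0, 3.0, 2.0], k=2))  # ([4.0, 3.0], [1, 2])
```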