Object

org.platanios.tensorflow.api

tfi


object tfi extends API with API with API

Linear Supertypes
API, API, API, API, Random, NN, Math, Cast, Basic, API, AnyRef, Any

Type Members

  1. type AbortedException = jni.AbortedException

    Definition Classes
    API
  2. type AlreadyExistsException = jni.AlreadyExistsException

    Definition Classes
    API
  3. type CancelledException = jni.CancelledException

    Definition Classes
    API
  4. type CheckpointNotFoundException = core.exception.CheckpointNotFoundException

    Definition Classes
    API
  5. type DataLossException = jni.DataLossException

    Definition Classes
    API
  6. type DeadlineExceededException = jni.DeadlineExceededException

    Definition Classes
    API
  7. type DeviceSpecification = core.DeviceSpecification

    Definition Classes
    API
  8. type FailedPreconditionException = jni.FailedPreconditionException

    Definition Classes
    API
  9. type GraphMismatchException = core.exception.GraphMismatchException

    Definition Classes
    API
  10. type IllegalNameException = core.exception.IllegalNameException

    Definition Classes
    API
  11. type InternalException = jni.InternalException

    Definition Classes
    API
  12. type InvalidArgumentException = jni.InvalidArgumentException

    Definition Classes
    API
  13. type InvalidDataTypeException = core.exception.InvalidDataTypeException

    Definition Classes
    API
  14. type InvalidDeviceException = core.exception.InvalidDeviceException

    Definition Classes
    API
  15. type InvalidIndexerException = core.exception.InvalidIndexerException

    Definition Classes
    API
  16. type InvalidShapeException = core.exception.InvalidShapeException

    Definition Classes
    API
  17. type NotFoundException = jni.NotFoundException

    Definition Classes
    API
  18. type OpBuilderUsedException = core.exception.OpBuilderUsedException

    Definition Classes
    API
  19. type OutOfRangeException = jni.OutOfRangeException

    Definition Classes
    API
  20. type PermissionDeniedException = jni.PermissionDeniedException

    Definition Classes
    API
  21. type ResourceExhaustedException = jni.ResourceExhaustedException

    Definition Classes
    API
  22. type ShapeMismatchException = core.exception.ShapeMismatchException

    Definition Classes
    API
  23. type UnauthenticatedException = jni.UnauthenticatedException

    Definition Classes
    API
  24. type UnavailableException = jni.UnavailableException

    Definition Classes
    API
  25. type UnimplementedException = jni.UnimplementedException

    Definition Classes
    API
  26. type UnknownException = jni.UnknownException

    Definition Classes
    API

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val AbortedException: core.exception.AbortedException.type

    Definition Classes
    API
  5. val AlreadyExistsException: core.exception.AlreadyExistsException.type

    Definition Classes
    API
  6. val CancelledException: core.exception.CancelledException.type

    Definition Classes
    API
  7. val CheckpointNotFoundException: core.exception.CheckpointNotFoundException.type

    Definition Classes
    API
  8. val DataLossException: core.exception.DataLossException.type

    Definition Classes
    API
  9. val DeadlineExceededException: core.exception.DeadlineExceededException.type

    Definition Classes
    API
  10. val FailedPreconditionException: core.exception.FailedPreconditionException.type

    Definition Classes
    API
  11. val GraphMismatchException: core.exception.GraphMismatchException.type

    Definition Classes
    API
  12. val IllegalNameException: core.exception.IllegalNameException.type

    Definition Classes
    API
  13. val InternalException: core.exception.InternalException.type

    Definition Classes
    API
  14. val InvalidArgumentException: core.exception.InvalidArgumentException.type

    Definition Classes
    API
  15. val InvalidDataTypeException: core.exception.InvalidDataTypeException.type

    Definition Classes
    API
  16. val InvalidDeviceException: core.exception.InvalidDeviceException.type

    Definition Classes
    API
  17. val InvalidIndexerException: core.exception.InvalidIndexerException.type

    Definition Classes
    API
  18. val InvalidShapeException: core.exception.InvalidShapeException.type

    Definition Classes
    API
  19. val NotFoundException: core.exception.NotFoundException.type

    Definition Classes
    API
  20. val OpBuilderUsedException: core.exception.OpBuilderUsedException.type

    Definition Classes
    API
  21. val OutOfRangeException: core.exception.OutOfRangeException.type

    Definition Classes
    API
  22. val PermissionDeniedException: core.exception.PermissionDeniedException.type

    Definition Classes
    API
  23. val ResourceExhaustedException: core.exception.ResourceExhaustedException.type

    Definition Classes
    API
  24. val ShapeMismatchException: core.exception.ShapeMismatchException.type

    Definition Classes
    API
  25. val Timeline: core.client.Timeline.type

    Definition Classes
    API
  26. val UnauthenticatedException: core.exception.UnauthenticatedException.type

    Definition Classes
    API
  27. val UnavailableException: core.exception.UnavailableException.type

    Definition Classes
    API
  28. val UnimplementedException: core.exception.UnimplementedException.type

    Definition Classes
    API
  29. val UnknownException: core.exception.UnknownException.type

    Definition Classes
    API
  30. def abs[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The abs op computes the absolute value of a tensor.

    Given a tensor x of real numbers, the op returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, the op computes y = |x|.

    Given a tensor x of complex numbers, the op returns a tensor of type FLOAT32 or FLOAT64 that is the magnitude value of each element in x. All elements in x must be complex numbers of the form a + bj. The magnitude is computed as \sqrt{a^2 + b^2}. For example:

    // Tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
    abs(x) ==> [5.25594902, 6.60492229]
    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
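    A minimal usage sketch (assuming the standard import org.platanios.tensorflow.api._ wildcard import and the library's variadic Tensor constructors; not taken from the generated documentation itself):

    import org.platanios.tensorflow.api._
    
    // Element-wise absolute value of a real-valued tensor.
    val x = Tensor(-2.25f, 3.25f, -1.0f)
    val y = tfi.abs(x)  // ==> [2.25, 3.25, 1.0]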
  31. def acos[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The acos op computes the inverse cosine of a tensor element-wise. I.e., y = \acos{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  32. def acosh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The acosh op computes the inverse hyperbolic cosine of a tensor element-wise. I.e., y = \acosh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  33. def add[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    The add op adds two tensors element-wise. I.e., z = x + y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
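    A short broadcasting sketch, under the same import assumptions as the abs sketch above:

    import org.platanios.tensorflow.api._
    
    // Adding a scalar tensor to a [2, 2] tensor broadcasts the scalar.
    val x = Tensor(Tensor(1, 2), Tensor(3, 4))
    val z = tfi.add(x, Tensor(10))  // ==> [[11, 12], [13, 14]]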
  34. def addBias[D <: MathDataType](value: tensors.Tensor[D], bias: tensors.Tensor[D], cNNDataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    The addBias op adds bias to value.

    The op is (mostly) a special case of add where bias is restricted to be one-dimensional (i.e., has rank 1). Broadcasting is supported and so value may have any number of dimensions. Unlike add, the type of bias is allowed to differ from that of value in the case where both types are quantized.

    value

    Value tensor.

    bias

    Bias tensor that must be one-dimensional (i.e., it must have rank 1).

    cNNDataFormat

    Data format of the input and output tensors. With the default format NWCFormat, the bias tensor will be added to the last dimension of the value tensor. Alternatively, the format could be NCWFormat, and the bias tensor would be added to the third-to-last dimension.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  35. def addN[D <: ReducibleDataType](inputs: Seq[tensors.Tensor[D]]): tensors.Tensor[D]

    The addN op adds all input tensors element-wise.

    inputs

    Input tensors.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  36. def all(input: tensors.Tensor[types.BOOLEAN], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[types.BOOLEAN]

    The all op computes the logical AND of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[true, true], [false, false]]
    all(x) ==> false
    all(x, 0) ==> [false, false]
    all(x, 1) ==> [true, false]
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  37. def any(input: tensors.Tensor[types.BOOLEAN], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[types.BOOLEAN]

    The any op computes the logical OR of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[true, true], [false, false]]
    any(x) ==> true
    any(x, 0) ==> [true, true]
    any(x, 1) ==> [true, false]
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
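    A sketch of the all and any reductions together (same import assumptions as the abs sketch above; axes are passed as INT32 tensors):

    import org.platanios.tensorflow.api._
    
    // 'x' is [[true, true], [false, false]].
    val x = Tensor(Tensor(true, true), Tensor(false, false))
    tfi.all(x)                    // ==> false
    tfi.any(x)                    // ==> true
    tfi.all(x, axes = Tensor(1))  // ==> [true, false]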
  38. def approximatelyEqual[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D], tolerance: Float = 0.00001f): tensors.Tensor[types.BOOLEAN]

    The approximatelyEqual op computes the truth value of abs(x - y) < tolerance element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    tolerance

    Comparison tolerance value.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  39. def argmax[D <: MathDataType, I <: Int32OrInt64, IR <: Int32OrInt64](input: tensors.Tensor[D], axes: tensors.Tensor[I], outputDataType: IR): tensors.Tensor[IR]

    The argmax op returns the indices with the largest value across axes of a tensor.

    Note that in case of ties the identity of the return value is not guaranteed.

    input

    Input tensor.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    outputDataType

    Data type for the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  40. def argmax[D <: MathDataType, I <: Int32OrInt64](input: tensors.Tensor[D], axes: tensors.Tensor[I]): tensors.Tensor[types.INT64]

    The argmax op returns the indices with the largest value across axes of a tensor.

    Note that in case of ties the identity of the return value is not guaranteed.

    input

    Input tensor.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    returns

    Result as a new tensor.

    Definition Classes
    Math
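    A sketch of argmax over a [2, 2] tensor (same import assumptions as above); this two-argument overload returns INT64 indices:

    import org.platanios.tensorflow.api._
    
    // Per-column index of the largest entry (reduction along axis 0).
    val x = Tensor(Tensor(1.0f, 5.0f), Tensor(3.0f, 2.0f))
    val i = tfi.argmax(x, axes = Tensor(0))  // ==> [1, 0]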
  41. def argmin[D <: MathDataType, I <: Int32OrInt64, IR <: Int32OrInt64](input: tensors.Tensor[D], axes: tensors.Tensor[I], outputDataType: IR): tensors.Tensor[IR]

    The argmin op returns the indices with the smallest value across axes of a tensor.

    Note that in case of ties the identity of the return value is not guaranteed.

    input

    Input tensor.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    outputDataType

    Data type for the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  42. def argmin[D <: MathDataType, I <: Int32OrInt64](input: tensors.Tensor[D], axes: tensors.Tensor[I]): tensors.Tensor[types.INT64]

    The argmin op returns the indices with the smallest value across axes of a tensor.

    Note that in case of ties the identity of the return value is not guaranteed.

    input

    Input tensor.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  43. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  44. def asin[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The asin op computes the inverse sine of a tensor element-wise. I.e., y = \asin{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  45. def asinh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The asinh op computes the inverse hyperbolic sine of a tensor element-wise. I.e., y = \asinh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  46. def atan[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The atan op computes the inverse tangent of a tensor element-wise. I.e., y = \atan{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  47. def atan2[D <: Float32OrFloat64](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    The atan2 op computes the inverse tangent of x / y element-wise, respecting signs of the arguments.

    The op computes the angle \theta \in [-\pi, \pi] such that y = r \cos(\theta) and x = r \sin(\theta), where r = \sqrt{x^2 + y^2}.

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  48. def atanh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The atanh op computes the inverse hyperbolic tangent of a tensor element-wise. I.e., y = \atanh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  49. def batchToSpace[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], blockSize: Int, crops: tensors.Tensor[I]): tensors.Tensor[D]

    The batchToSpace op rearranges (permutes) data from batches into blocks of spatial data, followed by cropping.

    More specifically, the op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions. This is the reverse functionality to that of spaceToBatch.

    input is a 4-dimensional input tensor with shape [batch * blockSize * blockSize, heightPad / blockSize, widthPad / blockSize, depth].

    crops has shape [2, 2]. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows: crops = [[cropTop, cropBottom], [cropLeft, cropRight]]. The shape of the output will be: [batch, heightPad - cropTop - cropBottom, widthPad - cropLeft - cropRight, depth].

    Some examples:

    // === Example #1 ===
    // input = [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]  (shape = [4, 1, 1, 1])
    // blockSize = 2
    // crops = [[0, 0], [0, 0]]
    batchToSpace(input, blockSize, crops) ==> [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    
    // === Example #2 ===
    // input = [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]  (shape = [4, 1, 1, 3])
    // blockSize = 2
    // crops = [[0, 0], [0, 0]]
    batchToSpace(input, blockSize, crops) ==>
      [[[[1, 2, 3], [4,   5,  6]],
        [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    
    // === Example #3 ===
    // input = [[[[1], [3]], [[ 9], [11]]],
    //          [[[2], [4]], [[10], [12]]],
    //          [[[5], [7]], [[13], [15]]],
    //          [[[6], [8]], [[14], [16]]]]  (shape = [4, 2, 2, 1])
    // blockSize = 2
    // crops = [[0, 0], [0, 0]]
    batchToSpace(input, blockSize, crops) ==>
      [[[[ 1],  [2],  [3],  [ 4]],
        [[ 5],  [6],  [7],  [ 8]],
        [[ 9], [10], [11],  [12]],
        [[13], [14], [15],  [16]]]]  (shape = [1, 4, 4, 1])
    
    // === Example #4 ===
    // input = [[[[0], [1], [3]]], [[[0], [ 9], [11]]],
    //          [[[0], [2], [4]]], [[[0], [10], [12]]],
    //          [[[0], [5], [7]]], [[[0], [13], [15]]],
    //          [[[0], [6], [8]]], [[[0], [14], [16]]]]  (shape = [8, 1, 3, 1])
    // blockSize = 2
    // crops = [[0, 0], [2, 0]]
    batchToSpace(input, blockSize, crops) ==>
      [[[[ 1],  [2],  [3],  [ 4]],
        [[ 5],  [6],  [7],  [ 8]]],
       [[[ 9], [10], [11],  [12]],
        [[13], [14], [15],  [16]]]]  (shape = [2, 2, 4, 1])
    input

    4-dimensional input tensor with shape [batch, height, width, depth].

    blockSize

    Block size which must be greater than 1.

    crops

    2-dimensional tensor containing non-negative integers with shape [2, 2].

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  50. def batchToSpaceND[D <: types.DataType, I1 <: Int32OrInt64, I2 <: Int32OrInt64](input: tensors.Tensor[D], blockShape: tensors.Tensor[I1], crops: tensors.Tensor[I2]): tensors.Tensor[D]

    The batchToSpaceND op reshapes the "batch" dimension 0 into M + 1 dimensions of shape blockShape + [batch] and interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse functionality to that of spaceToBatchND.

    input is an N-dimensional tensor with shape inputShape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.

    The op is equivalent to the following steps:

    1. Reshape input to reshaped of shape:

    [blockShape(0), ..., blockShape(M-1),
    batch / product(blockShape),
    inputShape(1), ..., inputShape(N-1)]

    2. Permute dimensions of reshaped to produce permuted of shape:

    [batch / product(blockShape),
    inputShape(1), blockShape(0),
    ...,
    inputShape(M), blockShape(M-1),
    inputShape(M+1),
    ...,
    inputShape(N-1)]

    3. Reshape permuted to produce reshapedPermuted of shape:

    [batch / product(blockShape),
    inputShape(1) * blockShape(0),
    ...,
    inputShape(M) * blockShape(M-1),
    inputShape(M+1),
    ...,
    inputShape(N-1)]

    4. Crop the start and end of dimensions [1, ..., M] of reshapedPermuted according to crops to produce the output of shape:

    [batch / product(blockShape),
    inputShape(1) * blockShape(0) - crops(0, 0) - crops(0, 1),
    ...,
    inputShape(M) * blockShape(M-1) - crops(M-1, 0) - crops(M-1, 1),
    inputShape(M+1),
    ...,
    inputShape(N-1)]

    Some examples:

    // === Example #1 ===
    // input = [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]  (shape = [4, 1, 1, 1])
    // blockShape = [2, 2]
    // crops = [[0, 0], [0, 0]]
    batchToSpaceND(input, blockShape, crops) ==> [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    
    // === Example #2 ===
    // input = [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]  (shape = [4, 1, 1, 3])
    // blockShape = [2, 2]
    // crops = [[0, 0], [0, 0]]
    batchToSpaceND(input, blockShape, crops) ==>
      [[[[1, 2, 3], [ 4,  5,  6]],
        [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    
    // === Example #3 ===
    // input = [[[[1], [3]], [[ 9], [11]]],
    //          [[[2], [4]], [[10], [12]]],
    //          [[[5], [7]], [[13], [15]]],
    //          [[[6], [8]], [[14], [16]]]]  (shape = [4, 2, 2, 1])
    // blockShape = [2, 2]
    // crops = [[0, 0], [0, 0]]
    batchToSpaceND(input, blockShape, crops) ==>
      [[[[ 1],  [2],  [3],  [ 4]],
        [[ 5],  [6],  [7],  [ 8]],
        [[ 9], [10], [11],  [12]],
        [[13], [14], [15],  [16]]]]  (shape = [1, 4, 4, 1])
    
    // === Example #4 ===
    // input = [[[[0], [1], [3]]], [[[0], [ 9], [11]]],
    //          [[[0], [2], [4]]], [[[0], [10], [12]]],
    //          [[[0], [5], [7]]], [[[0], [13], [15]]],
    //          [[[0], [6], [8]]], [[[0], [14], [16]]]]  (shape = [8, 1, 3, 1])
    // blockShape = [2, 2]
    // crops = [[0, 0], [2, 0]]
    batchToSpaceND(input, blockShape, crops) ==>
      [[[[ 1],  [2],  [3],  [ 4]],
        [[ 5],  [6],  [7],  [ 8]]],
       [[[ 9], [10], [11],  [12]],
        [[13], [14], [15],  [16]]]]  (shape = [2, 2, 4, 1])
    input

    N-dimensional tensor with shape inputShape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.

    blockShape

    One-dimensional tensor with shape [M] whose elements must all be >= 1.

    crops

    Two-dimensional tensor with shape [M, 2] whose elements must all be non-negative. crops(i) = [cropStart, cropEnd] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that cropStart(i) + cropEnd(i) <= blockShape(i) * inputShape(i + 1).

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  51. def binCount[D <: Int32OrInt64OrFloat32OrFloat64](input: tensors.Tensor[types.INT32], weights: tensors.Tensor[D] = null, minLength: tensors.Tensor[types.INT32] = null, maxLength: tensors.Tensor[types.INT32] = null, dataType: D = null): tensors.Tensor[D]

    The binCount op counts the number of occurrences of each value in an integer tensor.

    If minLength and maxLength are not provided, the op returns a vector with length max(input) + 1, if input is non-empty, and length 0 otherwise.

    If weights is not null, then index i of the output stores the sum of the value in weights at each index where the corresponding value in input is equal to i.

    input

    Tensor containing non-negative values.

    weights

    If not null, this tensor must have the same shape as input. For each value in input, the corresponding bin count will be incremented by the corresponding weight instead of 1.

    minLength

    If not null, this ensures the output has length at least minLength, padding with zeros at the end, if necessary.

    maxLength

    If not null, this skips values in input that are equal or greater than maxLength, ensuring that the output has length at most maxLength.

    dataType

    If weights is null, this determines the data type used for the output tensor (i.e., the tensor containing the bin counts).

    returns

    Result as a new tensor.

    Definition Classes
    Math
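    A sketch of binCount (same import assumptions as above; depending on type inference, the dataType argument may need to be passed explicitly when weights is null):

    import org.platanios.tensorflow.api._
    
    // Counts for the values 0 through max(input) = 4.
    val x = Tensor(1, 1, 2, 4)
    tfi.binCount(x)  // ==> [0, 2, 1, 0, 1]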
  52. def bitcast[D <: ReducibleDataType, DR <: types.DataType](input: tensors.Tensor[D], dataType: DR): tensors.Tensor[DR]

    The bitcast op bitcasts a tensor from one type to another without copying data. The returned tensor reinterprets the underlying buffer of input using the target data type; unlike cast, no value conversion is performed.

    input

    Input tensor.

    dataType

    Target data type.

    returns

    Result as a new tensor.

    Definition Classes
    Cast
  53. def booleanMask[D <: types.DataType](input: tensors.Tensor[D], mask: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[D]

    The booleanMask op applies the provided boolean mask to input.

    In general, 0 < mask.rank = K <= tensor.rank, and mask's shape must match the first K dimensions of tensor's shape. We then have: booleanMask(tensor, mask)(i, j1, ..., jd) = tensor(i1, ..., iK, j1, ..., jd), where (i1, ..., iK) is the ith true entry of mask (in row-major order).

    For example:

    // 1-D example
    tensor = [0, 1, 2, 3]
    mask = [True, False, True, False]
    booleanMask(tensor, mask) ==> [0, 2]
    
    // 2-D example
    tensor = [[1, 2], [3, 4], [5, 6]]
    mask = [True, False, True]
    booleanMask(tensor, mask) ==> [[1, 2], [5, 6]]
    input

    N-dimensional tensor.

    mask

    K-dimensional boolean tensor, where K <= N and K must be known statically.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
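    The 2-D example above, written as a sketch against this API (same import assumptions as above):

    import org.platanios.tensorflow.api._
    
    // Keep the rows of 'tensor' whose mask entry is true.
    val tensor = Tensor(Tensor(1, 2), Tensor(3, 4), Tensor(5, 6))
    val mask = Tensor(true, false, true)
    tfi.booleanMask(tensor, mask)  // ==> [[1, 2], [5, 6]]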
  54. def bucketize[D <: Int32OrInt64OrFloat32OrFloat64](input: tensors.Tensor[D], boundaries: Seq[Float]): tensors.Tensor[D]

    The bucketize op bucketizes a tensor based on the provided boundaries.

    For example:

    // 'input' is [[-5, 10000], [150, 10], [5, 100]]
    // 'boundaries' are [0, 10, 100]
    bucketize(input, boundaries) ==> [[0, 3], [3, 2], [1, 3]]
    input

    Numeric tensor to bucketize.

    boundaries

    Sorted sequence of numbers specifying the boundaries of the buckets.

    returns

    Result as a new tensor.

    Definition Classes
    Math
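    A sketch of bucketize with the boundaries from the example above (same import assumptions as above):

    import org.platanios.tensorflow.api._
    
    // Buckets are (-inf, 0), [0, 10), [10, 100), and [100, +inf).
    val input = Tensor(-5.0f, 10000.0f, 150.0f, 10.0f, 5.0f, 100.0f)
    tfi.bucketize(input, boundaries = Seq(0.0f, 10.0f, 100.0f))  // ==> [0, 3, 3, 2, 1, 3]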
  55. def cast[D <: types.DataType, DR <: types.DataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D], dataType: DR, truncate: Boolean = false)(implicit ev: Aux[TL, D]): TL[DR]

    The cast op casts a tensor to the provided data type, element-wise.

    x

    Tensor to cast.

    dataType

    Target data type.

    returns

    Result as a new tensor.

    Definition Classes
    Cast
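    A sketch of cast from INT32 to FLOAT32 (same import assumptions as above; FLOAT32 is the data-type object exposed by the api package):

    import org.platanios.tensorflow.api._
    
    // Value-preserving element-wise cast.
    val x = Tensor(1, 2, 3)
    val y = tfi.cast(x, FLOAT32)  // ==> [1.0, 2.0, 3.0]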
  56. def ceil[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The ceil op computes the smallest integer not less than the current value of a tensor, element-wise.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  57. def checkNumerics[D <: DecimalDataType](input: tensors.Tensor[D], message: String = ""): tensors.Tensor[D]

    The checkNumerics op checks a tensor for NaN and Inf values.

    When run, reports an InvalidArgument error if input has any values that are not-a-number (NaN) or infinity (Inf). Otherwise, it acts as an identity op and passes input to the output, as-is.

    input

    Input tensor.

    message

    Prefix to print for the error message.

    returns

    Result as a new tensor which has the same value as the input tensor.

    Definition Classes
    Basic
  58. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. def complex128(real: tensors.Tensor[types.FLOAT64], imag: tensors.Tensor[types.FLOAT64]): tensors.Tensor[types.COMPLEX128]

    The complex op converts two real tensors to a complex tensor.

    Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, the op returns complex numbers element-wise of the form a + bj, where *a* represents the real part and *b* represents the imag part. The input tensors real and imag must have the same shape and data type.

    For example:

    // 'real' is [2.25, 3.25]
    // 'imag' is [4.75, 5.75]
    complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
    real

    Tensor containing the real component.

    imag

    Tensor containing the imaginary component.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  60. def complex64(real: tensors.Tensor[types.FLOAT32], imag: tensors.Tensor[types.FLOAT32]): tensors.Tensor[types.COMPLEX64]

    The complex op converts two real tensors to a complex tensor.

    Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, the op returns complex numbers element-wise of the form a + bj, where *a* represents the real part and *b* represents the imag part. The input tensors real and imag must have the same shape and data type.

    For example:

    // 'real' is [2.25, 3.25]
    // 'imag' is [4.75, 5.75]
    complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
    real

    Tensor containing the real component.

    imag

    Tensor containing the imaginary component.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  61. def concatenate[D <: types.DataType](inputs: Seq[tensors.Tensor[D]], axis: tensors.Tensor[types.INT32] = 0): tensors.Tensor[D]

    The concatenate op concatenates tensors along one dimension.

    The op concatenates the list of tensors inputs along the dimension axis. If inputs(i).shape = [D0, D1, ..., Daxis(i), ..., Dn], then the concatenated tensor will have shape [D0, D1, ..., Raxis, ..., Dn], where Raxis = sum(Daxis(i)). That is, the data from the input tensors is joined along the axis dimension.

    For example:

    // 't1' is equal to [[1, 2, 3], [4, 5, 6]]
    // 't2' is equal to [[7, 8, 9], [10, 11, 12]]
    concatenate(Array(t1, t2), 0) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
    concatenate(Array(t1, t2), 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
    
    // 't3' has shape [2, 3]
    // 't4' has shape [2, 3]
    concatenate(Array(t3, t4), 0).shape ==> [4, 3]
    concatenate(Array(t3, t4), 1).shape ==> [2, 6]

    Note that, if you want to concatenate along a new axis, it may be better to use the stack op instead:

    concatenate(tensors.map(t => expandDims(t, axis)), axis) == stack(tensors, axis)
    inputs

    Input tensors to be concatenated.

    axis

    Dimension along which to concatenate the input tensors.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
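    The shape examples above as a sketch (same import assumptions as above; passing a plain Int for axis relies on the implicit Int-to-Tensor conversion implied by the parameter's default value):

    import org.platanios.tensorflow.api._
    
    val t1 = Tensor(Tensor(1, 2, 3), Tensor(4, 5, 6))     // shape [2, 3]
    val t2 = Tensor(Tensor(7, 8, 9), Tensor(10, 11, 12))  // shape [2, 3]
    tfi.concatenate(Seq(t1, t2), axis = 0)  // shape [4, 3]
    tfi.concatenate(Seq(t1, t2), axis = 1)  // shape [2, 6]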
  62. def conjugate[D <: ComplexDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](input: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The conjugate op returns the element-wise complex conjugate of a tensor.

    Given a numeric tensor input, the op returns a tensor with numbers that are the complex conjugate of each element in input. If the numbers in input are of the form a + bj, where *a* is the real part and *b* is the imaginary part, then the complex conjugate returned by this operation is of the form a - bj.

    For example:

    // 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
    conjugate(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

    If input is real-valued, then it is returned unchanged.

    input

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  63. def conv2D[D <: DecimalDataType](input: tensors.Tensor[D], filter: tensors.Tensor[D], stride1: Long, stride2: Long, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default, dilations: (Int, Int, Int, Int) = (1, 1, 1, 1), useCuDNNOnGPU: Boolean = true): tensors.Tensor[D]

    The conv2D op computes a 2-D convolution given 4-D input and filter tensors.

    Given an input tensor of shape [batch, inHeight, inWidth, inChannels] and a filter / kernel tensor of shape [filterHeight, filterWidth, inChannels, outChannels], the op performs the following:

    1. Flattens the filter to a 2-D matrix with shape [filterHeight * filterWidth * inChannels, outputChannels].
    2. Extracts image patches from the input tensor to form a *virtual* tensor of shape [batch, outHeight, outWidth, filterHeight * filterWidth * inChannels].
    3. For each patch, right-multiplies the filter matrix and the image patch vector.

    For example, for the default NWCFormat:

    output(b,i,j,k) = sum_{di,dj,q} input(b, stride1 * i + di, stride2 * j + dj, q) * filter(di,dj,q,k).

    Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].

    input

    4-D tensor whose dimension order is interpreted according to the value of dataFormat.

    filter

    4-D tensor with shape [filterHeight, filterWidth, inChannels, outChannels].

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    dilations

    The dilation factor for each dimension of input. If set to k > 1, there will be k - 1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of dataFormat. Dilations in the batch and depth dimensions must be set to 1.

    useCuDNNOnGPU

    Boolean value indicating whether or not to use CuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  64. def conv2DBackpropFilter[D <: DecimalDataType](input: tensors.Tensor[D], filterSizes: tensors.Tensor[types.INT32], outputGradient: tensors.Tensor[D], stride1: Long, stride2: Long, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default, dilations: (Int, Int, Int, Int) = (1, 1, 1, 1), useCuDNNOnGPU: Boolean = true): tensors.Tensor[D]

    The conv2DBackpropFilter op computes the gradient of the conv2D op with respect to its filter tensor.

    input

    4-D tensor whose dimension order is interpreted according to the value of dataFormat.

    filterSizes

    Integer vector representing the shape of the original filter, which is a 4-D tensor.

    outputGradient

    4-D tensor containing the gradients w.r.t. the output of the convolution and whose shape depends on the value of dataFormat.

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    dilations

    The dilation factor for each dimension of input. If set to k > 1, there will be k - 1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of dataFormat. Dilations in the batch and depth dimensions must be set to 1.

    useCuDNNOnGPU

    Boolean value indicating whether or not to use CuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  65. def conv2DBackpropInput[D <: DecimalDataType](inputSizes: tensors.Tensor[types.INT32], filter: tensors.Tensor[D], outputGradient: tensors.Tensor[D], stride1: Long, stride2: Long, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default, dilations: (Int, Int, Int, Int) = (1, 1, 1, 1), useCuDNNOnGPU: Boolean = true): tensors.Tensor[D]

    The conv2DBackpropInput op computes the gradient of the conv2D op with respect to its input tensor.

    inputSizes

    Integer vector representing the shape of the original input, which is a 4-D tensor.

    filter

    4-D tensor with shape [filterHeight, filterWidth, inChannels, outChannels].

    outputGradient

    4-D tensor containing the gradients w.r.t. the output of the convolution and whose shape depends on the value of dataFormat.

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    dilations

    The dilation factor for each dimension of input. If set to k > 1, there will be k - 1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of dataFormat. Dilations in the batch and depth dimensions must be set to 1.

    useCuDNNOnGPU

    Boolean value indicating whether or not to use CuDNN for the created op, if it is placed on a GPU, as opposed to the TensorFlow implementation.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  66. def cos[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The cos op computes the cosine of a tensor element-wise. I.e., y = \cos{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  67. def cosh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The cosh op computes the hyperbolic cosine of a tensor element-wise. I.e., y = \cosh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  68. def countNonZero[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[types.INT64]

    The countNonZero op computes the number of non-zero elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    IMPORTANT NOTE: Floating point comparison to zero is done by exact floating point equality check. Small values are not rounded to zero for the purposes of the non-zero check.

    For example:

    // 'x' is [[0, 1, 0], [1, 1, 0]]
    countNonZero(x) ==> 3
    countNonZero(x, 0) ==> [1, 2, 0]
    countNonZero(x, 1) ==> [1, 2]
    countNonZero(x, 1, keepDims = true) ==> [[1], [2]]
    countNonZero(x, [0, 1]) ==> 3

    IMPORTANT NOTE: Strings are compared against the zero-length empty string "". Any string with a size greater than zero is considered non-zero.

    For example:

    // 'x' is ["", "a", "  ", "b", ""]
    countNonZero(x) ==> 3 // "a", "  ", and "b" are treated as nonzero strings.
    input

    Input tensor to reduce.

    axes

    Integer array containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
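    A sketch of countNonZero (same import assumptions as above):

    import org.platanios.tensorflow.api._
    
    // 'x' is [[0, 1, 0], [1, 1, 0]].
    val x = Tensor(Tensor(0, 1, 0), Tensor(1, 1, 0))
    tfi.countNonZero(x)                    // ==> 3
    tfi.countNonZero(x, axes = Tensor(0))  // ==> [1, 2, 0]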
  69. def crelu[D <: RealDataType](x: tensors.Tensor[D], axis: tensors.Tensor[types.INT32] = 1): tensors.Tensor[D]

    The crelu op computes the concatenated rectified linear unit activation function.

    The op concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the *negative* part of the activation. Note that as a result this non-linearity doubles the depth of the activations.

    Source: [Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units](https://arxiv.org/abs/1603.05201)

    x

    Input tensor.

    axis

    Axis along which the output values are concatenated.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  70. def cross[D <: MathDataType](a: tensors.Tensor[D], b: tensors.Tensor[D]): tensors.Tensor[D]

    The cross op computes the pairwise cross product between two tensors.

    a and b must have the same shape; they can either be simple 3-element vectors, or have any shape where the innermost dimension size is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

    a

    First input tensor.

    b

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  71. def cumprod[D <: MathDataType](input: tensors.Tensor[D], axis: tensors.Tensor[types.INT32] = 0, exclusive: Boolean = false, reverse: Boolean = false): tensors.Tensor[D]

    The cumprod op computes the cumulative product of the tensor along an axis.

    By default, the op performs an inclusive cumulative product, which means that the first element of the input is identical to the first element of the output:

    cumprod([a, b, c]) ==> [a, a * b, a * b * c]

    By setting the exclusive argument to true, an exclusive cumulative product is performed instead:

    cumprod([a, b, c], exclusive = true) ==> [1, a, a * b]

    By setting the reverse argument to true, the cumulative product is performed in the opposite direction:

    cumprod([a, b, c], reverse = true) ==> [a * b * c, b * c, c]

    This is more efficient than using separate Basic.reverse ops.

    The reverse and exclusive arguments can also be combined:

    cumprod([a, b, c], exclusive = true, reverse = true) ==> [b * c, c, 1]
    input

    Input tensor.

    axis

    INT32 tensor containing the axis along which to perform the cumulative product.

    exclusive

    Boolean value indicating whether to perform an exclusive cumulative product.

    reverse

    Boolean value indicating whether to perform a reverse cumulative product.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  72. def cumsum[D <: MathDataType](input: tensors.Tensor[D], axis: tensors.Tensor[types.INT32] = 0, exclusive: Boolean = false, reverse: Boolean = false): tensors.Tensor[D]

    The cumsum op computes the cumulative sum of the tensor along an axis.

    By default, the op performs an inclusive cumulative sum, which means that the first element of the input is identical to the first element of the output:

    cumsum([a, b, c]) ==> [a, a + b, a + b + c]

    By setting the exclusive argument to true, an exclusive cumulative sum is performed instead:

    cumsum([a, b, c], exclusive = true) ==> [0, a, a + b]

    By setting the reverse argument to true, the cumulative sum is performed in the opposite direction:

    cumsum([a, b, c], reverse = true) ==> [a + b + c, b + c, c]

    This is more efficient than using separate Basic.reverse ops.

    The reverse and exclusive arguments can also be combined:

    cumsum([a, b, c], exclusive = true, reverse = true) ==> [b + c, c, 0]
    input

    Input tensor.

    axis

    Tensor containing the axis along which to perform the cumulative sum.

    exclusive

    Boolean value indicating whether to perform an exclusive cumulative sum.

    reverse

    Boolean value indicating whether to perform a reverse cumulative sum.

    returns

    Result as a new tensor.

    Definition Classes
    Math
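    A sketch of cumsum in its inclusive, exclusive, and reversed forms (same import assumptions as above):

    import org.platanios.tensorflow.api._
    
    val x = Tensor(1, 2, 3)
    tfi.cumsum(x)                    // ==> [1, 3, 6]
    tfi.cumsum(x, exclusive = true)  // ==> [0, 1, 3]
    tfi.cumsum(x, reverse = true)    // ==> [6, 5, 3]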
  73. def dataType(name: String): types.DataType

    Definition Classes
    API
    Annotations
    @throws( ... )
  74. def dataType(cValue: Int): types.DataType

    Definition Classes
    API
    Annotations
    @throws( ... )
  75. def dataTypeOf[T, D <: types.DataType](value: T)(implicit evSupportedType: Aux[T, D]): D

    Definition Classes
    API
    Annotations
    @inline()
  76. def depthToSpace[D <: types.DataType](input: tensors.Tensor[D], blockSize: Int, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    The depthToSpace op rearranges data from depth into blocks of spatial data.

    More specifically, the op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. blockSize indicates the input block size and how the data is moved:

    • Chunks of data of size blockSize * blockSize from depth are rearranged into non-overlapping blocks of size blockSize x blockSize.
    • The width of the output tensor is inputWidth * blockSize, whereas the height is inputHeight * blockSize.
    • The depth of the input tensor must be divisible by blockSize * blockSize.

    That is, assuming that input is in the shape [batch, height, width, depth], the shape of the output will be: [batch, height * blockSize, width * blockSize, depth / (blockSize * blockSize)].

    This op is useful for resizing the activations between convolutions (but keeping all data), e.g., instead of pooling. It is also useful for training purely convolutional models.

    Some examples:

    // === Example #1 ===
    // input = [[[[1, 2, 3, 4]]]]  (shape = [1, 1, 1, 4])
    // blockSize = 2
    depthToSpace(input, blockSize) ==> [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    
    // === Example #2 ===
    // input =  [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]  (shape = [1, 1, 1, 12])
    // blockSize = 2
    depthToSpace(input, blockSize) ==>
      [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    
    // === Example #3 ===
    // input = [[[[ 1,  2,  3,  4],
    //            [ 5,  6,  7,  8]],
    //           [[ 9, 10, 11, 12],
    //            [13, 14, 15, 16]]]]  (shape = [1, 2, 2, 4])
    // blockSize = 2
    depthToSpace(input, blockSize) ==>
      [[[[ 1], [ 2], [ 5], [ 6]],
        [[ 3], [ 4], [ 7], [ 8]],
        [[ 9], [10], [13], [14]],
        [[11], [12], [15], [16]]]]  (shape = [1, 4, 4, 1])
    input

    4-dimensional input tensor with shape [batch, height, width, depth].

    blockSize

    Block size which must be greater than 1.

    dataFormat

    Format of the input and output data.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  77. def diag[D <: MathDataType](diagonal: tensors.Tensor[D]): tensors.Tensor[D]

    The diag op constructs a diagonal tensor using the provided diagonal values.

    Given a diagonal, the op returns a tensor with that diagonal and everything else padded with zeros. The diagonal is computed as follows:

    Assume that diagonal has shape [D1,..., DK]. Then the output tensor, output, is a rank-2K tensor with shape [D1, ..., DK, D1, ..., DK], where output(i1, ..., iK, i1, ..., iK) = diagonal(i1, ..., iK) and 0 everywhere else.

    For example:

    // 'diagonal' is [1, 2, 3, 4]
    diag(diagonal) ==> [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]

    This op is the inverse of diagPart.

    diagonal

    Diagonal values, represented as a rank-K tensor, where K can be at most 3.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  78. def diagPart[D <: MathDataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    The diagPart op returns the diagonal part of a tensor.

    The op returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

    Assume input has shape [D1, ..., DK, D1, ..., DK]. Then the output is a rank-K tensor with shape [D1,..., DK], where diagonal(i1, ..., iK) = output(i1, ..., iK, i1, ..., iK).

    For example:

    // 'input' is [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]
    diagPart(input) ==> [1, 2, 3, 4]

    This op is the inverse of diag.

    input

    Rank-K input tensor, where K is either 2, 4, or 6.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  79. def digamma[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    The digamma op computes the derivative of the logarithm of the absolute value of the Gamma function applied element-wise on a tensor (i.e., the digamma or Psi function). I.e., y = \partial\log{|\Gamma{x}|}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  80. def divide[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    The divide op divides two tensors element-wise. I.e., z = x / y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  81. def dropout[D <: Float16OrFloat32OrFloat64](input: tensors.Tensor[D], keepProbability: Float, scaleOutput: Boolean = true, noiseShape: tensors.Tensor[types.INT32] = null, seed: Option[Int] = None): tensors.Tensor[D]

    The dropout op computes a dropout layer.

    With probability keepProbability, the op outputs the input element scaled up by 1 / keepProbability, otherwise it outputs 0. The scaling is such that the expected sum remains unchanged.

    By default, each element is kept or dropped independently. If noiseShape is specified, it must be [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to the shape of input, and only dimensions with noiseShape(i) == x.shape(i) will make independent decisions. For example, if x.shape = [k, l, m, n] and noiseShape = [k, 1, 1, n], each k and n component will be kept independently and each l and m component will be kept or not kept together.

    input

    Input tensor.

    keepProbability

    Probability (i.e., number in the interval (0, 1]) that each element is kept.

    scaleOutput

    If true, the outputs will be divided by the keep probability.

    noiseShape

    Rank-1 tensor representing the shape for the randomly generated keep/drop flags.

    seed

    Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.

    returns

    Result as a new tensor that has the same shape as input.

    Definition Classes
    NN
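    A sketch of dropout (same import assumptions as above; the output is random, so only its structure is predictable):

    import org.platanios.tensorflow.api._
    
    // Keep each element with probability 0.5; survivors are scaled by 1 / 0.5 = 2.
    val x = Tensor(1.0f, 2.0f, 3.0f, 4.0f)
    val y = tfi.dropout(x, keepProbability = 0.5f, seed = Some(42))
    // Each element of 'y' is either 0.0 or twice the corresponding element of 'x'.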
  82. def editDistance[D <: types.DataType](hypothesis: tensors.SparseTensor[D], truth: tensors.SparseTensor[D], normalize: Boolean = true): tensors.Tensor[types.FLOAT32]

    The editDistance op computes the Levenshtein distance between sequences.

    The op takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance between them. The op can also normalize the edit distance using the length of truth by setting normalize to true.

    For example:

    // 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
    //   [0, 0] = ["a"]
    //   [1, 0] = ["b"]
    val hypothesis = SparseOutput(Tensor(Tensor(0, 0, 0), Tensor(1, 0, 0)), Tensor("a", "b"), Tensor(2, 1, 1))
    // 'truth' is a tensor of shape `[2, 2]` with variable-length values:
    //   [0, 0] = []
    //   [0, 1] = ["a"]
    //   [1, 0] = ["b", "c"]
    //   [1, 1] = ["a"]
    val truth = SparseOutput(
        Tensor(Tensor(0, 1, 0), Tensor(1, 0, 0), Tensor(1, 0, 1), Tensor(1, 1, 0)),
        Tensor("a", "b", "c", "a"),
        Tensor(2, 2, 2))
    val normalize = true
    
    // 'output' is a tensor of shape `[2, 2]` with edit distances normalized by the `truth` lengths, and contains
    // the values `[[inf, 1.0], [0.5, 1.0]]`. The reason behind each value is:
    //   - (0, 0): no truth,
    //   - (0, 1): no hypothesis,
    //   - (1, 0): addition,
    //   - (1, 1): no hypothesis.
    val output = editDistance(hypothesis, truth, normalize)
    hypothesis

    Sparse tensor that contains the hypothesis sequences.

    truth

    Sparse tensor that contains the truth sequences.

    normalize

    Optional boolean value indicating whether to normalize the Levenshtein distance by the length of truth.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  83. def elu[D <: DecimalDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The elu op computes the exponential linear unit activation function.

    The elu op computes the exponential linear unit activation function.

    The exponential linear unit activation function is defined as elu(x) = x, if x > 0, and elu(x) = exp(x) - 1, otherwise.

    Source: [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)](http://arxiv.org/abs/1511.07289)

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  84. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  85. def equal[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The equal op computes the truth value of x == y element-wise.

    The equal op computes the truth value of x == y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
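    For example, a small sketch of element-wise comparison with broadcasting (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(Tensor(1, 2), Tensor(3, 4))  // Shape: [2, 2]
    val y = Tensor(1, 4)                        // Shape: [2] (broadcast against each row)
    tfi.equal(x, y)                             // ==> [[true, false], [false, true]]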
  86. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  87. def erf[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The erf op computes the Gaussian error function element-wise on a tensor.

    The erf op computes the Gaussian error function element-wise on a tensor.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  88. def erfc[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The erfc op computes the complementary Gaussian error function element-wise on a tensor.

    The erfc op computes the complementary Gaussian error function element-wise on a tensor.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  89. def exp[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The exp op computes the exponential of a tensor element-wise.

    The exp op computes the exponential of a tensor element-wise. I.e., y = \exp{x} = e^x.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  90. def expandDims[D <: types.DataType](input: tensors.Tensor[D], axis: tensors.Tensor[types.INT32]): tensors.Tensor[D]

    Permalink

    The expandDims op inserts a dimension of size 1 into the tensor's shape and returns the result as a new tensor.

    The expandDims op inserts a dimension of size 1 into the tensor's shape and returns the result as a new tensor.

    Given a tensor input, the op inserts a dimension of size 1 at the dimension index axis of the tensor's shape. The dimension index axis starts at zero; if you specify a negative number for axis it is counted backwards from the end.

    This op is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of 1 image with expandDims(image, 0), which will make the shape equal to [1, height, width, channels].

    For example:

    // 't1' is a tensor of shape [2]
    t1.expandDims(0).shape == Shape(1, 2)
    t1.expandDims(1).shape == Shape(2, 1)
    t1.expandDims(-1).shape == Shape(2, 1)

    // 't2' is a tensor of shape [2, 3, 5]
    t2.expandDims(0).shape == Shape(1, 2, 3, 5)
    t2.expandDims(2).shape == Shape(2, 3, 1, 5)
    t2.expandDims(3).shape == Shape(2, 3, 5, 1)

    This op requires that -1 - input.rank <= axis <= input.rank.

    This is related to squeeze, which removes dimensions of size 1.

    input

    Input tensor.

    axis

    Dimension index at which to expand the shape of input.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  91. def expm1[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The expm1 op computes the exponential of a tensor minus 1 element-wise.

    The expm1 op computes the exponential of a tensor minus 1 element-wise. I.e., y = \exp{x} - 1.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  92. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  93. def floor[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The floor op computes the largest integer not greater than the current value of a tensor, element-wise.

    The floor op computes the largest integer not greater than the current value of a tensor, element-wise.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  94. def floorMod[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The floorMod op computes the remainder of the division between two tensors element-wise.

    The floorMod op computes the remainder of the division between two tensors element-wise.

    When x < 0 xor y < 0 is true, the op follows Python semantics in that the result here is consistent with a flooring divide. E.g., floor(x / y) * y + mod(x, y) = x.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
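    A small sketch contrasting the flooring semantics of floorMod with the truncating semantics of the mod op documented further below (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(-7)
    val y = Tensor(3)
    // Flooring semantics: floor(-7 / 3) * 3 + floorMod(-7, 3) == -3 * 3 + 2 == -7.
    tfi.floorMod(x, y)  // ==> 2
    // Truncating semantics: truncate(-7 / 3) * 3 + mod(-7, 3) == -2 * 3 - 1 == -7.
    tfi.mod(x, y)       // ==> -1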
  95. def gather[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], indices: tensors.Tensor[I], axis: tensors.Tensor[I] = null): tensors.Tensor[D]

    Permalink

    The gather op gathers slices from input axis axis, according to indices.

    The gather op gathers slices from input axis axis, according to indices.

    indices must be an integer tensor of any dimension (usually 0-D or 1-D). The op produces an output tensor with shape input.shape(::axis) + indices.shape + input.shape(axis + 1::), where:

    // Scalar indices (output has rank = rank(input) - 1)
    output(a_0, ..., a_n, b_0, ..., b_n) = input(a_0, ..., a_n, indices, b_0, ..., b_n)
    
    // Vector indices (output has rank = rank(input))
    output(a_0, ..., a_n, i, b_0, ..., b_n) = input(a_0, ..., a_n, indices(i), b_0, ..., b_n)
    
    // Higher rank indices (output has rank = rank(input) + rank(indices) - 1)
    output(a_0, ..., a_n, i, ..., j, b_0, ..., b_n) = input(a_0, ..., a_n, indices(i, ..., j), b_0, ..., b_n)

    If indices is a permutation and indices.length == input.shape(0), then this op will permute input accordingly.

    input

    Tensor from which to gather values.

    indices

    Tensor containing indices to gather.

    axis

    Tensor containing the axis along which to gather.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
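    A minimal sketch of gathering rows from a matrix (assuming the wildcard import org.platanios.tensorflow.api._; with axis left at its default, the gather happens along axis 0):

    import org.platanios.tensorflow.api._

    val input = Tensor(Tensor(1, 2), Tensor(3, 4), Tensor(5, 6))  // Shape: [3, 2]
    val indices = Tensor(2, 0)
    // Gather rows 2 and 0 of 'input'.
    tfi.gather(input, indices)  // ==> [[5, 6], [1, 2]]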
  96. def gatherND[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], indices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The gatherND op gathers values or slices from input according to indices.

    The gatherND op gathers values or slices from input according to indices.

    indices is an integer tensor containing indices into input. The last dimension of indices can be at most equal to the rank of input; i.e., indices.shape(-1) <= input.rank. The last dimension of indices corresponds to elements (if indices.shape(-1) == input.rank) or slices (if indices.shape(-1) < input.rank) along dimension indices.shape(-1) of input. The output has shape indices.shape(::-1) + input.shape(indices.shape(-1)::).

    Some examples follow.

    Simple indexing into a matrix:

    input   = [['a', 'b'], ['c', 'd']]
    indices = [[0, 0], [1, 1]]
    output  = ['a', 'd']

    Slice indexing into a matrix:

    input   = [['a', 'b'], ['c', 'd']]
    indices = [[1], [0]]
    output  = [['c', 'd'], ['a', 'b']]

    Indexing into a three-dimensional tensor:

    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[1]]
    output  = [[['a1', 'b1'], ['c1', 'd1']]]
    
    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[0, 1], [1, 0]]
    output  = [['c0', 'd0'], ['a1', 'b1']]
    
    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[0, 0, 1], [1, 0, 1]]
    output  = ['b0', 'b1']

    Batched indexing into a matrix:

    input   = [['a', 'b'], ['c', 'd']]
    indices = [[[0, 0]], [[0, 1]]]
    output  = [['a'], ['b']]

    Batched slice indexing into a matrix:

    input   = [['a', 'b'], ['c', 'd']]
    indices = [[[1]], [[0]]]
    output  = [[['c', 'd']], [['a', 'b']]]

    Batched indexing into a three-dimensional tensor:

    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[[1]], [[0]]]
    output  = [[[['a1', 'b1'], ['c1', 'd1']]],
               [[['a0', 'b0'], ['c0', 'd0']]]]
    
    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
    output  = [[['c0', 'd0'], ['a1', 'b1']],
              [['a0', 'b0'], ['c1', 'd1']]]
    
    input   = [[['a0', 'b0'], ['c0', 'd0']],
               [['a1', 'b1'], ['c1', 'd1']]]
    indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
    output  = [['b0', 'b1'], ['d0', 'c1']]
    input

    Tensor from which to gather values.

    indices

    Tensor containing indices to gather.

    returns

    Result as a new tensor which contains the values from input gathered from indices given by indices, with shape indices.shape(::-1) + input.shape(indices.shape(-1)::).

    Definition Classes
    Basic
  97. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  98. def greater[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The greater op computes the truth value of x > y element-wise.

    The greater op computes the truth value of x > y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  99. def greaterEqual[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The greaterEqual op computes the truth value of x >= y element-wise.

    The greaterEqual op computes the truth value of x >= y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  100. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  101. def igamma[D <: Float32OrFloat64](a: tensors.Tensor[D], x: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The igamma op computes the lower regularized incomplete Gamma function P(a, x).

    The igamma op computes the lower regularized incomplete Gamma function P(a, x).

    The lower regularized incomplete Gamma function is defined as:

    P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x), where:

    gamma(a, x) = \int_{0}^{x} t^{a-1} \exp(-t) dt

    is the lower incomplete Gamma function.

    Note that, above, Q(a, x) (igammac) is the upper regularized incomplete Gamma function.

    a

    First input tensor.

    x

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  102. def igammac[D <: Float32OrFloat64](a: tensors.Tensor[D], x: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The igammac op computes the upper regularized incomplete Gamma function Q(a, x).

    The igammac op computes the upper regularized incomplete Gamma function Q(a, x).

    The upper regularized incomplete Gamma function is defined as:

    Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x), where:

    Gamma(a, x) = \int_{x}^{\infty} t^{a-1} \exp(-t) dt

    is the upper incomplete Gamma function.

    Note that, above, P(a, x) (igamma) is the lower regularized incomplete Gamma function.

    a

    First input tensor.

    x

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  103. def inTopK[I <: Int32OrInt64](predictions: tensors.Tensor[types.FLOAT32], targets: tensors.Tensor[I], k: tensors.Tensor[I]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The inTopK op checks whether the targets are in the top K predictions.

    The inTopK op checks whether the targets are in the top K predictions.

    The op outputs a boolean tensor with shape [batchSize], with entry output(i) being true if the target class is among the top k predictions, among all predictions for example i. Note that the behavior of inTopK differs from topK in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, then all of those classes are considered to be in the top k.

    More formally, let:

    • predictions(i, ::) be the predictions for all classes for example i,
    • targets(i) be the target class for example i, and
    • output(i) be the output for example i. Then output(i) = predictions(i, targets(i)) \in TopKIncludingTies(predictions(i)).
    predictions

    Tensor containing the predictions.

    targets

    Tensor containing the targets.

    k

    Scalar tensor containing the number of top elements to look at.

    returns

    Result as a new tensor.

    Definition Classes
    NN
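    A small sketch (assuming the wildcard import org.platanios.tensorflow.api._; the prediction values are illustrative):

    import org.platanios.tensorflow.api._

    // Two examples with three classes each; check whether each target is in the top 2.
    val predictions = Tensor(Tensor(0.1f, 0.8f, 0.2f), Tensor(0.5f, 0.2f, 0.3f))
    val targets = Tensor(0, 2)
    // Example 0: classes 1 and 2 are the top 2, so target 0 is not among them.
    // Example 1: classes 0 and 2 are the top 2, so target 2 is among them.
    tfi.inTopK(predictions, targets, k = Tensor(2))  // ==> [false, true]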
  104. def incompleteBeta[D <: Float32OrFloat64](a: tensors.Tensor[D], b: tensors.Tensor[D], x: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The incompleteBeta op computes the regularized incomplete beta integral I_x(a, b).

    The incompleteBeta op computes the regularized incomplete beta integral I_x(a, b).

    The regularized incomplete beta integral is defined as:

    I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}, where:

    B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt

    is the incomplete beta function and B(a, b) is the *complete* beta function.

    a

    First input tensor.

    b

    Second input tensor.

    x

    Third input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  105. def indexedSlicesMask[D <: types.DataType](input: tensors.TensorIndexedSlices[D], maskIndices: tensors.Tensor[types.INT32]): tensors.TensorIndexedSlices[D]

    Permalink

    The indexedSlicesMask op masks elements of indexed slices tensors.

    The indexedSlicesMask op masks elements of indexed slices tensors.

    Given an indexed slices tensor instance input, this function returns another indexed slices tensor that contains a subset of the slices of input. Only the slices at indices not specified in maskIndices are returned.

    This is useful when you need to extract a subset of slices from an indexed slices tensor.

    For example:

    // 'input' contains slices at indices [12, 26, 37, 45] from a large tensor with shape [1000, 10]
    input.indices ==> [12, 26, 37, 45]
    input.values.shape ==> [4, 10]
    
    // `output` will be the subset of `input` slices at its second and third indices, and so we want to mask its
    // first and last indices (which are at absolute indices 12 and 45)
    val output = tfi.indexedSlicesMask(input, Tensor(12, 45))
    output.indices ==> [26, 37]
    output.values.shape ==> [2, 10]
    input

    Input indexed slices.

    maskIndices

    One-dimensional tensor containing the indices of the elements to mask.

    returns

    Result as a new tensor indexed slices object.

    Definition Classes
    Basic
    Annotations
    @throws( ... )
  106. def invertPermutation[I <: Int32OrInt64](input: tensors.Tensor[I]): tensors.Tensor[I]

    Permalink

    The invertPermutation op computes the inverse permutation of a tensor.

    The invertPermutation op computes the inverse permutation of a tensor.

    This op computes the inverse of an index permutation. It takes a one-dimensional integer tensor input, which represents indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this op computes y(x(i)) = i, for i in [0, 1, ..., x.length - 1].

    For example:

    // Tensor 't' is [3, 4, 0, 2, 1]
    invertPermutation(t) ==> [2, 4, 3, 0, 1]
    input

    One-dimensional input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  107. def isFinite[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[types.BOOLEAN]

    Permalink

    The isFinite op returns a boolean tensor indicating which elements of a tensor are finite-valued.

    The isFinite op returns a boolean tensor indicating which elements of a tensor are finite-valued.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  108. def isInf[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[types.BOOLEAN]

    Permalink

    The isInf op returns a boolean tensor indicating which elements of a tensor are Inf-valued.

    The isInf op returns a boolean tensor indicating which elements of a tensor are Inf-valued.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  109. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  110. def isNaN[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[types.BOOLEAN]

    Permalink

    The isNaN op returns a boolean tensor indicating which elements of a tensor are NaN-valued.

    The isNaN op returns a boolean tensor indicating which elements of a tensor are NaN-valued.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  111. def l2Loss[D <: DecimalDataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The l2Loss op computes half of the L2 norm of a tensor without the square root.

    The l2Loss op computes half of the L2 norm of a tensor without the square root.

    The output is equal to sum(input^2) / 2.

    input

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
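    For instance (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val input = Tensor(1.0f, 2.0f, 3.0f)
    // sum(input^2) / 2 = (1 + 4 + 9) / 2 = 7.0.
    tfi.l2Loss(input)  // ==> 7.0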
  112. def l2Normalize[D <: Float32OrFloat64](x: tensors.Tensor[D], axes: tensors.Tensor[types.INT32], epsilon: Float = 1e-12f): tensors.Tensor[D]

    Permalink

    The l2Normalize op normalizes along axes axes using an L2 norm.

    The l2Normalize op normalizes along axes axes using an L2 norm.

    For a 1-D tensor with axes = 0, the op computes: output = x / sqrt(max(sum(x^2), epsilon))

    For higher-dimensional x, the op independently normalizes each 1-D slice along axes axes.

    x

    Input tensor.

    axes

    Tensor containing the axes along which to normalize.

    epsilon

    Lower bound value for the norm. The created op will use sqrt(epsilon) as the divisor, if norm < sqrt(epsilon).

    returns

    Result as a new tensor.

    Definition Classes
    NN
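    A minimal sketch for the 1-D case (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(3.0f, 4.0f)
    // The L2 norm is sqrt(3^2 + 4^2) = 5, so the result is x / 5.
    tfi.l2Normalize(x, axes = Tensor(0))  // ==> [0.6, 0.8]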
  113. def leastPreciseDataType(dataTypes: types.DataType*): types.DataType

    Permalink
    Definition Classes
    API
  114. def less[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The less op computes the truth value of x < y element-wise.

    The less op computes the truth value of x < y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  115. def lessEqual[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The lessEqual op computes the truth value of x <= y element-wise.

    The lessEqual op computes the truth value of x <= y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  116. def linear[D <: MathDataType](x: tensors.Tensor[D], weights: tensors.Tensor[D], bias: tensors.Tensor[D] = null): tensors.Tensor[D]

    Permalink

    The linear op computes x * weights + bias.

    The linear op computes x * weights + bias.

    x

    Input tensor.

    weights

    Weights tensor.

    bias

    Bias tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
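    A small sketch applying one linear layer to a batch of two inputs (assuming the wildcard import org.platanios.tensorflow.api._; the weight values are illustrative):

    import org.platanios.tensorflow.api._

    val x = Tensor(Tensor(1.0f, 2.0f, 3.0f), Tensor(4.0f, 5.0f, 6.0f))                // Shape: [2, 3]
    val weights = Tensor(Tensor(1.0f, 0.0f), Tensor(0.0f, 1.0f), Tensor(1.0f, 1.0f))  // Shape: [3, 2]
    val bias = Tensor(0.5f, -0.5f)                                                    // Shape: [2]
    // Computes x * weights + bias.
    tfi.linear(x, weights, bias)  // ==> [[4.5, 4.5], [10.5, 10.5]]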
  117. def linspace[D <: BFloat16OrFloat32OrFloat64, I <: Int32OrInt64](start: tensors.Tensor[D], stop: tensors.Tensor[D], numberOfValues: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The linspace op generates values in an interval.

    The linspace op generates values in an interval.

    The op generates a sequence of numberOfValues evenly-spaced values beginning at start. If numberOfValues > 1, the values in the sequence increase by (stop - start) / (numberOfValues - 1), so that the last value is exactly equal to stop.

    For example:

    linspace(10.0, 12.0, 3) ==> [10.0  11.0  12.0]
    start

    Rank-0 (i.e., scalar) tensor that contains the starting value of the number sequence.

    stop

    Rank-0 (i.e., scalar) tensor that contains the ending value (inclusive) of the number sequence.

    numberOfValues

    Rank-0 (i.e., scalar) tensor that contains the number of values in the number sequence.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  118. def listDiff[D <: types.DataType, I <: Int32OrInt64](x: tensors.Tensor[D], y: tensors.Tensor[D], indicesDataType: I): (tensors.Tensor[D], tensors.Tensor[I])

    Permalink

    The listDiff op computes the difference between two lists of numbers or strings.

    The listDiff op computes the difference between two lists of numbers or strings.

    Given a list x and a list y, the op returns a list output that represents all values that are in x but not in y. The returned list output is sorted in the same order that the numbers appear in x (duplicates are preserved). The op also returns a list indices that represents the position of each output element in x. In other words, output(i) = x(indices(i)), for i in [0, 1, ..., output.length - 1].

    For example, given inputs x = [1, 2, 3, 4, 5, 6] and y = [1, 3, 5], this op would return output = [2, 4, 6] and indices = [1, 3, 5].

    x

    One-dimensional tensor containing the values to keep.

    y

    One-dimensional tensor containing the values to remove.

    indicesDataType

    Data type to use for the output indices of this op.

    returns

    Tuple containing output and indices, from the method description.

    Definition Classes
    Basic
  119. def listDiff[D <: types.DataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): (tensors.Tensor[D], tensors.Tensor[types.INT32])

    Permalink

    The listDiff op computes the difference between two lists of numbers or strings.

    The listDiff op computes the difference between two lists of numbers or strings.

    Given a list x and a list y, the op returns a list output that represents all values that are in x but not in y. The returned list output is sorted in the same order that the numbers appear in x (duplicates are preserved). The op also returns a list indices that represents the position of each output element in x. In other words, output(i) = x(indices(i)), for i in [0, 1, ..., output.length - 1].

    For example, given inputs x = [1, 2, 3, 4, 5, 6] and y = [1, 3, 5], this op would return output = [2, 4, 6] and indices = [1, 3, 5].

    x

    One-dimensional tensor containing the values to keep.

    y

    One-dimensional tensor containing the values to remove.

    returns

    Tuple containing output and indices, from the method description.

    Definition Classes
    Basic
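    The example above, as an eager-mode sketch (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(1, 2, 3, 4, 5, 6)
    val y = Tensor(1, 3, 5)
    val (output, indices) = tfi.listDiff(x, y)
    // output  ==> [2, 4, 6]
    // indices ==> [1, 3, 5]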
  120. def localResponseNormalization[D <: BFloat16OrFloat16OrFloat32](input: tensors.Tensor[D], depthRadius: Int = 5, bias: Float = 1.0f, alpha: Float = 1.0f, beta: Float = 0.5f): tensors.Tensor[D]

    Permalink

    The localResponseNormalization op treats the input 4-D tensor as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently.

    The localResponseNormalization op treats the input 4-D tensor as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of the inputs within depthRadius. In detail:

    sqrSum[a, b, c, d] = sum(input[a, b, c, d - depthRadius : d + depthRadius + 1] ** 2)
    output = input / (bias + alpha * sqrSum) ** beta

    For details, see Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks (NIPS 2012).

    input

    Input tensor.

    depthRadius

    Half-width of the 1-D normalization window.

    bias

    Offset (usually positive to avoid dividing by 0).

    alpha

    Scale factor (usually positive).

    beta

    Exponent.

    returns

    Created op output.

    Definition Classes
    NN
  121. def log[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The log op computes the logarithm of a tensor element-wise.

    The log op computes the logarithm of a tensor element-wise. I.e., y = \log{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  122. def log1p[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The log1p op computes the logarithm of a tensor plus 1 element-wise.

    The log1p op computes the logarithm of a tensor plus 1 element-wise. I.e., y = \log{1 + x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  123. def logGamma[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The logGamma op computes the logarithm of the absolute value of the Gamma function applied element-wise on a tensor.

    The logGamma op computes the logarithm of the absolute value of the Gamma function applied element-wise on a tensor. I.e., y = \log{|\Gamma{x}|}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  124. def logPoissonLoss[D <: DecimalDataType](logPredictions: tensors.Tensor[D], targets: tensors.Tensor[D], computeFullLoss: Boolean = false): tensors.Tensor[D]

    Permalink

    The logPoissonLoss op computes the log-Poisson loss between logPredictions and targets.

    The logPoissonLoss op computes the log-Poisson loss between logPredictions and targets.

    The op computes the log-likelihood loss between the predictions and the targets under the assumption that the targets have a Poisson distribution. **Caveat:** By default, this is not the exact loss, but the loss minus a constant term (log(z!)). That has no effect for optimization purposes, but it does not play well with relative loss comparisons. To compute an approximation of the log factorial term, please set computeFullLoss to true, to enable Stirling's Approximation.

    For brevity, let c = log(x) = logPredictions and z = targets. The log-Poisson loss is then defined as:

    -log(exp(-x) * (x^z) / z!)
      = -log(exp(-x) * (x^z)) + log(z!)
      ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
      = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
      = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]

    Note that the bracketed term is Stirling's Approximation for log(z!). It is invariant to x and does not affect optimization, though it is important for correct relative loss comparisons. It is only computed when computeFullLoss == true.

    logPredictions

    Tensor containing the log-predictions.

    targets

    Tensor with the same shape as logPredictions, containing the target values.

    computeFullLoss

    If true, Stirling's Approximation is used to approximate the full loss. Defaults to false, meaning that the constant term is ignored.

    returns

    Result as a new tensor.

    Definition Classes
    NN
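    A small numeric sketch of the default (partial) loss exp(c) - z * c (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val logPredictions = Tensor(0.0f, 1.0f)  // c
    val targets = Tensor(1.0f, 2.0f)         // z
    // Element 0: exp(0) - 1 * 0 = 1.0. Element 1: exp(1) - 2 * 1 ~ 0.718.
    tfi.logPoissonLoss(logPredictions, targets)  // ==> approximately [1.0, 0.718]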
  125. def logSigmoid[D <: RealDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The logSigmoid op computes the log-sigmoid function element-wise on a tensor.

    The logSigmoid op computes the log-sigmoid function element-wise on a tensor.

    Specifically, y = log(1 / (1 + exp(-x))). For numerical stability, we use y = -tf.nn.softplus(-x).

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  126. def logSoftmax[D <: DecimalDataType](logits: tensors.Tensor[D], axis: Int = -1): tensors.Tensor[D]

    Permalink

    The logSoftmax op computes log-softmax activations.

    The logSoftmax op computes log-softmax activations.

    For each batch i and class j we have log_softmax = logits - log(sum(exp(logits), axis)), where axis indicates the axis the log-softmax should be performed on.

    logits

    Tensor containing the logits.

    axis

    Axis along which to perform the log-softmax. Defaults to -1 denoting the last axis.

    returns

    Result as a new tensor.

    Definition Classes
    NN
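    For instance (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val logits = Tensor(1.0f, 2.0f, 3.0f)
    // log(exp(1) + exp(2) + exp(3)) is approximately 3.408, so each entry is logits - 3.408.
    tfi.logSoftmax(logits)  // ==> approximately [-2.408, -1.408, -0.408]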
  127. def logSumExp[D <: MathDataType](input: tensors.Tensor[D], axes: Seq[Int] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The logSumExp op computes the log-sum-exp of elements across axes of a tensor.

    The logSumExp op computes the log-sum-exp of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[0, 0, 0], [0, 0, 0]]
    logSumExp(x) ==> log(6)
    logSumExp(x, 0) ==> [log(2), log(2), log(2)]
    logSumExp(x, 1) ==> [log(3), log(3)]
    logSumExp(x, 1, keepDims = true) ==> [[log(3)], [log(3)]]
    logSumExp(x, [0, 1]) ==> log(6)
    input

    Input tensor to reduce.

    axes

    Integer sequence containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  128. def logicalAnd(x: tensors.Tensor[types.BOOLEAN], y: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The logicalAnd op computes the truth value of x && y element-wise.

    The logicalAnd op computes the truth value of x && y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  129. def logicalNot(x: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The logicalNot op computes the truth value of !x element-wise.

    The logicalNot op computes the truth value of !x element-wise.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  130. def logicalOr(x: tensors.Tensor[types.BOOLEAN], y: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The logicalOr op computes the truth value of x || y element-wise.

    The logicalOr op computes the truth value of x || y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  131. def logicalXOr(x: tensors.Tensor[types.BOOLEAN], y: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The logicalXOr op computes the truth value of (x || y) && !(x && y) element-wise.

    The logicalXOr op computes the truth value of (x || y) && !(x && y) element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
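    For example (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(true, true, false, false)
    val y = Tensor(true, false, true, false)
    tfi.logicalXOr(x, y)  // ==> [false, true, true, false]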
  132. def lrn[D <: BFloat16OrFloat16OrFloat32](input: tensors.Tensor[D], depthRadius: Int = 5, bias: Float = 1.0f, alpha: Float = 1.0f, beta: Float = 0.5f): tensors.Tensor[D]

    Permalink

    The localResponseNormalization op treats the input 4-D tensor as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently.

    The localResponseNormalization op treats the input 4-D tensor as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of the inputs within depthRadius. In detail:

    sqrSum[a, b, c, d] = sum(input[a, b, c, d - depthRadius : d + depthRadius + 1] ** 2)
    output = input / (bias + alpha * sqrSum) ** beta

    For details, see Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks (NIPS 2012).

    input

    Input tensor.

    depthRadius

    Half-width of the 1-D normalization window.

    bias

    Offset (usually positive to avoid dividing by 0).

    alpha

    Scale factor (usually positive).

    beta

    Exponent.

    returns

    Created op output.

    Definition Classes
    NN
  133. def matmul[D <: MathDataType](a: tensors.Tensor[D], b: tensors.Tensor[D], transposeA: Boolean = false, transposeB: Boolean = false, conjugateA: Boolean = false, conjugateB: Boolean = false, aIsSparse: Boolean = false, bIsSparse: Boolean = false): tensors.Tensor[D]

    Permalink

    The matmul op multiplies two matrices.

    The matmul op multiplies two matrices.

    The inputs must, following any transpositions, be tensors of rank >= 2, where the inner 2 dimensions specify valid matrix multiplication arguments and any further outer dimensions match.

    Note that this op corresponds to a matrix product and not an element-wise product. For example: output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i and j.

    Both matrices must be of the same data type. The supported types are: BFLOAT16, FLOAT16, FLOAT32, FLOAT64, INT32, COMPLEX64, and COMPLEX128.

    Either matrix can be transposed and/or conjugated on the fly by setting one of the corresponding flags to true. These are set to false by default.

    If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding aIsSparse or bIsSparse flag to true. These are also set to false by default. This optimization is only available for plain matrices (i.e., rank-2 tensors) with data type BFLOAT16 or FLOAT32. The break-even for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix. The gradient computation of the sparse op will only take advantage of sparsity in the input gradient when that gradient comes from a ReLU.

    For example:

    // 2-D tensor 'a' is [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
    
    // 2-D tensor 'b' is [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]
    
    matmul(a, b) ==> [[58.0, 64.0], [139.0, 154.0]]
    
    // 3-D tensor 'a' is [[[ 1.0,  2.0,  3.0],
    //                     [ 4.0,  5.0,  6.0]],
    //                    [[ 7.0,  8.0,  9.0],
    //                     [10.0, 11.0, 12.0]]]
    
    // 3-D tensor 'b' is [[[13.0, 14.0],
    //                     [15.0, 16.0],
    //                     [17.0, 18.0]],
    //                    [[19.0, 20.0],
    //                     [21.0, 22.0],
    //                     [23.0, 24.0]]]
    
    matmul(a, b) ==> [[[ 94.0, 100.0], [229.0, 244.0]],
                      [[508.0, 532.0], [697.0, 730.0]]]
    a

    First input tensor.

    b

    Second input tensor.

    transposeA

    If true, a is transposed before the multiplication.

    transposeB

    If true, b is transposed before the multiplication.

    conjugateA

    If true, a is conjugated before the multiplication.

    conjugateB

    If true, b is conjugated before the multiplication.

    aIsSparse

    If true, a is treated as a sparse matrix (i.e., it is assumed it contains many zeros).

    bIsSparse

    If true, b is treated as a sparse matrix (i.e., it is assumed it contains many zeros).

    returns

    Result as a new tensor.

    Definition Classes
    Math
  134. def matrixBandPart[D <: MathDataType, I <: Int32OrInt64](input: tensors.Tensor[D], numSubDiagonals: tensors.Tensor[I], numSuperDiagonals: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The matrixBandPart op copies a tensor, while setting everything outside a central band in each innermost matrix of the tensor, to zero.

    The matrixBandPart op copies a tensor, while setting everything outside a central band in each innermost matrix of the tensor, to zero.

    Assuming that input has k dimensions, [I, J, K, ..., M, N], the output is a tensor with the same shape, where band[i, j, k, ..., m, n] == indicatorBand(m, n) * input[i, j, k, ..., m, n]. The indicator function is defined as:

    indicatorBand(m, n) = (numSubDiagonals < 0 || m - n <= numSubDiagonals) &&
                          (numSuperDiagonals < 0 || n - m <= numSuperDiagonals)

    For example:

    // 'input' is:
    //   [[ 0,  1,  2, 3]
    //    [-1,  0,  1, 2]
    //    [-2, -1,  0, 1]
    //    [-3, -2, -1, 0]]
    matrixBandPart(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                      [-1,  0,  1, 2]
                                      [ 0, -1,  0, 1]
                                      [ 0,  0, -1, 0]]
    matrixBandPart(input, 2, 1) ==>  [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]

    Useful special cases:

    matrixBandPart(input, 0, -1) ==> Upper triangular part
    matrixBandPart(input, -1, 0) ==> Lower triangular part
    matrixBandPart(input, 0, 0)  ==> Diagonal
    input

    Input tensor.

    numSubDiagonals

    Scalar tensor that contains the number of sub-diagonals to keep. If negative, the entire lower triangle is kept.

    numSuperDiagonals

    Scalar tensor that contains the number of super-diagonals to keep. If negative, the entire upper triangle is kept.

    returns

    Result as a new tensor containing the expected banded tensor and has rank K and same shape as input.

    Definition Classes
    Math
  135. def matrixDiag[D <: MathDataType](diagonal: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The matrixDiag op returns a batched diagonal tensor with the provided batched diagonal values.

    The matrixDiag op returns a batched diagonal tensor with the provided batched diagonal values.

    Given a diagonal, the op returns a tensor with that diagonal and everything else padded with zeros. Assuming that diagonal has k dimensions [I, J, K, ..., N], the output is a tensor of rank k + 1 with dimensions [I, J, K, ..., N, N], where: output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

    For example:

    // 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] (shape = [2, 4])
    matrixDiag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]  // with shape [2, 4, 4]
    diagonal

    Rank-K input tensor, where K >= 1.

    returns

    Result as a new tensor with rank equal to K + 1 and shape equal to the shape of diagonal, with its last dimension duplicated.

    Definition Classes
    Math
  136. def matrixDiagPart[D <: MathDataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The matrixDiagPart op returns the batched diagonal part of a batched tensor.

    The matrixDiagPart op returns the batched diagonal part of a batched tensor.

    The op returns a tensor with the diagonal part of the batched input. Assuming that input has k dimensions, [I, J, K, ..., M, N], then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., min(M, N)], where diagonal[i, j, k, ..., n] == input[i, j, k, ..., n, n].

    Note that input must have rank of at least 2.

    For example:

    // 'input' is:
    //   [[[1, 0, 0, 0]
    //     [0, 2, 0, 0]
    //     [0, 0, 3, 0]
    //     [0, 0, 0, 4]],
    //    [[5, 0, 0, 0]
    //     [0, 6, 0, 0]
    //     [0, 0, 7, 0]
    //     [0, 0, 0, 8]]]  with shape [2, 4, 4]
    matrixDiagPart(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]  // with shape [2, 4]
    input

    Rank-K tensor, where K >= 2.

    returns

    Result as a new tensor containing the diagonal(s) and having shape equal to input.shape[:-2] + [min(input.shape[-2:])].

    Definition Classes
    Math
  137. def matrixSetDiag[D <: MathDataType](input: tensors.Tensor[D], diagonal: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The matrixSetDiag op returns a batched matrix tensor with new batched diagonal values.

    The matrixSetDiag op returns a batched matrix tensor with new batched diagonal values.

    Given input and diagonal, the op returns a tensor with the same shape and values as input, except for the main diagonal of its innermost matrices. These diagonals will be overwritten by the values in diagonal. Assuming that input has k + 1 dimensions, [I, J, K, ..., M, N], and diagonal has k dimensions, [I, J, K, ..., min(M, N)], then the output is a tensor of rank k + 1 with dimensions [I, J, K, ..., M, N], where:

    • output[i, j, k, ..., m, n] == diagonal[i, j, k, ..., n], for m == n, and
    • output[i, j, k, ..., m, n] == input[i, j, k, ..., m, n], for m != n.
    input

    Rank-K+1 tensor, where K >= 2.

    diagonal

    Rank-K tensor, where K >= 1.

    returns

    Result as a new tensor with rank equal to K + 1 and shape equal to the shape of input.

    Definition Classes
    Math
  138. def matrixTranspose[D <: types.DataType](input: tensors.Tensor[D], conjugate: Boolean = false): tensors.Tensor[D]

    Permalink

    The matrixTranspose op transposes the last two dimensions of tensor input.

    The matrixTranspose op transposes the last two dimensions of tensor input.

    For example:

    // Tensor 'x' is [[1, 2, 3], [4, 5, 6]]
    matrixTranspose(x) ==> [[1, 4], [2, 5], [3, 6]]
    
    // Tensor 'x' has shape [1, 2, 3, 4]
    matrixTranspose(x).shape ==> [1, 2, 4, 3]

    Note that Math.matmul provides named arguments allowing for transposing the matrices involved in the multiplication. This is done with minimal cost, and is preferable to using this function. For example:

    matmul(a, b, transposeB = true) // is preferable to:
    matmul(a, matrixTranspose(b))
    input

    Input tensor to transpose.

    conjugate

    If true, then the complex conjugate of the transpose result is returned.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
    Annotations
    @throws( ... )
    Exceptions thrown

    InvalidShapeException If the input tensor has rank < 2 (i.e., there are no two innermost dimensions to transpose).

  139. def max[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The max op computes the maximum of elements across axes of a tensor.

    The max op computes the maximum of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[1.0, 1.0], [2.0, 2.0]]
    max(x) ==> 2.0
    max(x, 0) ==> [2.0, 2.0]
    max(x, 1) ==> [1.0, 2.0]
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  140. def maxPool[D <: MathDataType](input: tensors.Tensor[D], windowSize: Seq[Int], stride1: Int, stride2: Int, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    Permalink

    The maxPool op performs max pooling on the input tensor.

    The maxPool op performs max pooling on the input tensor.

    input

    4-D tensor whose dimension order is interpreted according to the value of dataFormat.

    windowSize

    The size of the pooling window for each dimension of the input tensor.

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  141. def maxPoolGrad[D <: RealDataType](originalInput: tensors.Tensor[D], originalOutput: tensors.Tensor[D], outputGradient: tensors.Tensor[D], windowSize: Seq[Int], stride1: Int, stride2: Int, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    Permalink

    The maxPoolGrad op computes the gradient of the maxPool op.

    The maxPoolGrad op computes the gradient of the maxPool op.

    originalInput

    Original input tensor.

    originalOutput

    Original output tensor.

    outputGradient

    4-D tensor containing the gradients w.r.t. the output of the max pooling and whose shape depends on the value of dataFormat.

    windowSize

    The size of the pooling window for each dimension of the input tensor.

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  142. def maxPoolGradGrad[D <: RealDataType](originalInput: tensors.Tensor[D], originalOutput: tensors.Tensor[D], outputGradient: tensors.Tensor[D], windowSize: Seq[Int], stride1: Int, stride2: Int, padding: ConvPaddingMode, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    Permalink

    The maxPoolGradGrad op computes the gradient of the maxPoolGrad op.

    The maxPoolGradGrad op computes the gradient of the maxPoolGrad op.

    originalInput

    Original input tensor.

    originalOutput

    Original output tensor.

    outputGradient

    4-D tensor containing the gradients w.r.t. the output of the max pooling and whose shape depends on the value of dataFormat.

    windowSize

    The size of the pooling window for each dimension of the input tensor.

    stride1

    Stride of the sliding window along the second dimension of input.

    stride2

    Stride of the sliding window along the third dimension of input.

    padding

    Padding mode to use.

    dataFormat

    Format of the input and output data.

    returns

    Result as a new 4-D tensor whose dimension order depends on the value of dataFormat.

    Definition Classes
    NN
  143. def maximum[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The maximum op returns the element-wise maximum between two tensors.

    The maximum op returns the element-wise maximum between two tensors. I.e., z = x > y ? x : y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
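    For example (assuming the wildcard import org.platanios.tensorflow.api._):

    import org.platanios.tensorflow.api._

    val x = Tensor(1, 5, 3)
    val y = Tensor(4, 2, 3)
    tfi.maximum(x, y)  // ==> [4, 5, 3]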
  144. def mean[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The mean op computes the mean of elements across axes of a tensor.

    The mean op computes the mean of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[1.0, 1.0], [2.0, 2.0]]
    mean(x) ==> 1.5
    mean(x, 0) ==> [1.5, 1.5]
    mean(x, 1) ==> [1.0, 2.0]
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  145. def min[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The min op computes the minimum of elements across axes of a tensor.

    The min op computes the minimum of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[1.0, 1.0], [2.0, 2.0]]
    min(x) ==> 1.0
    min(x, 0) ==> [1.0, 1.0]
    min(x, 1) ==> [1.0, 2.0]
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  146. def minimum[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The minimum op returns the element-wise minimum between two tensors.

    The minimum op returns the element-wise minimum between two tensors. I.e., z = x < y ? x : y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  147. def mod[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The mod op computes the remainder of the division between two tensors element-wise.

    The mod op computes the remainder of the division between two tensors element-wise.

    The op emulates C semantics in that the result is consistent with a truncating divide. E.g., truncate(x / y) * y + truncateMod(x, y) = x.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  148. def mostPreciseDataType(dataTypes: types.DataType*): types.DataType

    Permalink
    Definition Classes
    API
  149. def multiply[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The multiply op multiplies two tensors element-wise.

    The multiply op multiplies two tensors element-wise. I.e., z = x * y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  150. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  151. def negate[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The negate op computes the numerical negative value of a tensor element-wise.

    The negate op computes the numerical negative value of a tensor element-wise. I.e., y = -x.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  152. def notEqual[D <: ReducibleDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[types.BOOLEAN]

    Permalink

    The notEqual op computes the truth value of x != y element-wise.

    The notEqual op computes the truth value of x != y element-wise.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  153. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  154. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  155. def oneHot[D <: types.DataType, I <: UInt8OrInt32OrInt64](indices: tensors.Tensor[I], depth: tensors.Tensor[types.INT32], onValue: tensors.Tensor[D] = null, offValue: tensors.Tensor[D] = null, axis: Int = -1, dataType: types.DataType = null): tensors.Tensor[D]

    Permalink

    The oneHot op returns a one-hot tensor.

    The oneHot op returns a one-hot tensor.

    The locations represented by indices in indices take value onValue, while all other locations take value offValue. onValue and offValue must have matching data types. If dataType is also provided, they must be the same data type as specified by dataType.

    If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (which defaults to the last axis).

    If indices is a scalar the output shape will be a vector of length depth.

    If indices is a vector of length features, the output shape will be:

    • [features, depth], if axis == -1, and
    • [depth, features], if axis == 0.

    If indices is a matrix (batch) with shape [batch, features], the output shape will be:

    • [batch, features, depth], if axis == -1,
    • [batch, depth, features], if axis == 1, and
    • [depth, batch, features], if axis == 0.

    If dataType is not provided, the function will attempt to assume the data type of onValue or offValue, if one or both are passed in. If none of onValue, offValue, or dataType are provided, dataType will default to the FLOAT32 data type.

    Note: If a non-numeric data type output is desired (e.g., STRING or BOOLEAN), both onValue and offValue **must** be provided to oneHot.

    For example:

    // 'indices' = [0, 2, -1, 1]
    // 'depth' = 3
    // 'onValue' = 5.0
    // 'offValue' = 0.0
    // 'axis' = -1
    // The output tensor has shape [4, 3]
    oneHot(indices, depth, onValue, offValue, axis) ==>
      [[5.0, 0.0, 0.0],  // oneHot(0)
       [0.0, 0.0, 5.0],  // oneHot(2)
       [0.0, 0.0, 0.0],  // oneHot(-1)
       [0.0, 5.0, 0.0]]  // oneHot(1)
    
    // 'indices' = [[0, 2], [1, -1]]
    // 'depth' = 3
    // 'onValue' = 1.0
    // 'offValue' = 0.0
    // 'axis' = -1
    // The output tensor has shape [2, 2, 3]
    oneHot(indices, depth, onValue, offValue, axis) ==>
      [[[1.0, 0.0, 0.0],   // oneHot(0)
        [0.0, 0.0, 1.0]],  // oneHot(2)
       [[0.0, 1.0, 0.0],   // oneHot(1)
        [0.0, 0.0, 0.0]]]  // oneHot(-1)
    indices

    Tensor containing the indices for the "on" values.

    depth

    Scalar tensor defining the depth of the one-hot dimension.

    onValue

    Scalar tensor defining the value to fill in the i-th output value, when indices(j) == i. Defaults to the value 1 with type dataType.

    offValue

    Scalar tensor defining the value to fill in the i-th output value, when indices(j) != i. Defaults to the value 0 with type dataType.

    axis

    Axis to fill. Defaults to -1, representing the last axis.

    dataType

    Data type of the output tensor. If not provided, the function will attempt to assume the data type of onValue or offValue, if one or both are passed in. If none of onValue, offValue, or dataType are provided, dataType will default to the FLOAT32 data type.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  156. def pad[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], paddings: tensors.Tensor[I], mode: ops.Basic.PaddingMode = ConstantPadding(Some(Tensor(0)))): tensors.Tensor[D]

    Permalink

    The pad op pads a tensor with zeros.

    The pad op pads a tensor with zeros.

    The op pads input with values specified by the padding mode, mode, according to the paddings you specify.

    paddings is an integer tensor with shape [n, 2], where n is the rank of input. For each dimension D of input, paddings(D, 0) indicates how many zeros to add before the contents of input in that dimension, and paddings(D, 1) indicates how many zeros to add after the contents of input in that dimension.

    If mode is ReflectivePadding then both paddings(D, 0) and paddings(D, 1) must be no greater than input.shape(D) - 1. If mode is SymmetricPadding then both paddings(D, 0) and paddings(D, 1) must be no greater than input.shape(D).

    The padded size of each dimension D of the output is equal to paddings(D, 0) + input.shape(D) + paddings(D, 1).

    For example:

    // 'input' = [[1, 2, 3], [4, 5, 6]]
    // 'paddings' = [[1, 1], [2, 2]]
    
    pad(input, paddings, ConstantPadding(0)) ==>
      [[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 2, 3, 0, 0],
       [0, 0, 4, 5, 6, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]]
    
    pad(input, paddings, ReflectivePadding) ==>
      [[6, 5, 4, 5, 6, 5, 4],
       [3, 2, 1, 2, 3, 2, 1],
       [6, 5, 4, 5, 6, 5, 4],
       [3, 2, 1, 2, 3, 2, 1]]
    
    pad(input, paddings, SymmetricPadding) ==>
      [[2, 1, 1, 2, 3, 3, 2],
       [2, 1, 1, 2, 3, 3, 2],
       [5, 4, 4, 5, 6, 6, 5],
       [5, 4, 4, 5, 6, 6, 5]]
    input

    Input tensor to be padded.

    paddings

    Tensor containing the paddings.

    mode

    Padding mode to use.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  157. def parallelStack[D <: types.DataType](inputs: Array[tensors.Tensor[D]]): tensors.Tensor[D]

    Permalink

    The parallelStack op stacks a list of rank-R tensors into one rank-(R+1) tensor, in parallel.

    The parallelStack op stacks a list of rank-R tensors into one rank-(R+1) tensor, in parallel.

    The op packs the list of tensors in inputs into a tensor with rank one higher than each tensor in inputs, by packing them along the first dimension. Given a list of N tensors of shape [A, B, C], the output tensor will have shape [N, A, B, C].

    For example:

    // 'x' is [1, 4]
    // 'y' is [2, 5]
    // 'z' is [3, 6]
    parallelStack(Array(x, y, z)) ==> [[1, 4], [2, 5], [3, 6]]

    The op requires that the shape of all input tensors is known at graph construction time.

    The difference between stack and parallelStack is that stack requires that all of the inputs be computed before the operation begins executing, but does not require that the input shapes be known during graph construction. parallelStack will copy pieces of the input into the output as they become available. In some situations this can provide a performance benefit.

    inputs

    Input tensors to be stacked.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  158. def polygamma[D <: Float32OrFloat64](n: tensors.Tensor[D], x: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The polygamma op computes the polygamma function \psi^{(n)}(x).

    The polygamma op computes the polygamma function \psi^{(n)}(x).

    The polygamma function is defined as:

    \psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x), where \psi(x) is the digamma function.
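
    For example (an illustrative sketch; n = 0 gives the digamma function, and values are rounded):

    // 'n' is [0.0, 0.0]
    // 'x' is [1.0, 0.5]
    polygamma(n, x) ==> [-0.5772157, -1.9635100]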

    n

    First input tensor.

    x

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  159. def pow[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The pow op computes the power of one tensor raised to another, element-wise.

    The pow op computes the power of one tensor raised to another, element-wise.

    Given a tensor x and a tensor y, the op computes x^y for the corresponding elements in x and y.

    For example:

    // Tensor 'x' is [[2, 2], [3, 3]]
    // Tensor 'y' is [[8, 16], [2, 3]]
    pow(x, y) ==> [[256, 65536], [9, 27]]
    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  160. def preventGradient[D <: types.DataType](input: tensors.Tensor[D], message: String = ""): tensors.Tensor[D]

    Permalink

    The preventGradient op triggers an error if a gradient is requested.

    The preventGradient op triggers an error if a gradient is requested.

    When executed in a graph, this op outputs its input tensor as-is.

    When building ops to compute gradients, the TensorFlow gradient system will return an error when trying to look up the gradient of this op, because no gradient may ever be registered for it. This op exists to prevent subtle bugs from silently returning unimplemented gradients in some corner cases.

    input

    Input tensor.

    message

    Message to print along with the error.

    returns

    Result as a new tensor which has the same value as the input tensor.

    Definition Classes
    Basic
  161. def prod[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The prod op computes the product of elements across axes of a tensor.

    The prod op computes the product of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[1, 1, 1], [1, 1, 1]]
    prod(x) ==> 1
    prod(x, 0) ==> [1, 1, 1]
    prod(x, 1) ==> [1, 1]
    prod(x, 1, keepDims = true) ==> [[1], [1]]
    prod(x, [0, 1]) ==> 1
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  162. def randomNormal[D <: Float16OrFloat32OrFloat64, I <: Int32OrInt64](dataType: D, shape: tensors.Tensor[I])(mean: tensors.Tensor[D] = Tensor.zeros(dataType, Shape()), standardDeviation: tensors.Tensor[D] = Tensor.ones(dataType, Shape()), seed: Option[Int] = None): tensors.Tensor[D]

    Permalink

    The randomNormal op outputs random values drawn from a Normal distribution.

    The randomNormal op outputs random values drawn from a Normal distribution.

    The generated values follow a Normal distribution with mean mean and standard deviation standardDeviation.
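
    For example (a minimal usage sketch; the sample values themselves are random):

    // Draw a [2, 3] tensor of FLOAT32 samples from N(0, 1), with a fixed op-level seed:
    randomNormal(FLOAT32, Tensor(2, 3))(seed = Some(42))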

    dataType

    Data type for the output tensor.

    shape

    Rank-1 tensor containing the shape of the output tensor.

    mean

    Scalar tensor containing the mean of the Normal distribution. Defaults to 0.

    standardDeviation

    Scalar tensor containing the standard deviation of the Normal distribution. Defaults to 1.

    seed

    Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.

    returns

    Result as a new tensor.

    Definition Classes
    Random
  163. def randomShuffle[D <: types.DataType](value: tensors.Tensor[D], seed: Option[Int] = None): tensors.Tensor[D]

    Permalink

    The randomShuffle op randomly shuffles a tensor along its first axis.

    The randomShuffle op randomly shuffles a tensor along its first axis.

    The tensor is shuffled along axis 0, such that each value(j) is mapped to one and only one output(i). For example, a mapping that might occur for a 3x2 tensor is:

    [[1, 2],       [[5, 6],
     [3, 4],  ==>   [1, 2],
     [5, 6]]        [3, 4]]
    value

    Tensor to be shuffled.

    seed

    Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.

    returns

    Result as a new tensor.

    Definition Classes
    Random
  164. def randomTruncatedNormal[D <: Float16OrFloat32OrFloat64, I <: Int32OrInt64](dataType: D, shape: tensors.Tensor[I])(mean: tensors.Tensor[D] = Tensor.zeros(dataType, Shape()), standardDeviation: tensors.Tensor[D] = Tensor.ones(dataType, Shape()), seed: Option[Int] = None): tensors.Tensor[D]

    Permalink

    The randomTruncatedNormal op outputs random values drawn from a truncated Normal distribution.

    The randomTruncatedNormal op outputs random values drawn from a truncated Normal distribution.

    The generated values follow a Normal distribution with mean mean and standard deviation standardDeviation, except that values whose magnitude is more than two standard deviations from the mean are dropped and resampled.

    dataType

    Data type for the output tensor.

    shape

    Rank-1 tensor containing the shape of the output tensor.

    mean

    Scalar tensor containing the mean of the Normal distribution. Defaults to 0.

    standardDeviation

    Scalar tensor containing the standard deviation of the Normal distribution. Defaults to 1.

    seed

    Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.

    returns

    Result as a new tensor.

    Definition Classes
    Random
  165. def randomUniform[D <: Int32OrInt64OrFloat16OrFloat32OrFloat64, I <: Int32OrInt64](dataType: D, shape: tensors.Tensor[I])(minValue: tensors.Tensor[D] = Tensor.zeros(dataType, Shape()), maxValue: tensors.Tensor[D] = Tensor.ones(dataType, Shape()), seed: Option[Int] = None): tensors.Tensor[D]

    Permalink

    The randomUniform op outputs random values drawn from a uniform distribution.

    The randomUniform op outputs random values drawn from a uniform distribution.

    The generated values follow a uniform distribution in the range [minValue, maxValue). The lower bound minValue is included in the range, while the upper bound maxValue is not.

    In the integer case, the random integers are slightly biased unless maxValue - minValue is an exact power of two. The bias is small for values of maxValue - minValue significantly smaller than the range of the output (either 2^32 or 2^64, depending on the data type).

    dataType

    Data type for the output tensor.

    shape

    Rank-1 tensor containing the shape of the output tensor.

    minValue

    Scalar tensor containing the inclusive lower bound on the range of random values to generate. Defaults to 0.

    maxValue

    Scalar tensor containing the exclusive upper bound on the range of random values to generate. Defaults to 1.

    seed

    Optional random seed, used to generate a random seed pair for the random number generator, when combined with the graph-level seed.

    returns

    Result as a new tensor.

    Definition Classes
    Random
  166. def range[D <: NumericDataType](start: tensors.Tensor[D], limit: tensors.Tensor[D], delta: tensors.Tensor[D] = null): tensors.Tensor[D]

    Permalink

    The range op constructs a sequence of numbers.

    The range op constructs a sequence of numbers.

    The op creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit. The data type of the resulting tensor is inferred from the inputs unless it is provided explicitly.

    For example:

    // 'start' is 3
    // 'limit' is 18
    // 'delta' is 3
    range(start, limit, delta) ==> [3, 6, 9, 12, 15]
    
    // 'start' is 3
    // 'limit' is 1
    // 'delta' is -0.5
    range(start, limit, delta) ==> [3.0, 2.5, 2.0, 1.5]
    start

    Rank 0 (i.e., scalar) tensor that contains the starting value of the number sequence.

    limit

    Rank 0 (i.e., scalar) tensor that contains the ending value (exclusive) of the number sequence.

    delta

    Rank 0 (i.e., scalar) tensor that contains the difference between consecutive numbers in the sequence.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  167. def rank[T <: tensors.TensorLike[_]](input: T): tensors.Tensor[types.INT32]

    Permalink

    The rank op returns the rank of a tensor.

    The rank op returns the rank of a tensor.

    The op returns an integer representing the rank of input.

    For example:

    // 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
    // 't' has shape [2, 2, 3]
    rank(t) ==> 3

    Note that the rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as order, degree, or number of dimensions.

    input

    Tensor whose rank to return.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  168. def realDivide[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The realDivide op divides two real tensors element-wise.

    The realDivide op divides two real tensors element-wise.

    If x and y are real-valued tensors, the op will return the floating-point division.

    I.e., z = x / y, for x and y being real tensors.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
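
    For example (an illustrative sketch with assumed values):

    // 'x' is [4.0, 3.0]
    // 'y' is [2.0, 2.0]
    realDivide(x, y) ==> [2.0, 1.5]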

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  169. def reciprocal[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The reciprocal op computes the reciprocal value of a tensor element-wise.

    The reciprocal op computes the reciprocal value of a tensor element-wise. I.e., y = 1 / x.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  170. def relu[D <: RealDataType](x: tensors.Tensor[D], alpha: Float = 0.0f): tensors.Tensor[D]

    Permalink

    The relu op computes the rectified linear unit activation function.

    The relu op computes the rectified linear unit activation function.

    The rectified linear unit activation function is defined as relu(x) = max(x, 0).
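
    For example (an illustrative sketch; the alpha case assumes the leaky formulation described below):

    // 'x' is [-1.0, 2.0]
    relu(x) ==> [0.0, 2.0]
    relu(x, alpha = 0.2f) ==> [-0.2, 2.0]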

    x

    Input tensor.

    alpha

    Slope of the negative section, also known as leakage parameter. If other than 0.0f, the negative part will be equal to alpha * x instead of 0. Defaults to 0.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  171. def relu6[D <: RealDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The relu6 op computes the rectified linear unit 6 activation function.

    The relu6 op computes the rectified linear unit 6 activation function.

    The rectified linear unit 6 activation function is defined as relu6(x) = min(max(x, 0), 6).

    Source: [Convolutional Deep Belief Networks on CIFAR-10. A. Krizhevsky](http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf)

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  172. def requiredSpaceToBatchPaddingsAndCrops(inputShape: tensors.Tensor[types.INT32], blockShape: tensors.Tensor[types.INT32], basePaddings: tensors.Tensor[types.INT32] = null): (tensors.Tensor[types.INT32], tensors.Tensor[types.INT32])

    Permalink

    The requiredSpaceToBatchPaddingsAndCrops op calculates the paddings and crops required to make blockShape divide inputShape.

    The requiredSpaceToBatchPaddingsAndCrops op calculates the paddings and crops required to make blockShape divide inputShape.

    This function can be used to calculate a suitable paddings/crops argument for use with the spaceToBatchND/batchToSpaceND functions.

    The returned tensors, paddings and crops satisfy:

    • paddings(i, 0) == basePaddings(i, 0),
    • 0 <= paddings(i, 1) - basePaddings(i, 1) < blockShape(i),
    • (inputShape(i) + paddings(i, 0) + paddings(i, 1)) % blockShape(i) == 0,
    • crops(i, 0) == 0, and
    • crops(i, 1) == paddings(i, 1) - basePaddings(i, 1).
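
    For example (an illustrative sketch derived from the properties above, with basePaddings left at its default of all zeros):

    // 'inputShape' is [3, 5]
    // 'blockShape' is [2, 2]
    requiredSpaceToBatchPaddingsAndCrops(inputShape, blockShape) ==> ([[0, 1], [0, 1]], [[0, 1], [0, 1]])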
    inputShape

    Tensor with shape [N].

    blockShape

    Tensor with shape [N].

    basePaddings

    Optional tensor with shape [N, 2] that specifies the minimum amount of padding to use. All elements must be non-negative. Defaults to a tensor containing all zeros.

    returns

    Tuple containing the paddings and crops required.

    Definition Classes
    Basic
    Annotations
    @throws( ... )
    Exceptions thrown

    InvalidShapeException If inputShape, blockShape, or basePaddings, has invalid shape.

  173. def reshape[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], shape: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The reshape op reshapes a tensor.

    The reshape op reshapes a tensor.

    Given input, the op returns a tensor that has the same values as input but has shape shape. If one component of shape is the special value -1, then the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens a tensor into a one-dimensional tensor. At most one component of shape can be set to -1.

    If shape is a one-dimensional or higher tensor, then the operation returns a tensor with shape shape filled with the values of input. In this case, the number of elements implied by shape must be the same as the number of elements in input.

    For example:

    // Tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] => It has shape [9]
    reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    
    // Tensor 't' is [[[1, 1], [2, 2]],
    //                [[3, 3], [4, 4]]] => It has shape [2, 2, 2]
    reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
                            [3, 3, 4, 4]]
    
    // Tensor 't' is [[[1, 1, 1],
    //                 [2, 2, 2]],
    //                [[3, 3, 3],
    //                 [4, 4, 4]],
    //                [[5, 5, 5],
    //                 [6, 6, 6]]] => It has shape [3, 2, 3]
    reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
    
    // '-1' can also be used to infer the shape. Some examples follow.
    
    // '-1' is inferred to be 9:
    reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                             [4, 4, 4, 5, 5, 5, 6, 6, 6]]
    
    // '-1' is inferred to be 2:
    reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                             [4, 4, 4, 5, 5, 5, 6, 6, 6]]
    
    // '-1' is inferred to be 3:
    reshape(t, [2, -1, 3]) ==> [[[1, 1, 1],
                                 [2, 2, 2],
                                 [3, 3, 3]],
                                [[4, 4, 4],
                                 [5, 5, 5],
                                 [6, 6, 6]]]
    
    // Tensor 't' is [7]
    // An empty shape passed to 'reshape' will result in a scalar
    reshape(t, []) ==> 7
    input

    Input tensor.

    shape

    Shape of the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  174. def reverse[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], axes: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The reverse op reverses specific dimensions of a tensor.

    The reverse op reverses specific dimensions of a tensor.

    Given an input tensor, and an integer array of axes representing the set of dimensions of input to reverse, this op reverses each dimension i of input, for which there exists j such that axes(j) == i.

    input can have up to 8 dimensions. axes may contain zero or more entries. If an index is specified more than once, an 'InvalidArgument' error will be raised.

    For example:

    // Tensor 't' is [[[[ 0,  1,  2,  3],
    //                  [ 4,  5,  6,  7],
    //                  [ 8,  9, 10, 11]],
    //                 [[12, 13, 14, 15],
    //                  [16, 17, 18, 19],
    //                  [20, 21, 22, 23]]]] => It has shape [1, 2, 3, 4]
    
    // 'axes' is [3] or [-1]
    reverse(t, axes) ==> [[[[ 3,  2,  1,  0],
                            [ 7,  6,  5,  4],
                            [11, 10,  9,  8]],
                           [[15, 14, 13, 12],
                            [19, 18, 17, 16],
                            [23, 22, 21, 20]]]]
    
    // 'axes' is [1] or [-3]
    reverse(t, axes) ==> [[[[12, 13, 14, 15],
                            [16, 17, 18, 19],
                            [20, 21, 22, 23]],
                           [[ 0,  1,  2,  3],
                            [ 4,  5,  6,  7],
                            [ 8,  9, 10, 11]]]]
    
    // 'axes' is [2] or [-2]
    reverse(t, axes) ==> [[[[ 8,  9, 10, 11],
                            [ 4,  5,  6,  7],
                            [ 0,  1,  2,  3]],
                           [[20, 21, 22, 23],
                            [16, 17, 18, 19],
                            [12, 13, 14, 15]]]]
    input

    Input tensor to reverse. It must have rank at most 8.

    axes

    Dimensions of the input tensor to reverse.

    returns

    Result as a new tensor which has the same shape as input.

    Definition Classes
    Basic
  175. def reverseSequence[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], sequenceLengths: tensors.Tensor[I], sequenceAxis: Int, batchAxis: Int = 0): tensors.Tensor[D]

    Permalink

    The reverseSequence op reverses variable length slices.

    The reverseSequence op reverses variable length slices.

    The op first slices input along the dimension batchAxis, and for each slice i, it reverses the first sequenceLengths(i) elements along the dimension sequenceAxis.

    The elements of sequenceLengths must obey sequenceLengths(i) <= input.shape(sequenceAxis), and it must be a vector of length input.shape(batchAxis).

    The output slice i along dimension batchAxis is then given by input slice i, with the first sequenceLengths(i) slices along dimension sequenceAxis reversed.

    For example:

    // Given:
    // sequenceAxis = 1
    // batchAxis = 0
    // input.shape = [4, 8, ...]
    // sequenceLengths = [7, 2, 3, 5]
    // slices of 'input' are reversed on 'sequenceAxis', but only up to 'sequenceLengths':
    output(0, 0::7, ---) == input(0, 6::-1::, ---)
    output(1, 0::2, ---) == input(1, 1::-1::, ---)
    output(2, 0::3, ---) == input(2, 2::-1::, ---)
    output(3, 0::5, ---) == input(3, 4::-1::, ---)
    // while entries past 'sequenceLengths' are copied through:
    output(0, 7::, ---) == input(0, 7::, ---)
    output(1, 2::, ---) == input(1, 2::, ---)
    output(2, 3::, ---) == input(2, 3::, ---)
    output(3, 5::, ---) == input(3, 5::, ---)
    
    // In contrast, given:
    // sequenceAxis = 0
    // batchAxis = 2
    // input.shape = [8, ?, 4, ...]
    // sequenceLengths = [7, 2, 3, 5]
    // slices of 'input' are reversed on 'sequenceAxis', but only up to 'sequenceLengths':
    output(0::7, ::, 0, ---) == input(6::-1::, ::, 0, ---)
    output(0::2, ::, 1, ---) == input(1::-1::, ::, 1, ---)
    output(0::3, ::, 2, ---) == input(2::-1::, ::, 2, ---)
    output(0::5, ::, 3, ---) == input(4::-1::, ::, 3, ---)
    // while entries past 'sequenceLengths' are copied through:
    output(7::, ::, 0, ---) == input(7::, ::, 0, ---)
    output(2::, ::, 1, ---) == input(2::, ::, 1, ---)
    output(3::, ::, 2, ---) == input(3::, ::, 2, ---)
    output(5::, ::, 3, ---) == input(5::, ::, 3, ---)
    input

    Input tensor to reverse.

    sequenceLengths

    One-dimensional tensor with length input.shape(batchAxis) and max(sequenceLengths) <= input.shape(sequenceAxis).

    sequenceAxis

    Tensor dimension which is partially reversed.

    batchAxis

    Tensor dimension along which the reversal is performed.

    returns

    Result as a new tensor which has the same shape as input.

    Definition Classes
    Basic
  176. def round[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The round op computes the round value of a tensor element-wise.

    The round op computes the round value of a tensor element-wise.

    Rounds half to even. Also known as banker's rounding. If you want to round according to the current system rounding mode, use the roundInt op instead.

    For example:

    // 'a' is [0.9, 2.5, 2.3, 1.5, -4.5]
    round(a) ==> [1.0, 2.0, 2.0, 2.0, -4.0]
    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  177. def roundInt[D <: Float16OrFloat32OrFloat64, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The roundInt op computes the round value of a tensor element-wise.

    The roundInt op computes the round value of a tensor element-wise.

    If the result is midway between two representable values, the even representable is chosen.

    For example:

    roundInt(-1.5) ==> -2.0
    roundInt(0.5000001) ==> 1.0
    roundInt([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  178. def rsqrt[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The rsqrt op computes the reciprocal of the square root of a tensor element-wise.

    The rsqrt op computes the reciprocal of the square root of a tensor element-wise. I.e., y = 1 / \sqrt{x} = 1 / x^{1/2}.
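
    For example (an illustrative sketch with assumed values):

    // 'x' is [4.0, 16.0]
    rsqrt(x) ==> [0.5, 0.25]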

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  179. def scalarMul[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](scalar: tensors.Tensor[D], tensor: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The scalarMul op multiplies a scalar tensor with another, potentially sparse, tensor.

    The scalarMul op multiplies a scalar tensor with another, potentially sparse, tensor.

    This function is intended for use in gradient code which might deal with OutputIndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors.

    scalar

    Scalar tensor.

    tensor

    Tensor to multiply the scalar tensor with.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  180. def scatterND[D <: types.DataType, I <: Int32OrInt64](indices: tensors.Tensor[I], updates: tensors.Tensor[D], shape: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The scatterND op scatters updates into a new (initially zero-valued) tensor, according to indices.

    The scatterND op scatters updates into a new (initially zero-valued) tensor, according to indices.

    The op creates a new tensor by applying sparse updates to individual values or slices within a zero-valued tensor of the given shape, according to indices. It is the inverse of the gatherND op, which extracts values or slices from a given tensor.

    WARNING: The order in which the updates are applied is non-deterministic, and so the output will be non-deterministic if indices contains duplicates.

    indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape(-1) <= shape.rank. The last dimension of indices corresponds to indices into elements (if indices.shape(-1) == shape.rank) or slices (if indices.shape(-1) < shape.rank) along dimension indices.shape(-1) of shape.

    updates is a tensor with shape indices.shape(::-1) + shape(indices.shape(-1)::).

    The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.

    In Scala, this scatter operation would look like this:

    val indices = constant(Tensor(Tensor(4), Tensor(3), Tensor(1), Tensor(7)))
    val updates = constant(Tensor(9, 10, 11, 12))
    val shape = constant(Tensor(8))
    scatterND(indices, updates, shape) ==> [0, 11, 0, 10, 9, 0, 0, 12]

    We can also insert entire slices of a higher-rank tensor all at once. For example, say we want to insert two slices into the first dimension of a rank-3 tensor with two matrices of new values.

    In Scala, this scatter operation would look like this:

    val indices = constant(Tensor(Tensor(0), Tensor(2)))
    val updates = constant(Tensor(Tensor(Tensor(5, 5, 5, 5), Tensor(6, 6, 6, 6),
                                         Tensor(7, 7, 7, 7), Tensor(8, 8, 8, 8)),
                                  Tensor(Tensor(5, 5, 5, 5), Tensor(6, 6, 6, 6),
                                         Tensor(7, 7, 7, 7), Tensor(8, 8, 8, 8))))
    val shape = constant(Tensor(4, 4, 4))
    scatterND(indices, updates, shape) ==>
      [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
       [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
       [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
       [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
    indices

    Indices tensor.

    updates

    Updates to scatter into the output tensor.

    shape

    One-dimensional tensor specifying the shape of the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  181. def segmentMax[D <: ReducibleDataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The segmentMax op computes the max along segments of a tensor.

    The segmentMax op computes the max along segments of a tensor.

    The op computes a tensor such that output(i) = \max_{j...} data(j,...) where the max is over all j such that segmentIndices(j) == i. Unlike unsortedSegmentMax, segmentIndices need to be sorted.

    If the max is empty for a given segment index i, output(i) is set to 0.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.
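
    For example (an illustrative sketch with assumed values):

    // 'data' is [[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]]
    // 'segmentIndices' is [0, 0, 1]
    segmentMax(data, segmentIndices) ==> [[4, 3, 3, 4], [5, 6, 7, 8]]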

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  182. def segmentMean[D <: ReducibleDataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The segmentMean op computes the mean along segments of a tensor.

    The segmentMean op computes the mean along segments of a tensor.

    The op computes a tensor such that output(i) = \frac{sum_{j...} data(j,...)}{N} where the sum is over all j such that segmentIndices(j) == i and N is the total number of values being summed. Unlike unsortedSegmentMean, segmentIndices need to be sorted.

    If the sum is empty for a given segment index i, output(i) is set to 0.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  183. def segmentMin[D <: ReducibleDataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The segmentMin op computes the min along segments of a tensor.

    The segmentMin op computes the min along segments of a tensor.

    The op computes a tensor such that output(i) = \min_{j...} data(j,...) where the min is over all j such that segmentIndices(j) == i. Unlike unsortedSegmentMin, segmentIndices need to be sorted.

    If the min is empty for a given segment index i, output(i) is set to 0.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  184. def segmentProd[D <: ReducibleDataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The segmentProd op computes the product along segments of a tensor.

    The segmentProd op computes the product along segments of a tensor.

    The op computes a tensor such that output(i) = \prod_{j...} data(j,...) where the product is over all j such that segmentIndices(j) == i. Unlike unsortedSegmentProd, segmentIndices need to be sorted.

    If the product is empty for a given segment index i, output(i) is set to 1.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  185. def segmentSum[D <: ReducibleDataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The segmentSum op computes the sum along segments of a tensor.

    The segmentSum op computes the sum along segments of a tensor.

    The op computes a tensor such that output(i) = \sum_{j...} data(j,...) where the sum is over all j such that segmentIndices(j) == i. Unlike unsortedSegmentSum, segmentIndices need to be sorted.

    If the sum is empty for a given segment index i, output(i) is set to 0.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.
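
    For example (an illustrative sketch with assumed values):

    // 'data' is [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
    // 'segmentIndices' is [0, 0, 1]
    segmentSum(data, segmentIndices) ==> [[0, 0, 0, 0], [5, 6, 7, 8]]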

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  186. def select[D <: types.DataType](condition: tensors.Tensor[types.BOOLEAN], x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The select op selects elements from x or y, depending on condition.

    The select op selects elements from x or y, depending on condition.

    The x and y tensors must have the same shape. The output tensor will also have the same shape.

    The condition tensor must be a scalar if x and y are scalars. If x and y are vectors or higher rank, then condition must be either a scalar, or a vector with size matching the first dimension of x, or it must have the same shape as x.

    The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from x (if true) or y (if false).

    If condition is a vector and x and y are higher rank matrices, then it chooses which row (outer dimension) to copy from x and y. If condition has the same shape as x and y, then it chooses which element to copy from x and y.

    For example:

    // 'condition' tensor is [[true,  false], [false, true]]
    // 'x' is [[1, 2], [3, 4]]
    // 'y' is [[5, 6], [7, 8]]
    select(condition, x, y) ==> [[1, 6], [7, 4]]
    
    // 'condition' tensor is [true, false]
    // 'x' is [[1, 2], [3, 4]]
    // 'y' is [[5, 6], [7, 8]]
    select(condition, x, y) ==> [[1, 2], [7, 8]]
    condition

    Boolean condition tensor.

    x

    Tensor which may have the same shape as condition. If condition has rank 1, then x may have a higher rank, but its first dimension must match the size of condition.

    y

    Tensor with the same data type and shape as x.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  187. def selu[D <: DecimalDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The selu op computes the scaled exponential linear unit activation function.

    The selu op computes the scaled exponential linear unit activation function.

    The scaled exponential linear unit activation function is defined as selu(x) = scale * x, if x > 0, and selu(x) = scale * alpha * (exp(x) - 1), otherwise, where scale ≈ 1.0507 and alpha ≈ 1.6733.

    Source: [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  188. def sequenceLoss[D <: DecimalDataType, I <: Int32OrInt64](logits: tensors.Tensor[D], labels: tensors.Tensor[I], weights: tensors.Tensor[D] = null, averageAcrossTimeSteps: Boolean = true, averageAcrossBatch: Boolean = true, lossFn: (tensors.Tensor[D], tensors.Tensor[I]) ⇒ tensors.Tensor[D] = ...): tensors.Tensor[D]

    Permalink

    The sequenceLoss op computes an optionally weighted loss for a sequence of predicted logits.

    The sequenceLoss op computes an optionally weighted loss for a sequence of predicted logits.

    Depending on the values of averageAcrossTimeSteps and averageAcrossBatch, the returned tensor will have rank 0, 1, or 2, as these arguments reduce the cross-entropy loss for each label, which has shape [batchSize, sequenceLength], over the respective dimensions. For example, if averageAcrossTimeSteps is true and averageAcrossBatch is false, then the returned tensor will have shape [batchSize].

    logits

    Tensor of shape [batchSize, sequenceLength, numClasses] containing unscaled log probabilities.

    labels

    Tensor of shape [batchSize, sequenceLength] containing the true label at each time step.

    weights

    Optionally, a tensor of shape [batchSize, sequenceLength] containing weights to use for each prediction. When using weights as masking, set all valid time steps to 1 and all padded time steps to 0 (e.g., a mask returned by tf.sequenceMask).

    averageAcrossTimeSteps

    If true, the loss is summed across the sequence dimension and divided by the total label weight across all time steps.

    averageAcrossBatch

    If true, the loss is summed across the batch dimension and divided by the batch size.

    lossFn

    Loss function to use that takes the predicted logits and the true labels as inputs and returns the loss value. Defaults to sparseSoftmaxCrossEntropy.

    returns

    Result as a new tensor.

    Definition Classes
    NN
    Annotations
    @throws( ... )
    Exceptions thrown

    InvalidShapeException If any of logits, labels, or weights has invalid shape.

  189. def sequenceMask[D <: NumericDataType](lengths: tensors.Tensor[D], maxLength: tensors.Tensor[D] = null): tensors.Tensor[types.BOOLEAN]

    Permalink

    The sequenceMask op returns a mask tensor representing the first N positions of each row of a matrix.

    The sequenceMask op returns a mask tensor representing the first N positions of each row of a matrix.

    For example:

    // 'lengths' = [1, 3, 2]
    // 'maxLength' = 5
    sequenceMask(lengths, maxLength) ==>
      [[true, false, false, false, false],
       [true,  true,  true, false, false],
       [true,  true, false, false, false]]
    lengths

    One-dimensional integer tensor containing the lengths to keep for each row. If maxLength is provided, then all values in lengths must be smaller than maxLength.

    maxLength

    Scalar integer tensor representing the maximum length of each row. Defaults to the maximum value in lengths.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
    Annotations
    @throws( ... )
    Exceptions thrown

    IllegalArgumentException If maxLength is not a scalar.

  190. def shape[T <: tensors.TensorLike[_], DR <: types.DataType](input: T, dataType: DR): tensors.Tensor[DR]

    Permalink

    The shape op returns the shape of a tensor.

    The shape op returns the shape of a tensor.

    The op returns a one-dimensional tensor representing the shape of input.

    For example:

    // 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
    shape(t) ==> [2, 2, 3]
    input

    Tensor whose shape to return.

    dataType

    Optional data type to use for the output of this op.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  191. def shape[T <: tensors.TensorLike[_]](input: T): tensors.Tensor[types.INT64]

    Permalink

    The shape op returns the shape of a tensor.

    The shape op returns the shape of a tensor.

    The op returns a one-dimensional tensor representing the shape of input.

    For example:

    // 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
    shape(t) ==> [2, 2, 3]
    input

    Tensor whose shape to return.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  192. def shapeN[DR <: types.DataType](inputs: Seq[tensors.Tensor[_]], dataType: DR): Seq[tensors.Tensor[DR]]

    Permalink

    The shapeN op returns the shape of an array of tensors.

    The shapeN op returns the shape of an array of tensors.

    The op returns an array of one-dimensional tensors, each one representing the shape of the corresponding tensor in inputs.

    inputs

    Tensors whose shapes to return.

    dataType

    Optional data type to use for the outputs of this op.

    returns

    Result as a sequence of new tensors.

    Definition Classes
    Basic
  193. def shapeN(inputs: Seq[tensors.Tensor[_]]): Seq[tensors.Tensor[types.INT64]]

    Permalink

    The shapeN op returns the shape of an array of tensors.

    The shapeN op returns the shape of an array of tensors.

    The op returns an array of one-dimensional tensors, each one representing the shape of the corresponding tensor in inputs.
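
    For example (an illustrative sketch with assumed values):

    // 'x' is [[1, 2], [3, 4]]
    // 'y' is [1, 2, 3]
    shapeN(Seq(x, y)) ==> Seq([2, 2], [3])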

    inputs

    Tensors whose shapes to return.

    returns

    Result as a sequence of new tensors.

    Definition Classes
    Basic
  194. def sigmoid[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The sigmoid op computes the sigmoid function element-wise on a tensor.

    The sigmoid op computes the sigmoid function element-wise on a tensor.

    Specifically, y = 1 / (1 + exp(-x)).
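
    For example (an illustrative sketch; values are rounded):

    // 'x' is [0.0, 1.0]
    sigmoid(x) ==> [0.5, 0.7310586]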

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  195. def sigmoidCrossEntropy[D <: DecimalDataType](logits: tensors.Tensor[D], labels: tensors.Tensor[D], weights: tensors.Tensor[D] = null): tensors.Tensor[D]

    Permalink

    The sigmoidCrossEntropy op computes the sigmoid cross entropy between logits and labels.

    The sigmoidCrossEntropy op computes the sigmoid cross entropy between logits and labels.

    The op measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multi-label classification where a picture can contain both an elephant and a dog at the same time.

    For brevity, let x = logits and z = labels. The sigmoid cross entropy (also known as logistic loss) is defined as:

      z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
    = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
    = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
    = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
    = (1 - z) * x + log(1 + exp(-x))
    = x - x * z + log(1 + exp(-x))

    For x < 0, to avoid numerical overflow in exp(-x), we reformulate the above as:

      x - x * z + log(1 + exp(-x))
    = log(exp(x)) - x * z + log(1 + exp(-x))
    = -x * z + log(1 + exp(x))

    Hence, to ensure stability and avoid numerical overflow, the implementation uses the equivalent formulation:

      max(x, 0) - x * z + log(1 + exp(-abs(x)))

    If weights is not null, then the positive examples are weighted. A value weights > 1 decreases the false negative count, hence increasing recall. Conversely, setting weights < 1 decreases the false positive count and increases precision. This can be seen from the fact that weights is introduced as a multiplicative coefficient for the positive targets term in the loss expression (where q = weights, for brevity):

      qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
    = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
    = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
    = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
    = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
    = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))

    Setting l = 1 + (q - 1) * z, to ensure stability and avoid numerical overflow, the implementation uses the equivalent formulation:

      (1 - z) * x + l * (max(-x, 0) + log(1 + exp(-abs(x))))

    logits and labels must have the same shape.
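
    For example, plugging x = 0 and z = 1 into the stable formulation above gives:

    max(0, 0) - 0 * 1 + log(1 + exp(-abs(0))) = log(2) ≈ 0.6931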

    logits

    Tensor of shape [D0, D1, ..., Dr-1, numClasses], containing unscaled log probabilities.

    labels

    Tensor of shape [D0, D1, ..., Dr-1, numClasses], where each row must be a valid probability distribution.

    weights

    Optionally, a coefficient to use for the positive examples.

    returns

    Result as a new tensor, with rank one less than that of logits and the same data type as logits, containing the sigmoid cross entropy loss.

    Definition Classes
    NN
  196. def sign[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The sign op computes an element-wise indication of the sign of a tensor.

    The sign op computes an element-wise indication of the sign of a tensor.

    I.e., y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

    Zero is returned for NaN inputs.

    For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.
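
    For example (an illustrative sketch with assumed values):

    // 'x' is [-3.0, 0.0, 2.0]
    sign(x) ==> [-1.0, 0.0, 1.0]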

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  197. def sin[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The sin op computes the sine of a tensor element-wise.

    The sin op computes the sine of a tensor element-wise. I.e., y = \sin{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  198. def sinh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The sinh op computes the hyperbolic sine of a tensor element-wise.

    The sinh op computes the hyperbolic sine of a tensor element-wise. I.e., y = \sinh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  199. def size[T <: tensors.TensorLike[_], DR <: ReducibleDataType](input: T, dataType: DR): tensors.Tensor[DR]

    Permalink

    The size op returns the size of a tensor.

    The size op returns the size of a tensor.

    The op returns a number representing the number of elements in input.

    For example:

    // 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
    size(t) ==> 12
    input

    Tensor whose size to return.

    dataType

    Optional data type to use for the output of this op.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  200. def size[T <: tensors.TensorLike[_]](input: T): tensors.Tensor[types.INT64]

    Permalink

    The size op returns the size of a tensor.

    The size op returns the size of a tensor.

    The op returns a number representing the number of elements in input.

    For example:

    // 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
    size(t) ==> 12
    input

    Tensor whose size to return.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  201. def softmax[D <: DecimalDataType](logits: tensors.Tensor[D], axis: Int = -1): tensors.Tensor[D]

    Permalink

    The softmax op computes softmax activations.

    The softmax op computes softmax activations.

    For each batch i and class j we have softmax = exp(logits) / sum(exp(logits), axis), where axis indicates the axis the softmax should be performed on.
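
    For example (an illustrative sketch; values are rounded):

    // 'logits' is [1.0, 2.0, 3.0]
    softmax(logits) ==> [0.0900306, 0.2447285, 0.6652410]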

    logits

    Tensor containing the logits.

    axis

    Axis along which to perform the softmax. Defaults to -1 denoting the last axis.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  202. def softmaxCrossEntropy[D <: DecimalDataType](logits: tensors.Tensor[D], labels: tensors.Tensor[D], axis: Int = -1): tensors.Tensor[D]

    Permalink

    The softmaxCrossEntropy op computes the softmax cross entropy between logits and labels.

    The softmaxCrossEntropy op computes the softmax cross entropy between logits and labels.

    The op measures the probabilistic error in discrete classification tasks in which the classes are mutually exclusive (each entry belongs to exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

    Back-propagation will happen into both logits and labels. To disallow back-propagation into labels, pass the label tensors through a stopGradient op before feeding them to this function.

    NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect. If using exclusive labels (wherein one and only one class is true at a time), see sparseSoftmaxCrossEntropy.

    WARNING: The op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

    logits and labels must have the same shape. A common use case is to have logits and labels of shape [batchSize, numClasses], but higher dimensions are also supported.

    logits and labels must have data type FLOAT16, FLOAT32, or FLOAT64.
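
    For example (an illustrative sketch with a one-hot label; the loss value is rounded):

    // 'logits' is [[2.0, 1.0, 0.1]]
    // 'labels' is [[1.0, 0.0, 0.0]]
    softmaxCrossEntropy(logits, labels) ==> [0.4170]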

    logits

    Tensor of shape [D0, D1, ..., Dr-1, numClasses], containing unscaled log probabilities.

    labels

    Tensor of shape [D0, D1, ..., Dr-1, numClasses], where each row must be a valid probability distribution.

    axis

    The class axis, along which the softmax is computed. Defaults to -1, which is the last axis.

    returns

    Result as a new tensor, with rank one less than that of logits and the same data type as logits, containing the softmax cross entropy loss.

    Definition Classes
    NN
  203. def softplus[D <: RealDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The softplus op computes the softplus activation function.

    The softplus op computes the softplus activation function.

    The softplus activation function is defined as softplus(x) = log(exp(x) + 1).
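
    For example (an illustrative sketch; values are rounded):

    // 'x' is [0.0, 1.0]
    softplus(x) ==> [0.6931472, 1.3132616]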

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  204. def softsign[D <: RealDataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The softsign op computes the softsign activation function.

    The softsign op computes the softsign activation function.

    The softsign activation function is defined as softsign(x) = x / (abs(x) + 1).
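
    For example (an illustrative sketch; values are rounded):

    // 'input' is [-1.0, 2.0]
    softsign(input) ==> [-0.5, 0.6666667]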

    input

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    NN
  205. def spaceToBatch[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], blockSize: Int, paddings: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The spaceToBatch op zero-pads and then rearranges (permutes) blocks of spatial data into batches.

    The spaceToBatch op zero-pads and then rearranges (permutes) blocks of spatial data into batches.

    More specifically, the op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by blockSize (which must be greater than 1). This is the reverse functionality to that of batchToSpace.

    input is a 4-dimensional input tensor with shape [batch, height, width, depth].

    paddings has shape [2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows: paddings = [[padTop, padBottom], [padLeft, padRight]]. The effective spatial dimensions of the zero-padded input tensor will be:

    • heightPad = padTop + height + padBottom
    • widthPad = padLeft + width + padRight

    blockSize indicates the block size:

    • Non-overlapping blocks of size blockSize x blockSize in the height and width dimensions are rearranged into the batch dimension at each location.
    • The batch dimension size of the output tensor is batch * blockSize * blockSize.
    • Both heightPad and widthPad must be divisible by blockSize.

    The shape of the output will be: [batch * blockSize * blockSize, heightPad / blockSize, widthPad / blockSize, depth]

    Some examples:

    // === Example #1 ===
    // input = [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    // blockSize = 2
    // paddings = [[0, 0], [0, 0]]
    spaceToBatch(input, blockSize, paddings) ==> [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]  (shape = [4, 1, 1, 1])
    
    // === Example #2 ===
    // input = [[[[1, 2, 3], [4,   5,  6]],
    //           [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    // blockSize = 2
    // paddings = [[0, 0], [0, 0]]
    spaceToBatch(input, blockSize, paddings) ==>
      [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]  (shape = [4, 1, 1, 3])
    
    // === Example #3 ===
    // input = [[[[ 1],  [2],  [3],  [ 4]],
    //           [[ 5],  [6],  [7],  [ 8]],
    //           [[ 9], [10], [11],  [12]],
    //           [[13], [14], [15],  [16]]]]  (shape = [1, 4, 4, 1])
    // blockSize = 2
    // paddings = [[0, 0], [0, 0]]
    spaceToBatch(input, blockSize, paddings) ==>
      [[[[1], [3]], [[ 9], [11]]],
       [[[2], [4]], [[10], [12]]],
       [[[5], [7]], [[13], [15]]],
       [[[6], [8]], [[14], [16]]]]  (shape = [4, 2, 2, 1])
    
    // === Example #4 ===
    // input = [[[[ 1],  [2],  [3],  [ 4]],
    //           [[ 5],  [6],  [7],  [ 8]]],
    //          [[[ 9], [10], [11],  [12]],
    //           [[13], [14], [15],  [16]]]]  (shape = [2, 2, 4, 1])
    // blockSize = 2
    // paddings = [[0, 0], [2, 0]]
    spaceToBatch(input, blockSize, paddings) ==>
      [[[[0], [1], [3]]], [[[0], [ 9], [11]]],
       [[[0], [2], [4]]], [[[0], [10], [12]]],
       [[[0], [5], [7]]], [[[0], [13], [15]]],
       [[[0], [6], [8]]], [[[0], [14], [16]]]]  (shape = [8, 1, 3, 1])
    input

    4-dimensional input tensor with shape [batch, height, width, depth].

    blockSize

    Block size which must be greater than 1.

    paddings

    2-dimensional tensor containing non-negative integers with shape [2, 2].

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  206. def spaceToBatchND[D <: types.DataType, I1 <: Int32OrInt64, I2 <: Int32OrInt64](input: tensors.Tensor[D], blockShape: tensors.Tensor[I1], paddings: tensors.Tensor[I2]): tensors.Tensor[D]

    Permalink

    The spaceToBatchND op divides "spatial" dimensions [1, ..., M] of input into a grid of blocks with shape blockShape, and interleaves these blocks with the "batch" dimension (0) such that, in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position.

    The spaceToBatchND op divides "spatial" dimensions [1, ..., M] of input into a grid of blocks with shape blockShape, and interleaves these blocks with the "batch" dimension (0) such that, in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. This is the reverse functionality to that of batchToSpaceND.

    input is an N-dimensional tensor with shape inputShape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.

    The op is equivalent to the following steps:

    1. Zero-pad the start and end of dimensions [1, ..., M] of the input according to paddings, producing padded of shape paddedShape.

    2. Reshape padded to reshapedPadded of shape:

    [batch] +
    [paddedShape(1) / blockShape(0), blockShape(0), ..., paddedShape(M) / blockShape(M-1), blockShape(M-1)] +
    remainingShape

    3. Permute the dimensions of reshapedPadded to produce permutedReshapedPadded of shape:

    blockShape +
    [batch] +
    [paddedShape(1) / blockShape(0), ..., paddedShape(M) / blockShape(M-1)] +
    remainingShape

    4. Reshape permutedReshapedPadded to flatten blockShape into the batch dimension, producing an output tensor of shape:

    [batch * product(blockShape)] +
    [paddedShape(1) / blockShape(0), ..., paddedShape(M) / blockShape(M-1)] +
    remainingShape

    Among others, this op is useful for reducing atrous convolution to regular convolution.

    Some examples:

    // === Example #1 ===
    // input = [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    // blockShape = [2, 2]
    // paddings = [[0, 0], [0, 0]]
    spaceToBatchND(input, blockShape, paddings) ==>
      [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]  (shape = [4, 1, 1, 1])
    
    // === Example #2 ===
    // input = [[[[1, 2, 3], [4, 5, 6]],
    //           [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    // blockShape = [2, 2]
    // paddings = [[0, 0], [0, 0]]
    spaceToBatchND(input, blockShape, paddings) ==>
      [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]  (shape = [4, 1, 1, 3])
    
    // === Example #3 ===
    // input = [[[[ 1],  [2],  [3],  [ 4]],
    //           [[ 5],  [6],  [7],  [ 8]],
    //           [[ 9], [10], [11],  [12]],
    //           [[13], [14], [15],  [16]]]]  (shape = [1, 4, 4, 1])
    // blockShape = [2, 2]
    // paddings = [[0, 0], [0, 0]]
    spaceToBatchND(input, blockShape, paddings) ==>
      [[[[1], [3]], [[ 9], [11]]],
       [[[2], [4]], [[10], [12]]],
       [[[5], [7]], [[13], [15]]],
       [[[6], [8]], [[14], [16]]]]  (shape = [4, 2, 2, 1])
    
    // === Example #4 ===
    // input = [[[[ 1],  [2],  [3],  [ 4]],
    //           [[ 5],  [6],  [7],  [ 8]]],
    //          [[[ 9], [10], [11],  [12]],
    //           [[13], [14], [15],  [16]]]]  (shape = [2, 2, 4, 1])
    // blockShape = [2, 2]
    // paddings = [[0, 0], [2, 0]]
    spaceToBatchND(input, blockShape, paddings) ==>
      [[[[0], [1], [3]]], [[[0], [ 9], [11]]],
       [[[0], [2], [4]]], [[[0], [10], [12]]],
       [[[0], [5], [7]]], [[[0], [13], [15]]],
       [[[0], [6], [8]]], [[[0], [14], [16]]]]  (shape = [8, 1, 3, 1])
    input

    N-dimensional tensor with shape inputShape = [batch] + spatialShape + remainingShape, where spatialShape has M dimensions.

    blockShape

    One-dimensional tensor with shape [M] whose elements must all be >= 1.

    paddings

    Two-dimensional tensor with shape [M, 2] whose elements must all be non-negative. paddings(i) = [padStart, padEnd] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that blockShape(i) divides inputShape(i + 1) + padStart + padEnd.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  207. def spaceToDepth[D <: types.DataType](input: tensors.Tensor[D], blockSize: Int, dataFormat: ops.NN.CNNDataFormat = CNNDataFormat.default): tensors.Tensor[D]

    Permalink

    The spaceToDepth op rearranges blocks of spatial data into depth.

    The spaceToDepth op rearranges blocks of spatial data into depth.

    More specifically, the op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. blockSize indicates the input block size and how the data is moved:

    • Non-overlapping blocks of size blockSize x blockSize in the height and width dimensions are rearranged into the depth dimension at each location.
    • The depth of the output tensor is inputDepth * blockSize * blockSize.
    • The input tensor's height and width must be divisible by blockSize.

    That is, assuming that input is in the shape [batch, height, width, depth], the shape of the output will be: [batch, height / blockSize, width / blockSize, depth * blockSize * blockSize].

    This op is useful for resizing the activations between convolutions (but keeping all data), e.g., instead of pooling. It is also useful for training purely convolutional models.

    Some examples:

    // === Example #1 ===
    // input = [[[[1], [2]], [[3], [4]]]]  (shape = [1, 2, 2, 1])
    // blockSize = 2
    spaceToDepth(input, blockSize) ==> [[[[1, 2, 3, 4]]]]  (shape = [1, 1, 1, 4])
    
    // === Example #2 ===
    // input = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]  (shape = [1, 2, 2, 3])
    // blockSize = 2
    spaceToDepth(input, blockSize) ==>
      [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]  (shape = [1, 1, 1, 12])
    
    // === Example #3 ===
    // input = [[[[ 1], [ 2], [ 5], [ 6]],
    //           [[ 3], [ 4], [ 7], [ 8]],
    //           [[ 9], [10], [13], [14]],
    //           [[11], [12], [15], [16]]]]  (shape = [1, 4, 4, 1])
    // blockSize = 2
    spaceToDepth(input, blockSize) ==>
      [[[[ 1,  2,  3,  4],
         [ 5,  6,  7,  8]],
        [[ 9, 10, 11, 12],
         [13, 14, 15, 16]]]]  (shape = [1, 2, 2, 4])
    input

    4-dimensional input tensor with shape [batch, height, width, depth].

    blockSize

    Block size which must be greater than 1.

    dataFormat

    Format of the input and output data.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  208. def sparseSegmentMean[D <: types.DataType, I1 <: Int32OrInt64, I2 <: Int32OrInt64](data: tensors.Tensor[D], indices: tensors.Tensor[I1], segmentIndices: tensors.Tensor[I2], numSegments: tensors.Tensor[types.INT32] = null): tensors.Tensor[D]

    Permalink

    The sparseSegmentMean op computes the mean along sparse segments of a tensor.

    The sparseSegmentMean op computes the mean along sparse segments of a tensor.

    The op is similar to that of segmentMean, with the difference that segmentIndices can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices. segmentIndices is allowed to have missing indices, in which case the output will be zeros at those indices. In those cases, numSegments is used to determine the size of the output.

    For example:

    // 'c' is [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
    
    // Select two rows, one segment.
    sparseSegmentMean(c, Tensor(0, 1), Tensor(0, 0)) ==> [[0, 0, 0, 0]]
    
    // Select two rows, two segments.
    sparseSegmentMean(c, Tensor(0, 1), Tensor(0, 1)) ==> [[1, 2, 3, 4], [-1, -2, -3, -4]]
    
    // Select all rows, two segments.
    sparseSegmentMean(c, Tensor(0, 1, 2), Tensor(0, 0, 1)) ==> [[0, 0, 0, 0], [5, 6, 7, 8]]
    // which is equivalent to:
    segmentMean(c, Tensor(0, 0, 1))

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.

    data

    Data (must have a numeric data type -- i.e., representing a number).

    indices

    One-dimensional tensor with rank equal to that of segmentIndices.

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    numSegments

    Optional scalar indicating the size of the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  209. def sparseSegmentSum[D <: types.DataType, I1 <: Int32OrInt64, I2 <: Int32OrInt64](data: tensors.Tensor[D], indices: tensors.Tensor[I1], segmentIndices: tensors.Tensor[I2], numSegments: tensors.Tensor[types.INT32] = null): tensors.Tensor[D]

    Permalink

    The sparseSegmentSum op computes the sum along sparse segments of a tensor.

    The sparseSegmentSum op computes the sum along sparse segments of a tensor.

    The op is similar to that of segmentSum, with the difference that segmentIndices can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices. segmentIndices is allowed to have missing indices, in which case the output will be zeros at those indices. In those cases, numSegments is used to determine the size of the output.

    For example:

    // 'c' is [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
    
    // Select two rows, one segment.
    sparseSegmentSum(c, Tensor(0, 1), Tensor(0, 0)) ==> [[0, 0, 0, 0]]
    
    // Select two rows, two segments.
    sparseSegmentSum(c, Tensor(0, 1), Tensor(0, 1)) ==> [[1, 2, 3, 4], [-1, -2, -3, -4]]
    
    // Select all rows, two segments.
    sparseSegmentSum(c, Tensor(0, 1, 2), Tensor(0, 0, 1)) ==> [[0, 0, 0, 0], [5, 6, 7, 8]]
    // which is equivalent to:
    segmentSum(c, Tensor(0, 0, 1))

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.

    data

    Data (must have a numeric data type -- i.e., representing a number).

    indices

    One-dimensional tensor with rank equal to that of segmentIndices.

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    numSegments

    Optional scalar indicating the size of the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  210. def sparseSegmentSumSqrtN[D <: types.DataType, I1 <: Int32OrInt64, I2 <: Int32OrInt64](data: tensors.Tensor[D], indices: tensors.Tensor[I1], segmentIndices: tensors.Tensor[I2], numSegments: tensors.Tensor[types.INT32] = null): tensors.Tensor[D]

    Permalink

    The sparseSegmentSumSqrtN op computes the sum along sparse segments of a tensor, divided by the square root of the number of elements being summed.

    The sparseSegmentSumSqrtN op computes the sum along sparse segments of a tensor, divided by the square root of the number of elements being summed. segmentIndices is allowed to have missing indices, in which case the output will be zeros at those indices. In those cases, numSegments is used to determine the size of the output.

    Similar to sparseSegmentSum.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.
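
    For example (an illustrative sketch, assuming c holds floating-point values; the outputs below are rounded):

    // 'c' is [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]

    // Select rows 0 and 2, one segment. The row sum is divided by sqrt(2).
    sparseSegmentSumSqrtN(c, Tensor(0, 2), Tensor(0, 0)) ==> [[4.24, 5.66, 7.07, 8.49]]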

    data

    Data (must have a numeric data type -- i.e., representing a number).

    indices

    One-dimensional tensor with rank equal to that of segmentIndices.

    segmentIndices

    Segment indices. Values should be sorted and can be repeated.

    numSegments

    Optional scalar indicating the size of the output tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  211. def sparseSoftmaxCrossEntropy[D <: DecimalDataType, I <: Int32OrInt64](logits: tensors.Tensor[D], labels: tensors.Tensor[I], axis: Int = -1): tensors.Tensor[D]

    Permalink

    The sparseSoftmaxCrossEntropy op computes the sparse softmax cross entropy between logits and labels.

    The sparseSoftmaxCrossEntropy op computes the sparse softmax cross entropy between logits and labels.

    The op measures the probabilistic error in discrete classification tasks in which the classes are mutually exclusive (each entry belongs to exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

    NOTE: For the op, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits (i.e., each batch instance). For soft softmax classification with a probability distribution for each entry, see softmaxCrossEntropy.

    WARNING: The op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

    A common use case is to have logits of shape [batchSize, numClasses] and labels of shape [batchSize], but higher dimensions are also supported.

    logits must have data type FLOAT16, FLOAT32, or FLOAT64, and labels must have data type INT32 or INT64.
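
    For example (a minimal sketch; the tensor values are illustrative):

    // 'logits' has shape [2, 3] and 'labels' has shape [2].
    val logits = Tensor(Tensor(2.0f, 1.0f, 0.1f), Tensor(0.1f, 1.0f, 3.0f))
    val labels = Tensor(0, 2)
    sparseSoftmaxCrossEntropy(logits, labels)  // Shape [2]: one loss value per batch instance.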

    logits

    Tensor of shape [D0, D1, ..., Dr-1, numClasses] (where r is the rank of labels and of the result), containing unscaled log probabilities.

    labels

    Tensor of shape [D0, D1, ..., Dr-1] (where r is the rank of labels and of the result). Each entry in labels must be an index in [0, numClasses). Other values will raise an exception when this op is run on a CPU, and return NaN values for the corresponding loss and gradient rows when this op is run on a GPU.

    axis

    The class axis, along which the softmax is computed. Defaults to -1, which is the last axis.

    returns

    Result as a new tensor, with the same shape as labels and the same data type as logits, containing the softmax cross entropy loss.

    Definition Classes
    NN
  212. def split[D <: types.DataType, I <: IntOrUInt](input: tensors.Tensor[D], splitSizes: tensors.Tensor[I], axis: tensors.Tensor[types.INT32] = 0): Seq[tensors.Tensor[D]]

    Permalink

    The split op splits a tensor into sub-tensors.

    The split op splits a tensor into sub-tensors.

    The op splits input along dimension axis into splitSizes.length smaller tensors. The shape of the i-th smaller tensor has the same size as the input except along dimension axis where the size is equal to splitSizes(i).

    For example:

    // 't' is a tensor with shape [5, 30]
    // Split 't' into 3 tensors with sizes [4, 15, 11] along dimension 1:
    val splits = split(t, splitSizes = Tensor(4, 15, 11), axis = 1)
    splits(0).shape ==> [5, 4]
    splits(1).shape ==> [5, 15]
    splits(2).shape ==> [5, 11]
    input

    Input tensor to split.

    splitSizes

    Sizes for the splits to obtain.

    axis

    Dimension along which to split the input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  213. def splitEvenly[D <: types.DataType](input: tensors.Tensor[D], numSplits: Int, axis: tensors.Tensor[types.INT32] = 0): Seq[tensors.Tensor[D]]

    Permalink

    The splitEvenly op splits a tensor into sub-tensors.

    The splitEvenly op splits a tensor into sub-tensors.

    The op splits input along dimension axis into numSplits smaller tensors. It requires that numSplits evenly splits input.shape(axis).

    For example:

    // 't' is a tensor with shape [5, 30]
    // Split 't' into 3 tensors along dimension 1:
    val splits = splitEvenly(t, numSplits = 3, axis = 1)
    splits(0).shape ==> [5, 10]
    splits(1).shape ==> [5, 10]
    splits(2).shape ==> [5, 10]
    input

    Input tensor to split.

    numSplits

    Number of splits to obtain along the axis dimension.

    axis

    Dimension along which to split the input tensor.

    returns

    Result as a sequence of new tensors.

    Definition Classes
    Basic
  214. def sqrt[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The sqrt op computes the square root of a tensor element-wise.

    The sqrt op computes the square root of a tensor element-wise. I.e., y = \sqrt{x} = x^{1/2}.
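
    For example (an illustrative sketch):

    // 'x' is [4.0, 9.0]
    sqrt(x) ==> [2.0, 3.0]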

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  215. def square[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The square op computes the square of a tensor element-wise.

    The square op computes the square of a tensor element-wise. I.e., y = x * x = x^2.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  216. def squaredDifference[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The squaredDifference op computes the squared difference between two tensors element-wise.

    The squaredDifference op computes the squared difference between two tensors element-wise. I.e., z = (x - y) * (x - y).

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
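
    For example (an illustrative sketch using broadcasting):

    // 'x' is [[1, 2], [3, 4]]
    // 'y' is [1, 1]  (broadcast against each row of 'x')
    squaredDifference(x, y) ==> [[0, 1], [4, 9]]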

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  217. def squeeze[D <: types.DataType](input: tensors.Tensor[D], axes: Seq[Int] = null): tensors.Tensor[D]

    Permalink

    The squeeze op removes dimensions of size 1 from the shape of a tensor and returns the result as a new tensor.

    The squeeze op removes dimensions of size 1 from the shape of a tensor and returns the result as a new tensor.

    Given a tensor input, the op returns a tensor of the same data type, with all dimensions of size 1 removed. If axes is specified, then only the dimensions specified by that array will be removed. In that case, all these dimensions need to have size 1.

    For example:

    // 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
    t.squeeze().shape == Shape(2, 3)
    t.squeeze(Array(2, 4)).shape == Shape(1, 2, 3, 1)
    input

    Input tensor.

    axes

    Dimensions of size 1 to squeeze. If this argument is not provided, then all dimensions of size 1 will be squeezed.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  218. def stack[D <: types.DataType](inputs: Seq[tensors.Tensor[D]], axis: Int = 0): tensors.Tensor[D]

    Permalink

    The stack op stacks a list of rank-R tensors into one rank-(R+1) tensor.

    The stack op stacks a list of rank-R tensors into one rank-(R+1) tensor.

    The op packs the list of tensors in inputs into a tensor with rank one higher than each tensor in inputs, by packing them along the axis dimension. Given a list of N tensors of shape [A, B, C]:

    • If axis == 0, then the output tensor will have shape [N, A, B, C].
    • If axis == 1, then the output tensor will have shape [A, N, B, C].
    • If axis == -1, then the output tensor will have shape [A, B, C, N].
    • etc.

    For example:

    // 'x' is [1, 4]
    // 'y' is [2, 5]
    // 'z' is [3, 6]
    stack(Array(x, y, z)) ==> [[1, 4], [2, 5], [3, 6]]          // Packed along the first dimension.
    stack(Array(x, y, z), axis = 1) ==> [[1, 2, 3], [4, 5, 6]]  // Packed along the second dimension.

    This op is the opposite of unstack.

    inputs

    Input tensors to be stacked.

    axis

    Dimension along which to stack the input tensors.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  219. def stopGradient[D <: types.DataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The stopGradient op stops gradient execution, but otherwise acts as an identity op.

    The stopGradient op stops gradient execution, but otherwise acts as an identity op.

    When executed in a graph, this op outputs its input tensor as-is.

    When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out the inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.

    This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:

    • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
    • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
    • Adversarial training, where no backprop should happen through the adversarial example generation process.
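
    For example (an illustrative sketch; in terms of values the op is an identity):

    // 'x' is [1.0, 2.0]
    stopGradient(x) ==> [1.0, 2.0]  // Same values, but treated as a constant by the gradient generator.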
    input

    Input tensor.

    returns

    Result as a new tensor which has the same value as the input tensor.

    Definition Classes
    Basic
  220. def subtract[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The subtract op subtracts two tensors element-wise.

    The subtract op subtracts two tensors element-wise. I.e., z = x - y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
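
    For example (an illustrative sketch using broadcasting):

    // 'x' is [[1, 2], [3, 4]]
    // 'y' is [1, 2]  (broadcast against each row of 'x')
    subtract(x, y) ==> [[0, 0], [2, 2]]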

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  221. def sum[D <: ReducibleDataType](input: tensors.Tensor[D], axes: tensors.Tensor[types.INT32] = null, keepDims: Boolean = false): tensors.Tensor[D]

    Permalink

    The sum op computes the sum of elements across axes of a tensor.

    The sum op computes the sum of elements across axes of a tensor.

    Reduces input along the axes given in axes. Unless keepDims is true, the rank of the tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced axes are retained with size 1.

    If axes is null, then all axes are reduced, and a tensor with a single element is returned.

    For example:

    // 'x' is [[1, 1, 1], [1, 1, 1]]
    sum(x) ==> 6
    sum(x, 0) ==> [2, 2, 2]
    sum(x, 1) ==> [3, 3]
    sum(x, 1, keepDims = true) ==> [[3], [3]]
    sum(x, [0, 1]) ==> 6
    input

    Input tensor to reduce.

    axes

    Integer tensor containing the axes to reduce. If null, then all axes are reduced.

    keepDims

    If true, retain the reduced axes.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  222. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  223. def tan[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The tan op computes the tangent of a tensor element-wise.

    The tan op computes the tangent of a tensor element-wise. I.e., y = \tan{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  224. def tanh[D <: MathDataType, TL[DD <: types.DataType] <: tensors.TensorLike[DD]](x: TL[D])(implicit ev: Aux[TL, D]): TL[D]

    Permalink

    The tanh op computes the hyperbolic tangent of a tensor element-wise.

    The tanh op computes the hyperbolic tangent of a tensor element-wise. I.e., y = \tanh{x}.

    x

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  225. def tensorDot[D <: MathDataType](a: tensors.Tensor[D], b: tensors.Tensor[D], axesA: tensors.Tensor[types.INT32], axesB: tensors.Tensor[types.INT32]): tensors.Tensor[D]

    Permalink

    Dynamic version (i.e., where axesA and axesB may be tensors) of the tensorDot op.

    Dynamic version (i.e., where axesA and axesB may be tensors) of the tensorDot op.

    The tensorDot op computes the tensor contraction of two tensors along the specified axes.

    A tensor contraction sums the product of elements from a and b over the indices specified by axesA and axesB. The axis axesA(i) of a must have the same dimension as the axis axesB(i) of b for all i in [0, axesA.size). The tensors/sequences (depending on whether the dynamic version of the op is being used) axesA and axesB must have identical length and consist of unique integers that specify valid axes for each of the tensors. This operation corresponds to numpy.tensordot(a, b, axes) in Python.

    If numAxes is provided instead of axesA and axesB, then the contraction is performed over the last numAxes axes of a and the first numAxes axes of b, in order.

    • Example #1: When a and b are matrices (rank 2), the case numAxes = 1 is equivalent to matrix multiplication.
    • Example #2: When a and b are matrices (rank 2), the case axesA = [1] and axesB = [0] is equivalent to matrix multiplication.
    • Example #3: Suppose that a_{ijk} and b_{lmn} represent two tensors of rank 3. Then, the case axesA = [0] and axesB = [2] results in the rank-4 tensor c_{jklm} whose entry corresponding to the indices (j, k, l, m) is given by c_{jklm} = \sum_i a_{ijk} b_{lmi}.

    In general, rank(result) = rank(a) + rank(b) - 2 * axesA.size.
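
    For example (a sketch, assuming a has shape [2, 3] and b has shape [3, 2]):

    // Contracting axis 1 of 'a' with axis 0 of 'b' amounts to matrix multiplication.
    tensorDot(a, b, axesA = Tensor(1), axesB = Tensor(0))  // Result shape: [2, 2].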

    a

    First tensor.

    b

    Second tensor.

    axesA

    Axes to contract in a.

    axesB

    Axes to contract in b.

    returns

    Created op output.

    Definition Classes
    Math
    Annotations
    @throws( ... )
    Exceptions thrown

    InvalidShapeException If axesA or axesB is not a scalar.

  226. def tensorDot[D <: MathDataType](a: tensors.Tensor[D], b: tensors.Tensor[D], numAxes: tensors.Tensor[types.INT32]): tensors.Tensor[D]

    Permalink

    Dynamic version (i.e., where numAxes may be a tensor) of the tensorDot op.

    Dynamic version (i.e., where numAxes may be a tensor) of the tensorDot op.

    The tensorDot op computes the tensor contraction of two tensors along the specified axes.

    A tensor contraction sums the product of elements from a and b over the indices specified by axesA and axesB. The axis axesA(i) of a must have the same dimension as the axis axesB(i) of b for all i in [0, axesA.size). The tensors/sequences (depending on whether the dynamic version of the op is being used) axesA and axesB must have identical length and consist of unique integers that specify valid axes for each of the tensors. This operation corresponds to numpy.tensordot(a, b, axes) in Python.

    If numAxes is provided instead of axesA and axesB, then the contraction is performed over the last numAxes axes of a and the first numAxes axes of b, in order.

    • Example #1: When a and b are matrices (rank 2), the case numAxes = 1 is equivalent to matrix multiplication.
    • Example #2: When a and b are matrices (rank 2), the case axesA = [1] and axesB = [0] is equivalent to matrix multiplication.
    • Example #3: Suppose that a_{ijk} and b_{lmn} represent two tensors of rank 3. Then, the case axesA = [0] and axesB = [2] results in the rank-4 tensor c_{jklm} whose entry corresponding to the indices (j, k, l, m) is given by c_{jklm} = \sum_i a_{ijk} b_{lmi}.

    In general, rank(result) = rank(a) + rank(b) - 2 * axesA.size.
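
    For example (a sketch, assuming a has shape [2, 3] and b has shape [3, 2]):

    // Contract the last axis of 'a' with the first axis of 'b'.
    tensorDot(a, b, numAxes = 1)  // Equivalent to matrix multiplication; result shape: [2, 2].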

    a

    First tensor.

    b

    Second tensor.

    numAxes

    Number of axes to contract.

    returns

    Created op output.

    Definition Classes
    Math
    Annotations
    @throws( ... )
    Exceptions thrown

    InvalidShapeException If numAxes is not a scalar.

  227. def tile[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], multiples: tensors.Tensor[I]): tensors.Tensor[D]

    Permalink

    The tile op tiles the provided input tensor.

    The tile op tiles the provided input tensor.

    The op creates a new tensor by replicating input multiples times. The output tensor's ith dimension has input.shape(i) * multiples(i) elements, and the values of input are replicated multiples(i) times along the ith dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].
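
    For example (an illustrative sketch):

    // 't' is [[1, 2], [3, 4]]
    tile(t, Tensor(2, 3)) ==> [[1, 2, 1, 2, 1, 2],
                               [3, 4, 3, 4, 3, 4],
                               [1, 2, 1, 2, 1, 2],
                               [3, 4, 3, 4, 3, 4]]  (shape = [4, 6])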

    input

    Tensor to tile.

    multiples

    One-dimensional tensor containing the tiling multiples. Its length must be the same as the rank of input.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  228. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  229. def topK[D <: MathDataType](input: tensors.Tensor[D], k: tensors.Tensor[types.INT32] = 1, sorted: Boolean = true): (tensors.Tensor[D], tensors.Tensor[types.INT32])

    Permalink

    The topK op finds values and indices of the k largest entries for the last dimension of input.

    The topK op finds values and indices of the k largest entries for the last dimension of input.

    If input is a vector (i.e., rank-1 tensor), the op finds the k largest entries in the vector and outputs their values and their indices as vectors. Thus, values(j) will be the j-th largest entry in input, and indices(j) will be its index.

    For matrices (and respectively, higher rank input tensors), the op computes the top k entries in each row (i.e., vector along the last dimension of the tensor). Thus, values.shape = indices.shape = input.shape(0 :: -1) + k.

    If two elements are equal, the lower-index element appears first.
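
    For example (an illustrative sketch):

    // 'input' is [1, 3, 2, 4]
    val (values, indices) = topK(input, k = 2)
    // 'values' is [4, 3]
    // 'indices' is [3, 1]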

    input

    Input tensor whose last axis has size at least k.

    k

    Scalar tensor containing the number of top elements to look for along the last axis of input.

    sorted

    If true, the resulting k elements will be sorted by their values in descending order.

    returns

    Tuple containing the created tensors: (i) values: the k largest elements along each last dimensional slice, and (ii) indices: the indices of values within the last axis of input.

    Definition Classes
    NN
  230. def trace[D <: MathDataType](input: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The trace op computes the trace of a tensor.

    The trace op computes the trace of a tensor.

    The trace of a tensor is defined as the sum along the main diagonal of each inner-most matrix in it. If the tensor is of rank k with shape [I, J, K, ..., L, M, N], then output is a tensor of rank k - 2 with dimensions [I, J, K, ..., L] where: output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :]).

    For example:

    // 'x' is [[1, 2], [3, 4]]
    trace(x) ==> 5
    
    // 'x' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    trace(x) ==> 15
    
    // 'x' is [[[ 1,  2,  3],
    //          [ 4,  5,  6],
    //          [ 7,  8,  9]],
    //         [[-1, -2, -3],
    //          [-4, -5, -6],
    //          [-7, -8, -9]]]
    trace(x) ==> [15, -15]
    input

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  231. def transpose[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], permutation: tensors.Tensor[I] = null, conjugate: Boolean = false): tensors.Tensor[D]

    Permalink

    The transpose op permutes the dimensions of a tensor according to a provided permutation.

    The transpose op permutes the dimensions of a tensor according to a provided permutation.

    The returned tensor's dimension i will correspond to input dimension permutation(i). If permutation is not provided, then it is set to (n - 1, ..., 0), where n is the rank of the input tensor. Hence by default, the op performs a regular matrix transpose on two-dimensional input tensors.

    For example:

    // Tensor 'x' is [[1, 2, 3], [4, 5, 6]]
    transpose(x) ==> [[1, 4], [2, 5], [3, 6]]
    transpose(x, permutation = Array(1, 0)) ==> [[1, 4], [2, 5], [3, 6]]
    
    // Tensor 'x' is [[[1, 2, 3],
    //                 [4, 5, 6]],
    //                [[7, 8, 9],
    //                 [10, 11, 12]]]
    transpose(x, permutation = Array(0, 2, 1)) ==> [[[1,  4], [2,  5], [3,  6]],
                                                    [[7, 10], [8, 11], [9, 12]]]
    input

    Input tensor to transpose.

    permutation

    Permutation of the input tensor dimensions.

    conjugate

    If true, then the complex conjugate of the transpose result is returned.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  232. def truncateDivide[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The truncateDivide op truncate-divides two tensors element-wise.

    The truncateDivide op truncate-divides two tensors element-wise.

    Truncation designates that negative numbers will round fractional quantities toward zero. E.g., -7 / 5 = -1. This matches C semantics but differs from Python semantics. See floorDivide for a division function that matches Python semantics.

    I.e., z = x / y, for x and y being integer tensors.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
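
    For example (an illustrative sketch):

    // 'x' is [7, -7]
    // 'y' is [5, 5]
    truncateDivide(x, y) ==> [1, -1]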

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  233. def truncateMod[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The truncateMod op computes the remainder of the division between two tensors element-wise.

    The truncateMod op computes the remainder of the division between two tensors element-wise.

    The op emulates C semantics in that the result here is consistent with a truncating divide. E.g., truncate(x / y) * y + truncateMod(x, y) = x.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
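
    For example (an illustrative sketch):

    // 'x' is [7, -7]
    // 'y' is [5, 5]
    truncateMod(x, y) ==> [2, -2]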

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  234. def unique[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], indicesDataType: I): (tensors.Tensor[D], tensors.Tensor[I])

    Permalink

    The unique op finds unique elements in a one-dimensional tensor.

    The unique op finds unique elements in a one-dimensional tensor.

    The op returns a tensor output containing all of the unique elements of input sorted in the same order that they occur in input. This op also returns a tensor indices the same size as input that contains the index of each value of input in the unique output output. In other words, output(indices(i)) = input(i), for i in [0, 1, ..., input.size - 1].

    For example:

    // Tensor 't' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
    val (output, indices) = unique(t)
    // 'output' is [1, 2, 4, 7, 8]
    // 'indices' is [0, 0, 1, 2, 2, 2, 3, 4, 4]
    input

    One-dimensional input tensor.

    indicesDataType

    Data type of the returned indices.

    returns

    Tuple containing output and indices.

    Definition Classes
    Basic
  235. def unique[D <: types.DataType](input: tensors.Tensor[D]): (tensors.Tensor[D], tensors.Tensor[types.INT32])

    Permalink

    The unique op finds unique elements in a one-dimensional tensor.

    The unique op finds unique elements in a one-dimensional tensor.

    The op returns a tensor output containing all of the unique elements of input sorted in the same order that they occur in input. This op also returns a tensor indices the same size as input that contains the index of each value of input in the unique output output. In other words, output(indices(i)) = input(i), for i in [0, 1, ..., input.size - 1].

    For example:

    // Tensor 't' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
    val (output, indices) = unique(t)
    // 'output' is [1, 2, 4, 7, 8]
    // 'indices' is [0, 0, 1, 2, 2, 2, 3, 4, 4]
    input

    One-dimensional input tensor.

    returns

    Tuple containing output and indices.

    Definition Classes
    Basic
  236. def uniqueWithCounts[D <: types.DataType, I <: Int32OrInt64](input: tensors.Tensor[D], indicesDataType: I): (tensors.Tensor[D], tensors.Tensor[I], tensors.Tensor[I])

    Permalink

    The uniqueWithCounts finds unique elements in a one-dimensional tensor.

    The uniqueWithCounts finds unique elements in a one-dimensional tensor.

    The op returns a tensor output containing all of the unique elements of input sorted in the same order that they occur in input. This op also returns a tensor indices the same size as input that contains the index of each value of input in the unique output output. Finally, it returns a third tensor counts that contains the count of each element of output in input.

    For example:

    // Tensor 't' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
    val (output, indices, counts) = uniqueWithCounts(t)
    // 'output' is [1, 2, 4, 7, 8]
    // 'indices' is [0, 0, 1, 2, 2, 2, 3, 4, 4]
    // 'counts' is [2, 1, 3, 1, 2]
    input

    One-dimensional input tensor.

    indicesDataType

    Data type of the returned indices.

    returns

    Tuple containing output, indices, and counts.

    Definition Classes
    Basic
  237. def uniqueWithCounts[D <: types.DataType](input: tensors.Tensor[D]): (tensors.Tensor[D], tensors.Tensor[types.INT32], tensors.Tensor[types.INT32])

    Permalink

    The uniqueWithCounts finds unique elements in a one-dimensional tensor.

    The uniqueWithCounts finds unique elements in a one-dimensional tensor.

    The op returns a tensor output containing all of the unique elements of input sorted in the same order that they occur in input. This op also returns a tensor indices the same size as input that contains the index of each value of input in the unique output output. Finally, it returns a third tensor counts that contains the count of each element of output in input.

    For example:

    // Tensor 't' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
    val (output, indices, counts) = uniqueWithCounts(t)
    // 'output' is [1, 2, 4, 7, 8]
    // 'indices' is [0, 0, 1, 2, 2, 2, 3, 4, 4]
    // 'counts' is [2, 1, 3, 1, 2]
    input

    One-dimensional input tensor.

    returns

    Tuple containing output, indices, and counts.

    Definition Classes
    Basic
  238. def unsortedSegmentMax[D <: types.DataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I], segmentsNumber: tensors.Tensor[types.INT32]): tensors.Tensor[D]

    Permalink

    The unsortedSegmentMax op computes the max along segments of a tensor.

    The unsortedSegmentMax op computes the max along segments of a tensor.

    The op computes a tensor such that output(i) = \max_{j...} data(j...) where the max is over all j such that segmentIndices(j) == i. Unlike segmentMax, segmentIndices need not be sorted and need not cover all values in the full range of valid values.

    If the max is empty for a given segment index i (i.e., no entries map to that segment), output(i) is set to 0.

    segmentsNumber should equal the number of distinct segment indices.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.
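
    For example (an illustrative sketch):

    // 'data' is [[1, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]]
    unsortedSegmentMax(data, Tensor(0, 1, 0), segmentsNumber = 2) ==> [[4, 3, 3, 4], [5, 6, 7, 8]]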

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices.

    segmentsNumber

    Number of segments.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  239. def unsortedSegmentSum[D <: types.DataType, I <: Int32OrInt64](data: tensors.Tensor[D], segmentIndices: tensors.Tensor[I], segmentsNumber: tensors.Tensor[types.INT32]): tensors.Tensor[D]

    Permalink

    The unsortedSegmentSum op computes the sum along segments of a tensor.

    The unsortedSegmentSum op computes the sum along segments of a tensor.

    The op computes a tensor such that output(i) = \sum_{j...} data(j...) where the sum is over all j such that segmentIndices(j) == i. Unlike segmentSum, segmentIndices need not be sorted and need not cover all values in the full range of valid values.

    If the sum is empty for a given segment index i (i.e., no entries map to that segment), output(i) is set to 0.

    segmentsNumber should equal the number of distinct segment indices.

    The result tensor has the same data type as data, but its first dimension size is equal to the number of distinct segment indices.
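
    For example (an illustrative sketch):

    // 'data' is [[1, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]]
    unsortedSegmentSum(data, Tensor(0, 1, 0), segmentsNumber = 2) ==> [[5, 5, 5, 5], [5, 6, 7, 8]]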

    data

    Data (must have a numeric data type -- i.e., representing a number).

    segmentIndices

    Segment indices.

    segmentsNumber

    Number of segments.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  240. def unstack[D <: types.DataType](input: tensors.Tensor[D], number: Int = -1, axis: Int = 0): Seq[tensors.Tensor[D]]

    Permalink

    The unstack op unpacks the provided dimension of a rank-R tensor into a list of rank-(R-1) tensors.

    The unstack op unpacks the provided dimension of a rank-R tensor into a list of rank-(R-1) tensors.

    The op unpacks number tensors from input by chipping it along the axis dimension. If number == -1 (i.e., unspecified), its value is inferred from the shape of input. If input.shape(axis) is not known, then an IllegalArgumentException is thrown.

    For example, given a tensor of shape [A, B, C, D]:

    • If axis == 0, then the ith tensor in the output is the slice input(i, ::, ::, ::) and each tensor in the output will have shape [B, C, D].
    • If axis == 1, then the ith tensor in the output is the slice input(::, i, ::, ::) and each tensor in the output will have shape [A, C, D].
    • If axis == -1, then the ith tensor in the output is the slice input(::, ::, ::, i) and each tensor in the output will have shape [A, B, C].
    • etc.

    This op is the opposite of stack.
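
    For example (an illustrative sketch; mirrors the stack example above):

    // 't' is [[1, 4], [2, 5], [3, 6]]  (shape = [3, 2])
    unstack(t, axis = 0) ==> Seq([1, 4], [2, 5], [3, 6])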

    input

    Rank R > 0 Tensor to be unstacked.

    number

    Number of tensors to unstack. If set to -1 (the default value), its value will be inferred.

    axis

    Dimension along which to unstack the input tensor.

    returns

    Result as a sequence of new tensors.

    Definition Classes
    Basic
  241. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  242. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  243. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  244. def where(input: tensors.Tensor[types.BOOLEAN]): tensors.Tensor[types.INT64]

    Permalink

    The where op returns locations of true values in a boolean tensor.

    The where op returns locations of true values in a boolean tensor.

    The op returns the coordinates of true elements in input. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Note that the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order.

    For example:

    // 'input' tensor is [[true, false]
    //                    [true, false]]
    // 'input' has two 'true' values and so the output has two coordinates
    // 'input' has rank 2 and so each coordinate has two indices
    where(input) ==> [[0, 0],
                      [1, 0]]
    
    // `input` tensor is [[[true, false]
    //                     [true, false]]
    //                    [[false, true]
    //                     [false, true]]
    //                    [[false, false]
    //                     [false, true]]]
    // 'input' has 5 'true' values and so the output has 5 coordinates
    // 'input' has rank 3 and so each coordinate has three indices
    where(input) ==> [[0, 0, 0],
                      [0, 1, 0],
                      [1, 0, 1],
                      [1, 1, 1],
                      [2, 1, 1]]
    input

    Input boolean tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Basic
  245. def zerosFraction[D <: ReducibleDataType](input: tensors.Tensor[D]): tensors.Tensor[types.FLOAT32]

    Permalink

    The zerosFraction op computes the fraction of zeros in input.

    The zerosFraction op computes the fraction of zeros in input.

    If input is empty, the result is NaN.

    This is useful in summaries to measure and report sparsity.
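
    For example (an illustrative sketch):

    // 'x' is [[0, 1], [0, 0]]  (three zeros out of four elements)
    zerosFraction(x) ==> 0.75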

    input

    Input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
  246. def zeta[D <: Float32OrFloat64](x: tensors.Tensor[D], q: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The zeta op computes the Hurwitz zeta function \zeta(x, q).

    The zeta op computes the Hurwitz zeta function \zeta(x, q).

    The Hurwitz zeta function is defined as:

    \zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}.
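
    For example (an illustrative sketch; with q = 1 the Hurwitz zeta function reduces to the Riemann zeta function, so \zeta(2, 1) = \pi^2 / 6):

    zeta(Tensor(2.0f), Tensor(1.0f)) ==> [1.6449341]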

    x

    First input tensor.

    q

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math

Deprecated Value Members

  1. def floorDivide[D <: MathDataType](x: tensors.Tensor[D], y: tensors.Tensor[D]): tensors.Tensor[D]

    Permalink

    The floorDivide op floor-divides two tensors element-wise.

    The floorDivide op floor-divides two tensors element-wise. I.e., z = x // y.

    NOTE: This op supports broadcasting. More information about broadcasting can be found [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

    x

    First input tensor.

    y

    Second input tensor.

    returns

    Result as a new tensor.

    Definition Classes
    Math
    Annotations
    @deprecated
    Deprecated

    (Since version 0.1) Use truncateDivide instead.

Inherited from API

Inherited from API

Inherited from API

Inherited from API

Inherited from Random

Inherited from NN

Inherited from Math

Inherited from Cast

Inherited from Basic

Inherited from API

Inherited from AnyRef

Inherited from Any

Ops / Basic

Ops / Math

Ops / NN

Ops / Random

Ungrouped