org.platanios.tensorflow.api.learn.layers.rnn

class BidirectionalRNN[O, OS, S, SS] extends Layer[O, (Tuple[O, S], Tuple[O, S])]

Related Docs: object BidirectionalRNN | package rnn

Creates a bidirectional dynamic RNN layer. The layer runs its forward cell over the input sequence and its backward cell over the reversed sequence, and returns a pair of RNN tuples (outputs and final state), one per direction.

Linear Supertypes
Layer[O, (Tuple[O, S], Tuple[O, S])], AnyRef, Any

Instance Constructors

  1. new BidirectionalRNN(name: String, cellFw: RNNCell[O, OS, S, SS], cellBw: RNNCell[O, OS, S, SS], initialStateFw: () ⇒ S = null, initialStateBw: () ⇒ S = null, timeMajor: Boolean = false, parallelIterations: Int = 32, swapMemory: Boolean = false, sequenceLengths: tensors.Tensor[types.DataType] = null)(implicit evO: Aux[O, OS], evS: Aux[S, SS])

    name

    Name scope (also acting as variable scope) for this layer.

    cellFw

    RNN cell to use for the forward direction.

    cellBw

    RNN cell to use for the backward direction.

    initialStateFw

    Initial state to use for the forward RNN, which is a structure over tensors with shapes [batchSize, stateShape(i)(0), stateShape(i)(1), ...], where i corresponds to the index of the corresponding state. Defaults to a zero state.

    initialStateBw

    Initial state to use for the backward RNN, which is a structure over tensors with shapes [batchSize, stateShape(i)(0), stateShape(i)(1), ...], where i corresponds to the index of the corresponding state. Defaults to a zero state.

    timeMajor

    Boolean value indicating whether the inputs are provided in time-major format (i.e., have shape [time, batch, depth]) or in batch-major format (i.e., have shape [batch, time, depth]).

    parallelIterations

    Number of RNN loop iterations allowed to run in parallel.

    swapMemory

    If true, GPU-CPU memory swapping support is enabled for the RNN loop.

    sequenceLengths

    Optional INT32 tensor with shape [batchSize] containing the sequence lengths for each row in the batch.
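The forward/backward sweep the constructor parameters describe can be illustrated with a self-contained sketch in plain Scala (no TensorFlow dependency). `Cell`, `sweep`, and `bidirectional` are hypothetical names introduced here for illustration; a "cell" is reduced to a step function `(input, state) => (output, state)`, mirroring how `cellFw` is applied left-to-right and `cellBw` right-to-left:

```scala
// Illustrative sketch only: models the bidirectional sweep with scalar
// inputs/states instead of tensors. All names here are hypothetical.
object BidirectionalSketch {
  type Cell = (Double, Double) => (Double, Double)

  // Run one cell over a sequence, threading the state through each step;
  // returns (outputs, finalState), analogous to one RNN Tuple[O, S].
  def sweep(cell: Cell, initialState: Double, inputs: Seq[Double]): (Seq[Double], Double) = {
    val (outs, st) = inputs.foldLeft((Vector.empty[Double], initialState)) {
      case ((acc, s), x) =>
        val (o, s2) = cell(x, s)
        (acc :+ o, s2)
    }
    (outs, st)
  }

  // Forward sweep over the inputs, backward sweep over the reversed inputs,
  // mirroring the (Tuple[O, S], Tuple[O, S]) result pair of the layer.
  def bidirectional(cellFw: Cell, cellBw: Cell, inputs: Seq[Double],
                    stateFw: Double = 0.0, stateBw: Double = 0.0)
      : ((Seq[Double], Double), (Seq[Double], Double)) = {
    val fw = sweep(cellFw, stateFw, inputs)
    val (bwOuts, bwState) = sweep(cellBw, stateBw, inputs.reverse)
    // Backward outputs are re-reversed so they align with input positions.
    (fw, (bwOuts.reverse, bwState))
  }

  def main(args: Array[String]): Unit = {
    val cell: Cell = (x, s) => (x + s, x + s) // trivial accumulating "cell"
    val (fw, bw) = bidirectional(cell, cell, Seq(1.0, 2.0, 3.0))
    println(fw) // (Vector(1.0, 3.0, 6.0), 6.0)
    println(bw) // (Vector(6.0, 5.0, 3.0), 6.0)
  }
}
```

Note the re-reversal of the backward outputs: both output sequences are indexed by the original time positions, which is what makes it meaningful to concatenate them position-wise (cf. `withConcatenatedOutputs`).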

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. def +(other: Layer[O, (Tuple[O, S], Tuple[O, S])]): Concatenate[O, (Tuple[O, S], Tuple[O, S])]

    Definition Classes
    Layer
  4. def ++(others: Seq[Layer[O, (Tuple[O, S], Tuple[O, S])]]): Concatenate[O, (Tuple[O, S], Tuple[O, S])]

    Definition Classes
    Layer
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  6. def >>[S](other: Layer[(Tuple[O, S], Tuple[O, S]), S]): Compose[O, (Tuple[O, S], Tuple[O, S]), S]

    Definition Classes
    Layer
  7. def apply(input: O)(implicit mode: Mode): (Tuple[O, S], Tuple[O, S])

    Definition Classes
    Layer
  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. val cellBw: RNNCell[O, OS, S, SS]

    RNN cell to use for the backward direction.

  10. val cellFw: RNNCell[O, OS, S, SS]

    RNN cell to use for the forward direction.

  11. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  12. def compose[S](other: Layer[(Tuple[O, S], Tuple[O, S]), S]): Compose[O, (Tuple[O, S], Tuple[O, S]), S]

    Definition Classes
    Layer
  13. def concatenate(others: Layer[O, (Tuple[O, S], Tuple[O, S])]*): Concatenate[O, (Tuple[O, S], Tuple[O, S])]

    Definition Classes
    Layer
  14. final def currentStep: ops.Output

    Definition Classes
    Layer
  15. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  17. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  18. def forward(input: O)(implicit mode: Mode): (Tuple[O, S], Tuple[O, S])

    Definition Classes
    Layer
  19. def forwardWithoutContext(input: O)(implicit mode: Mode): (Tuple[O, S], Tuple[O, S])

    Definition Classes
    BidirectionalRNN → Layer
  20. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  21. final def getParameter(name: String, dataType: types.DataType, shape: core.Shape, initializer: Initializer = null, regularizer: Regularizer = null, trainable: Boolean = true, reuse: Reuse = ReuseOrCreateNew, collections: Set[Key[ops.variables.Variable]] = Set.empty, cachingDevice: (OpSpecification) ⇒ String = null): ops.Output

    Definition Classes
    Layer
  22. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  23. val initialStateBw: () ⇒ S

    Initial state to use for the backward RNN, which is a structure over tensors with shapes [batchSize, stateShape(i)(0), stateShape(i)(1), ...], where i corresponds to the index of the corresponding state. Defaults to a zero state.

  24. val initialStateFw: () ⇒ S

    Initial state to use for the forward RNN, which is a structure over tensors with shapes [batchSize, stateShape(i)(0), stateShape(i)(1), ...], where i corresponds to the index of the corresponding state. Defaults to a zero state.

  25. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  26. val layerType: String

    Definition Classes
    BidirectionalRNN → Layer
  27. def map[MR](mapFn: ((Tuple[O, S], Tuple[O, S])) ⇒ MR): Layer[O, MR]

    Definition Classes
    Layer
  28. val name: String

    Name scope (also acting as variable scope) for this layer.

    Definition Classes
    BidirectionalRNN → Layer
  29. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  30. final def notify(): Unit

    Definition Classes
    AnyRef
  31. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  32. val parallelIterations: Int

    Number of RNN loop iterations allowed to run in parallel.

  33. val sequenceLengths: tensors.Tensor[types.DataType]

    Optional INT32 tensor with shape [batchSize] containing the sequence lengths for each row in the batch.

  34. val swapMemory: Boolean

    If true, GPU-CPU memory swapping support is enabled for the RNN loop.

  35. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  36. val timeMajor: Boolean

    Boolean value indicating whether the inputs are provided in time-major format (i.e., have shape [time, batch, depth]) or in batch-major format (i.e., have shape [batch, time, depth]).

  37. def toString(): String

    Definition Classes
    Layer → AnyRef → Any
  38. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  41. def withConcatenatedOutputs: Layer[O, Tuple[O, (S, S)]]

    Creates a new layer that concatenates the forward and backward RNN outputs into a single RNN tuple.

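The inherited combinators above (`>>`, `compose`, `map`) can be illustrated with a minimal, self-contained analogue of the `Layer` abstraction. `MiniLayer` is a hypothetical stand-in introduced only for this sketch; the real `Layer` additionally threads an implicit `Mode` and manages variable scopes:

```scala
// Minimal stand-in for the Layer combinators (illustrative only; the real
// tensorflow_scala Layer also takes an implicit Mode and manages variable scopes).
trait MiniLayer[I, O] { self =>
  def apply(input: I): O

  // Sequential composition, as in Layer's `>>` / `compose`.
  def >>[T](other: MiniLayer[O, T]): MiniLayer[I, T] = new MiniLayer[I, T] {
    def apply(input: I): T = other(self(input))
  }

  // Post-process the output, as in Layer's `map`.
  def map[T](f: O => T): MiniLayer[I, T] = new MiniLayer[I, T] {
    def apply(input: I): T = f(self(input))
  }
}

object MiniLayerDemo {
  def main(args: Array[String]): Unit = {
    val double = new MiniLayer[Int, Int] { def apply(x: Int) = x * 2 }
    val show   = new MiniLayer[Int, String] { def apply(x: Int) = s"value=$x" }
    val pipeline = double >> show        // compose two layers into one
    println(pipeline(21))                // value=42
    println(double.map(_ + 1).apply(20)) // 41
  }
}
```

In the same spirit, `BidirectionalRNN`'s `map` (member 27) post-processes its `(Tuple[O, S], Tuple[O, S])` result, which is also how `withConcatenatedOutputs` can fold the two directions into a single tuple.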
Inherited from Layer[O, (Tuple[O, S], Tuple[O, S])]

Inherited from AnyRef

Inherited from Any
