Package org.platanios.tensorflow.api.ops.rnn.attention

package attention

Linear Supertypes: AnyRef, Any

Type Members

  1. abstract class Attention[AS, ASS] extends AnyRef

    Base class for attention mechanisms.

  2. class AttentionWrapperCell[S, SS, AS, ASS] extends RNNCell[Output, core.Shape, AttentionWrapperState[S, SS, Seq[AS], Seq[ASS]], (SS, core.Shape, core.Shape, Seq[core.Shape], Seq[core.Shape], Seq[ASS])]

    RNN cell that wraps another RNN cell and adds support for attention to it.
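    The per-step dataflow can be sketched as follows. This is a conceptual sketch in plain Scala, not this library's API: every name here is hypothetical, and the real cell operates on symbolic Output tensors rather than Scala collections.

    ```scala
    // Conceptual sketch of one step of an attention-wrapped RNN cell.
    // All names are hypothetical; this mirrors the dataflow, not the real API.
    def attentionWrapperStep[State](
        cellStep: (Vector[Float], State) => (Vector[Float], State), // wrapped cell
        score: Vector[Float] => Vector[Float],      // alignments over the memory
        memory: Seq[Vector[Float]],                 // e.g., encoder hidden states
        input: Vector[Float],
        previousAttention: Vector[Float],
        previousState: State
    ): (Vector[Float], State, Vector[Float]) = {
      // 1. Concatenate the new input with the attention from the previous step.
      val cellInput = input ++ previousAttention
      // 2. Run the wrapped cell as usual.
      val (cellOutput, nextState) = cellStep(cellInput, previousState)
      // 3. Score the cell output against the memory to obtain alignments.
      val alignments = score(cellOutput)
      // 4. The new attention (context) is the alignment-weighted sum of the
      //    memory states (optionally passed through an attention layer).
      val attention = memory.zip(alignments)
        .map { case (m, a) => m.map(_ * a) }
        .reduce((x, y) => x.zip(y).map { case (a, b) => a + b })
      (cellOutput, nextState, attention)
    }
    ```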

  3. case class AttentionWrapperState[S, SS, AS, ASS](cellState: S, time: Output, attention: Output, alignments: Seq[Output], alignmentsHistory: Seq[TensorArray], attentionState: AS)(implicit evS: Aux[S, SS], evAS: Aux[AS, ASS]) extends Product with Serializable

    State of the attention wrapper RNN cell.

    cellState
      Wrapped cell state.

    time
      INT32 scalar containing the current time step.

    attention
      Attention emitted at the previous time step.

    alignments
      Alignments emitted at the previous time step for each attention mechanism.

    alignmentsHistory
      Alignments emitted at all time steps for each attention mechanism. Call stack() on each of the tensor arrays to convert them to tensors.

    attentionState
      Attention cell state.
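    As a minimal sketch of the stacking step (only alignmentsHistory and stack() come from the documentation above; the helper itself is illustrative):

    ```scala
    import org.platanios.tensorflow.api._
    import org.platanios.tensorflow.api.ops.rnn.attention.AttentionWrapperState

    // `state` stands for a final AttentionWrapperState obtained by running an
    // attention-wrapped cell over a sequence (construction elided here).
    def stackAlignmentsHistory[S, SS, AS, ASS](
        state: AttentionWrapperState[S, SS, AS, ASS]): Seq[Output] = {
      // Each TensorArray holds one alignment tensor per time step; stack()
      // concatenates them along a new leading time axis.
      state.alignmentsHistory.map(_.stack())
    }
    ```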

  4. class BahdanauAttention extends SimpleAttention

    Bahdanau-style (additive) attention scoring.

    This attention has two forms. The first is standard Bahdanau attention, as described in: ["Neural Machine Translation by Jointly Learning to Align and Translate.", ICLR 2015](https://arxiv.org/abs/1409.0473).

    The second is a normalized form inspired by the weight normalization method described in: ["Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.", NIPS 2016](https://arxiv.org/abs/1602.07868).
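    For reference, the additive score from the first paper has the form $e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)$, where $s_{i-1}$ is the previous decoder (query) state and $h_j$ is the $j$-th memory state; the normalized form applies weight normalization to the projection vector $v_a$.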

  5. class LuongAttention extends SimpleAttention

    Luong-style (multiplicative) attention scoring.

    This attention has two forms. The first is standard Luong attention, as described in: ["Effective Approaches to Attention-based Neural Machine Translation.", EMNLP 2015](https://arxiv.org/abs/1508.04025).

    The second is the scaled form inspired partly by the normalized form of Bahdanau attention. To enable the second form, construct the object with weightsScale set to the value of a scalar scaling variable.
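    For reference, the general multiplicative score from the paper above is $\mathrm{score}(h_t, \bar{h}_s) = h_t^\top W_a \bar{h}_s$, where $h_t$ is the current query state and $\bar{h}_s$ a memory state; in the scaled form, this score is additionally multiplied by the scalar weightsScale variable.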

  6. abstract class SimpleAttention extends Attention[Output, core.Shape]

    Base class for attention models that use the previous alignment as their state.
