Class RNN

  • All Implemented Interfaces:
    Block

    public class RNN
    extends RecurrentBlock
RNN is an implementation of a recurrent neural network that applies a single-gate recurrent layer to its input. Two activation functions are supported: ReLU and Tanh.

The current implementation follows the [paper](https://crl.ucsd.edu/~elman/Papers/fsit.pdf) Finding Structure in Time (Elman, 1990).

The RNN operator is formulated as follows:

With ReLU activation function: \(h_t = \text{ReLU}(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})\)

    With Tanh activation function: \(h_t = \tanh(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})\)
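
    The sketch below shows one plausible way to construct and run this block through DJL's builder API and the generic Block.forward entry point. The specific builder options used here (setStateSize, setNumLayers, setActivation, optBatchFirst) and all sizes are illustrative assumptions, not prescribed usage:

    ```java
    import ai.djl.ndarray.NDArray;
    import ai.djl.ndarray.NDList;
    import ai.djl.ndarray.NDManager;
    import ai.djl.ndarray.types.DataType;
    import ai.djl.ndarray.types.Shape;
    import ai.djl.nn.recurrent.RNN;
    import ai.djl.training.ParameterStore;

    public class RnnExample {
        public static void main(String[] args) {
            try (NDManager manager = NDManager.newBaseManager()) {
                // Single-layer RNN with ReLU activation and 64 hidden units.
                // (Hypothetical configuration for illustration only.)
                RNN rnn = RNN.builder()
                        .setStateSize(64)
                        .setNumLayers(1)
                        .setActivation(RNN.Activation.RELU)
                        .optBatchFirst(true) // input laid out as (batch, time, features)
                        .build();

                // Initialize parameters for a (batch=8, time=10, features=32) input.
                rnn.initialize(manager, DataType.FLOAT32, new Shape(8, 10, 32));

                NDArray input = manager.randomNormal(new Shape(8, 10, 32));
                NDList output =
                        rnn.forward(new ParameterStore(manager, false), new NDList(input), false);

                // Hidden states for every time step: expected shape (8, 10, 64).
                System.out.println(output.head().getShape());
            }
        }
    }
    ```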