org.platanios.tensorflow.api.learn.layers.rnn.cell
Parameters:
- Name scope (also acting as variable scope) for this layer.
- RNN cell on which to perform dropout.
- Keep probability for the input of the RNN cell.
- Keep probability for the output of the RNN cell.
- Keep probability for the output state of the RNN cell.
- Optional random seed which, combined with the graph-level seed, is used to generate the seed pair for the random number generator.
RNN cell wrapper that applies dropout to the inputs, outputs, and/or state of the provided RNN cell.
Note that currently a different dropout mask is sampled for each time step of the RNN (i.e., this is not the variational recurrent dropout method described in ["A Theoretically Grounded Application of Dropout in Recurrent Neural Networks"](https://arxiv.org/abs/1512.05287)).
Note also that for LSTM cells, no dropout is applied to the memory tensor of the state. It is only applied to the state tensor.
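The distinction between per-time-step and variational dropout can be sketched in plain Scala. This is a hypothetical illustration, not the library's implementation: `DropoutMasks`, `perStep`, and `variational` are names invented here. Standard (per-step) dropout samples a fresh binary mask at every time step, while variational dropout would sample one mask and reuse it across all steps.

```scala
import scala.util.Random

object DropoutMasks {
  // Sample an inverted-dropout mask: each element is kept with probability
  // `keepProb`, and kept elements are scaled by 1/keepProb so the expected
  // value of the masked input is unchanged.
  def mask(size: Int, keepProb: Double, rng: Random): Vector[Double] =
    Vector.fill(size)(if (rng.nextDouble() < keepProb) 1.0 / keepProb else 0.0)

  // Per-time-step dropout: a new mask is sampled for every step,
  // which is the behavior described above for this wrapper.
  def perStep(inputs: Seq[Vector[Double]], keepProb: Double, rng: Random): Seq[Vector[Double]] =
    inputs.map(x => x.lazyZip(mask(x.length, keepProb, rng)).map(_ * _))

  // Variational dropout: a single mask is shared across all steps
  // (NOT what this wrapper currently does).
  def variational(inputs: Seq[Vector[Double]], keepProb: Double, rng: Random): Seq[Vector[Double]] = {
    val m = mask(inputs.head.length, keepProb, rng)
    inputs.map(x => x.lazyZip(m).map(_ * _))
  }
}
```

With a shared mask, identical inputs at different time steps produce identical masked outputs; with per-step masks they generally do not, which is exactly the property the variational method exploits for better regularization of recurrent connections.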