Interface RecurrentLayer

    • Method Detail

      • rnnTimeStep

        INDArray rnnTimeStep(INDArray input,
                             LayerWorkspaceMgr workspaceMgr)
        Do one or more time steps using the previous time step state stored in stateMap.
        Can be used to efficiently do the forward pass one (or n) time steps at a time, instead of always running the forward pass from t=0.
        If stateMap is empty, a default initialization (usually zeros) is used.
        Implementations also update stateMap at the end of this method.
        Parameters:
        input - Input to this layer
        workspaceMgr - Workspace manager to use for this operation
        Returns:
        Activations
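        A minimal usage sketch (not part of this interface's contract): it builds a small network whose first layer implements RecurrentLayer, then steps it one time step at a time. The network configuration, layer sizes, and the use of LayerWorkspaceMgr.noWorkspaces() are illustrative assumptions.

        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
        import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
        import org.deeplearning4j.nn.conf.layers.LSTM;
        import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
        import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.activations.Activation;
        import org.nd4j.linalg.api.ndarray.INDArray;
        import org.nd4j.linalg.factory.Nd4j;
        import org.nd4j.linalg.lossfunctions.LossFunctions;

        public class RnnTimeStepSketch {
            public static void main(String[] args) {
                // Tiny network: one LSTM layer (nIn=3, nOut=4) followed by an RNN output layer
                MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                        .list()
                        .layer(new LSTM.Builder().nIn(3).nOut(4).activation(Activation.TANH).build())
                        .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MSE)
                                .nIn(4).nOut(2).activation(Activation.IDENTITY).build())
                        .build();
                MultiLayerNetwork net = new MultiLayerNetwork(conf);
                net.init();

                // The LSTM layer implementation implements RecurrentLayer, so this cast is valid
                RecurrentLayer lstm = (RecurrentLayer) net.getLayer(0);
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();

                // Feed five single-time-step inputs of shape [minibatch, nIn, timeSteps] = [1, 3, 1].
                // Each call continues from the state left in stateMap by the previous call.
                for (int t = 0; t < 5; t++) {
                    INDArray input = Nd4j.rand(new int[]{1, 3, 1});
                    INDArray activations = lstm.rnnTimeStep(input, mgr);
                    System.out.println("step " + t + " -> " + java.util.Arrays.toString(activations.shape()));
                }

                // Before an unrelated sequence, reset the stored state (see rnnClearPreviousState below)
                lstm.rnnClearPreviousState();
            }
        }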
      • rnnGetPreviousState

        Map<String,INDArray> rnnGetPreviousState()
        Returns a shallow copy of the RNN stateMap (which contains the stored history for use in methods such as rnnTimeStep).
      • rnnSetPreviousState

        void rnnSetPreviousState(Map<String,INDArray> stateMap)
        Set the stateMap (stored history). Values set using this method will be used in the next call to rnnTimeStep().
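        A brief sketch (illustrative, not taken from this documentation) of using rnnGetPreviousState() and rnnSetPreviousState() together to snapshot and later restore the stored history, for example to "rewind" streaming inference. Here 'layer' is assumed to implement RecurrentLayer (e.g. cast from net.getLayer(0)), and LayerWorkspaceMgr.noWorkspaces() is an assumption.

        import java.util.Map;
        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.api.ndarray.INDArray;

        public class RnnStateSnapshotSketch {
            static INDArray stepFromSnapshot(RecurrentLayer layer, INDArray in1, INDArray in2) {
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();

                // Shallow copy of the current stateMap; dup() the values as well if the
                // arrays themselves must be protected from later modification
                Map<String, INDArray> snapshot = layer.rnnGetPreviousState();

                layer.rnnTimeStep(in1, mgr);    // advances stateMap

                // Roll the layer back to the snapshot and take a different step instead
                layer.rnnSetPreviousState(snapshot);
                return layer.rnnTimeStep(in2, mgr);
            }
        }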
      • rnnClearPreviousState

        void rnnClearPreviousState()
        Reset/clear the stateMap used by rnnTimeStep() and the tBpttStateMap used by rnnActivateUsingStoredState().
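        A short sketch of where clearing matters: without rnnClearPreviousState(), the second sequence would continue from the end of the first rather than from the default (zero) initialization. 'layer', the input shapes ([minibatch, nIn, timeSteps]) and LayerWorkspaceMgr.noWorkspaces() are assumptions for illustration.

        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.api.ndarray.INDArray;

        public class RnnClearStateSketch {
            static void processTwoSequences(RecurrentLayer layer, INDArray seqA, INDArray seqB) {
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
                layer.rnnTimeStep(seqA, mgr);      // stateMap now holds the state after seqA
                layer.rnnClearPreviousState();     // forget seqA; also clears tBpttStateMap
                layer.rnnTimeStep(seqB, mgr);      // seqB starts from the default initialization
            }
        }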
      • rnnActivateUsingStoredState

        INDArray rnnActivateUsingStoredState(INDArray input,
                                             boolean training,
                                             boolean storeLastForTBPTT,
                                             LayerWorkspaceMgr workspaceMgr)
        Similar to rnnTimeStep, this method computes activations using the state stored in the stateMap as the initialization. However, unlike rnnTimeStep, this method does not alter the stateMap; therefore, multiple calls to this method (with identical input) will:
        (a) result in the same output
        (b) leave the state maps (both stateMap and tBpttStateMap) in an identical state
        Parameters:
        input - Layer input
        training - If true: training mode. Otherwise: test mode
        storeLastForTBPTT - If true: store the final state in tBpttStateMap for use in truncated BPTT training
        workspaceMgr - Workspace manager to use for this operation
        Returns:
        Layer activations
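        A small sketch of the property described above: because rnnActivateUsingStoredState does not modify stateMap, two calls with identical input produce identical activations. 'layer' is an assumed RecurrentLayer instance, and LayerWorkspaceMgr.noWorkspaces() is an assumption.

        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.api.ndarray.INDArray;

        public class RnnStoredStateActivationSketch {
            static void compareRepeatedCalls(RecurrentLayer layer, INDArray input) {
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
                boolean training = false;
                boolean storeLastForTBPTT = false;

                INDArray first = layer.rnnActivateUsingStoredState(input, training, storeLastForTBPTT, mgr);
                INDArray second = layer.rnnActivateUsingStoredState(input, training, storeLastForTBPTT, mgr);

                // Same input + unmodified stateMap -> identical output
                System.out.println("identical activations: " + first.equals(second));
            }
        }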
      • rnnGetTBPTTState

        Map<String,INDArray> rnnGetTBPTTState()
        Get the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer. The TBPTT state is used to store intermediate activations/state between parameter updates when doing TBPTT learning.
        Returns:
        State for the RNN layer
      • rnnSetTBPTTState

        void rnnSetTBPTTState(Map<String,INDArray> state)
        Set the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer. The TBPTT state is used to store intermediate activations/state between parameter updates when doing TBPTT learning.
        Parameters:
        state - TBPTT state to set
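        A hedged sketch of saving and restoring the TBPTT state with rnnGetTBPTTState() and rnnSetTBPTTState(), for example to retry a truncated-BPTT segment after an interrupted update. The map keys and contents are implementation specific (for an LSTM, typically the previous activations and memory cell state); 'layer' is an assumed RecurrentLayer instance.

        import java.util.HashMap;
        import java.util.Map;
        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.nd4j.linalg.api.ndarray.INDArray;

        public class TbpttStateSketch {
            static void snapshotAndRestore(RecurrentLayer layer) {
                // Copy the intermediate state kept between TBPTT parameter updates
                Map<String, INDArray> snapshot = new HashMap<>(layer.rnnGetTBPTTState());

                // ... forward pass / parameter update that modifies the TBPTT state ...

                // Put the saved state back so the next TBPTT segment continues from the snapshot
                layer.rnnSetTBPTTState(snapshot);
            }
        }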
      • tbpttBackpropGradient

        Pair<Gradient,INDArray> tbpttBackpropGradient(INDArray epsilon,
                                                      int tbpttBackLength,
                                                      LayerWorkspaceMgr workspaceMgr)
        Truncated BPTT equivalent of Layer.backpropGradient(). The primary difference is that the forward pass for truncated BPTT is done using the stored state as the initialization, rather than the zero initialization used for standard BPTT.
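        A rough sketch of a single truncated-BPTT backward segment (not from the documentation above). It assumes a forward pass over the segment has already been run, that epsilon (dL/dOutput) comes from the layer above, and that the TBPTT length of 20 is arbitrary; 'var' is used to avoid committing to a specific Pair class, the return type being Pair<Gradient,INDArray> as in the signature.

        import org.deeplearning4j.nn.api.layers.RecurrentLayer;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.api.ndarray.INDArray;

        public class TbpttBackpropSketch {
            static INDArray backwardSegment(RecurrentLayer layer, INDArray epsilonFromLayerAbove) {
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
                int tbpttBackLength = 20;   // how many time steps to backpropagate through

                // First element: gradients for this layer's parameters;
                // second element: epsilon to pass on to the layer below
                var gradAndEpsilon = layer.tbpttBackpropGradient(epsilonFromLayerAbove, tbpttBackLength, mgr);
                System.out.println("parameter gradients: " + gradAndEpsilon.getFirst().gradientForVariable().keySet());
                return gradAndEpsilon.getSecond();
            }
        }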