Class SimpleRnn

    • Method Detail

      • rnnTimeStep

        public INDArray rnnTimeStep(INDArray input,
                                    LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: RecurrentLayer
        Do one or more time steps using the previous time step state stored in stateMap.
        Can be used to efficiently do a forward pass one or n time steps at a time (instead of always doing the forward pass from t=0).
        If stateMap is empty, default initialization (usually zeros) is used.
        Implementations also update stateMap at the end of this method.
        Parameters:
        input - Input to this layer
        workspaceMgr - Workspace manager
        Returns:
        activations
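        A minimal usage sketch (assuming net is an already-initialized MultiLayerNetwork containing this layer; in practice this method is usually invoked via the network's rnnTimeStep rather than on the layer directly):

        import org.nd4j.linalg.api.ndarray.INDArray;
        import org.nd4j.linalg.factory.Nd4j;

        // Streaming inference, one time step at a time:
        int nIn = 10;                                    // example input size
        INDArray step = Nd4j.rand(new int[]{1, nIn, 1}); // shape [miniBatch, nIn, timeSeriesLength=1]
        INDArray out = net.rnnTimeStep(step);            // uses the stored state, then updates it
        net.rnnClearPreviousState();                     // reset the stored state before a new sequence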
      • rnnActivateUsingStoredState

        public INDArray rnnActivateUsingStoredState(INDArray input,
                                                    boolean training,
                                                    boolean storeLastForTBPTT,
                                                    LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: RecurrentLayer
        Similar to rnnTimeStep, this method computes activations using the state stored in the stateMap as the initialization. However, unlike rnnTimeStep this method does not alter the stateMap; therefore, unlike rnnTimeStep, multiple calls to this method (with identical input) will:
        (a) result in the same output
        (b) leave the state maps (both stateMap and tBpttStateMap) in an identical state
        Parameters:
        input - Layer input
        training - If true: training mode. Otherwise: test mode
        storeLastForTBPTT - If true: store the final state in tBpttStateMap for use in truncated BPTT training
        Returns:
        Layer activations
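        A brief sketch at the layer level (assuming layer is a SimpleRnn instance and input is a rank-3 [miniBatch, nIn, timeSeriesLength] array; LayerWorkspaceMgr.noWorkspaces() supplies a manager that works outside workspaces):

        LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
        INDArray a1 = layer.rnnActivateUsingStoredState(input, false, true, mgr);
        INDArray a2 = layer.rnnActivateUsingStoredState(input, false, true, mgr);
        // a1 equals a2, and stateMap/tBpttStateMap are left unchanged by both calls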
      • backpropGradient

        public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon,
                                                        LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Calculate the gradient relative to the error in the next layer
        Specified by:
        backpropGradient in interface Layer
        Overrides:
        backpropGradient in class BaseLayer<SimpleRnn>
        Parameters:
        epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
        workspaceMgr - Workspace manager
        Returns:
        Pair where Gradient is the gradient for this layer, and INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
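        A brief sketch of consuming the result (assuming a prior forward pass has set the layer's input, epsilon matches the shape of this layer's activations, mgr is a LayerWorkspaceMgr, and Pair is ND4J's Pair type, whose package varies by version):

        Pair<Gradient, INDArray> result = layer.backpropGradient(epsilon, mgr);
        Gradient grad = result.getFirst();     // parameter gradients (weights, biases) for this layer
        INDArray epsOut = result.getSecond();  // dL/dIn, propagated to the layer below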
      • tbpttBackpropGradient

        public Pair<Gradient,INDArray> tbpttBackpropGradient(INDArray epsilon,
                                                             int tbpttBackLength,
                                                             LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: RecurrentLayer
        Truncated BPTT equivalent of Layer.backpropGradient(). The primary difference is that, in the context of truncated BPTT, the forward pass is done using the stored state, rather than from a zero initialization as in standard BPTT.
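        Truncated BPTT is normally enabled at configuration time rather than by calling this method directly. A minimal configuration sketch (builder names as in recent DL4J versions; the SimpleRnn below is the configuration class in org.deeplearning4j.nn.conf.layers.recurrent, not this implementation class):

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new SimpleRnn.Builder().nIn(10).nOut(20).build())
                .layer(new RnnOutputLayer.Builder().nIn(20).nOut(5).build())
                .backpropType(BackpropType.TruncatedBPTT)
                .tBPTTForwardLength(20)   // forward-pass truncation length, in time steps
                .tBPTTBackwardLength(20)  // backward-pass truncation length, in time steps
                .build();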
      • isPretrainLayer

        public boolean isPretrainLayer()
        Description copied from interface: Layer
        Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.)
        Returns:
        true if the layer can be pretrained (using fit(INDArray)), false otherwise
      • activate

        public INDArray activate(boolean training,
                                 LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Perform a forward pass and return the activations array, using the input most recently set on this layer
        Specified by:
        activate in interface Layer
        Overrides:
        activate in class BaseLayer<SimpleRnn>
        Parameters:
        training - training or test mode
        workspaceMgr - Workspace manager
        Returns:
        the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
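        A minimal sketch (assuming layer is an initialized SimpleRnn instance and input has shape [miniBatch, nIn, timeSeriesLength]):

        LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
        layer.setInput(input, mgr);                         // set the input first
        INDArray activations = layer.activate(false, mgr);  // test mode; output shape [miniBatch, nOut, timeSeriesLength]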
      • hasLayerNorm

        public boolean hasLayerNorm()
        Description copied from class: BaseLayer
        Does this layer support layer normalization, and is it enabled? Only Dense and SimpleRNN layers support layer normalization.
        Overrides:
        hasLayerNorm in class BaseLayer<SimpleRnn>
        Returns:
        True if layer normalization is enabled on this layer, false otherwise
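        Layer normalization is enabled at configuration time. A brief sketch (assuming the configuration builder exposes hasLayerNorm, as in recent DL4J versions; the SimpleRnn below is the configuration class in org.deeplearning4j.nn.conf.layers.recurrent):

        SimpleRnn rnnConf = new SimpleRnn.Builder()
                .nIn(10)
                .nOut(20)
                .hasLayerNorm(true)  // enable layer normalization for this layer
                .build();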