Interface Layer

    • Method Detail

      • setCacheMode

        void setCacheMode(CacheMode mode)
        Set the given CacheMode for the current layer.
        Parameters:
        mode - The cache mode to set
      • calcRegularizationScore

        double calcRegularizationScore(boolean backpropOnlyParams)
        Calculate the regularization component of the score for the parameters in this layer - for example, the L1, L2 and/or weight decay components of the loss function.
        Parameters:
        backpropOnlyParams - If true: calculate the regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
        Returns:
        The regularization score for this layer
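        The regularization components named above can be sketched in plain Java (this is an illustrative sketch, not the DL4J implementation - DL4J computes this over INDArray parameters):

```java
// Hypothetical sketch: an L1 + L2 regularization score over a layer's
// weight parameters, as described for calcRegularizationScore.
public class RegScoreSketch {
    static double regularizationScore(double[] weights, double l1, double l2) {
        double score = 0.0;
        for (double w : weights) {
            score += l1 * Math.abs(w);   // L1 component: l1 * |w|
            score += l2 * 0.5 * w * w;   // L2 component: 0.5 * l2 * w^2
        }
        return score;
    }
}
```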
      • type

        Layer.Type type()
        Returns the layer type
        Returns:
        The layer type
      • backpropGradient

        Pair<Gradient,INDArray> backpropGradient(INDArray epsilon,
                                                 LayerWorkspaceMgr workspaceMgr)
        Calculate the gradient relative to the error in the next layer.
        Parameters:
        epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
        workspaceMgr - Workspace manager
        Returns:
        Pair where Gradient is the gradient for this layer, and INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
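        The dense-layer case described above can be sketched with plain arrays (an illustrative sketch, not the DL4J API; weights, activations, and epsilon are hypothetical inputs, and sigmoid is assumed as the activation):

```java
// Hypothetical sketch of the backprop step: given epsilon = dC/da for this
// layer, compute delta = epsilon (elementwise *) sigma'(z), and return
// dC/dIn = W^T * delta - the epsilon passed to the layer below.
public class BackpropSketch {
    // Sigmoid derivative, evaluated from the activation a = sigma(z)
    static double sigmaPrimeFromActivation(double a) {
        return a * (1.0 - a);
    }

    // weights[i][j]: weight from input j to output i
    static double[] backpropGradient(double[][] weights, double[] activations, double[] epsilon) {
        int nOut = weights.length, nIn = weights[0].length;
        // delta = dC/dz = epsilon (elementwise *) sigma'(z)
        double[] delta = new double[nOut];
        for (int i = 0; i < nOut; i++) {
            delta[i] = epsilon[i] * sigmaPrimeFromActivation(activations[i]);
        }
        // dC/dIn = W^T * delta
        double[] epsOut = new double[nIn];
        for (int j = 0; j < nIn; j++) {
            for (int i = 0; i < nOut; i++) {
                epsOut[j] += weights[i][j] * delta[i];
            }
        }
        return epsOut;
    }
}
```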
      • activate

        INDArray activate(boolean training,
                          LayerWorkspaceMgr workspaceMgr)
        Perform forward pass and return the activations array with the last set input
        Parameters:
        training - training or test mode
        workspaceMgr - Workspace manager
        Returns:
        the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
      • activate

        INDArray activate(INDArray input,
                          boolean training,
                          LayerWorkspaceMgr mgr)
        Perform forward pass and return the activations array with the specified input
        Parameters:
        input - the input to use
        training - train or test mode
        mgr - Workspace manager.
        Returns:
        Activations array. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
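        The forward pass a dense layer performs in activate() can be sketched with plain arrays (an illustrative sketch, not the DL4J API; sigmoid is assumed as the activation function):

```java
// Hypothetical sketch of a dense-layer forward pass: a = sigma(W*x + b)
public class ActivateSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // weights[i][j]: weight from input j to output i
    static double[] activate(double[][] weights, double[] bias, double[] input) {
        int nOut = weights.length, nIn = weights[0].length;
        double[] out = new double[nOut];
        for (int i = 0; i < nOut; i++) {
            double z = bias[i];                 // pre-activation z = W*x + b
            for (int j = 0; j < nIn; j++) {
                z += weights[i][j] * input[j];
            }
            out[i] = sigmoid(z);                // activation a = sigma(z)
        }
        return out;
    }
}
```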
      • setIndex

        void setIndex(int index)
        Set the layer index.
      • getIndex

        int getIndex()
        Get the layer index.
      • getIterationCount

        int getIterationCount()
        Returns:
        The current iteration count (number of parameter updates) for the layer/network
      • getEpochCount

        int getEpochCount()
        Returns:
        The current epoch count (number of training epochs passed) for the layer/network
      • setIterationCount

        void setIterationCount(int iterationCount)
        Set the current iteration count (number of parameter updates) for the layer/network
      • setEpochCount

        void setEpochCount(int epochCount)
        Set the current epoch count (number of epochs passed) for the layer/network
      • setInputMiniBatchSize

        void setInputMiniBatchSize(int size)
        Set current/last input mini-batch size.
        Used for score and gradient calculations. Mini batch size may be different from getInput().size(0) due to reshaping operations - for example, when using RNNs with DenseLayer and OutputLayer. Called automatically during forward pass.
      • getInputMiniBatchSize

        int getInputMiniBatchSize()
        Get current/last input mini-batch size, as set by setInputMiniBatchSize(int)
        See Also:
        setInputMiniBatchSize(int)
      • isPretrainLayer

        boolean isPretrainLayer()
        Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
        Returns:
        true if the layer can be pretrained (using fit(INDArray)), false otherwise
      • clearNoiseWeightParams

        void clearNoiseWeightParams()
      • allowInputModification

        void allowInputModification(boolean allow)
        A performance optimization: mark whether the layer is allowed to modify its input array in-place. In many cases, this is totally safe - in others, the input array will be shared by multiple layers, and hence it's not safe to modify the input array. This is usually used by ops such as dropout.
        Parameters:
        allow - If true: the input array is safe to modify. If false: the input array should be copied before it is modified (i.e., in-place modifications are un-safe)
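        The copy-vs-in-place pattern described above can be sketched as follows (an illustrative sketch, not the DL4J implementation; applyScale is a hypothetical op standing in for something like dropout):

```java
// Hypothetical sketch: an op that either modifies its input in place
// or works on a defensive copy, depending on the allowInputModification flag.
public class InPlaceSketch {
    private boolean inputModificationAllowed = false;

    public void allowInputModification(boolean allow) {
        this.inputModificationAllowed = allow;
    }

    // Scale the input by p, in place only when permitted
    public double[] applyScale(double[] input, double p) {
        double[] target = inputModificationAllowed
                ? input                                          // safe: no other layer shares this array
                : java.util.Arrays.copyOf(input, input.length);  // unsafe: copy before modifying
        for (int i = 0; i < target.length; i++) {
            target[i] *= p;
        }
        return target;
    }
}
```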
      • feedForwardMaskArray

        Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray,
                                                      MaskState currentMaskState,
                                                      int minibatchSize)
        Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently - for example, bidirectional RNNs and normal RNNs operate differently with masks: the former set activations to 0 outside of the data-present region (and keep the mask active for future layers, such as dense layers), whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case.
        This is also used for example for networks that contain global pooling layers, arbitrary preprocessors, etc.
        Parameters:
        maskArray - Mask array to set
        currentMaskState - Current state of the mask - see MaskState
        minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
        Returns:
        New mask array after this layer, along with the new mask state.
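        The zeroing behaviour described above (as used by e.g. bidirectional RNNs) can be sketched in plain Java (an illustrative sketch, not the DL4J implementation):

```java
// Hypothetical sketch: zero out activations at padded time steps,
// leaving the mask itself available for the layers that follow.
public class MaskSketch {
    // activations[t]: activation at time step t; mask[t]: 1.0 = real data, 0.0 = padding
    static double[] applyMask(double[] activations, double[] mask) {
        double[] out = new double[activations.length];
        for (int t = 0; t < activations.length; t++) {
            out[t] = activations[t] * mask[t]; // padded steps contribute 0
        }
        return out;
    }
}
```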
      • getHelper

        LayerHelper getHelper()
        Returns:
        The layer helper, if any