Class RnnLossLayer

    • Method Detail

      • backpropGradient

        public Pair<Gradient,​INDArray> backpropGradient​(INDArray epsilon,
                                                              LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Calculate the gradient relative to the error in the next layer
        Specified by:
        backpropGradient in interface Layer
        Overrides:
        backpropGradient in class BaseLayer<RnnLossLayer>
        Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
        workspaceMgr - Workspace manager
        Returns:
        Pair where Gradient is gradient for this layer, INDArray is epsilon (activation gradient) needed by next layer, but before element-wise multiply by sigmaPrime(z). So for standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
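The relationship between the incoming epsilon and the returned activation gradient can be sketched in plain Java. This is a hedged illustration only: the 1-D arrays, the sigmoid activation, and the weight matrix `w` are hypothetical stand-ins, not DL4J's INDArray-based implementation.

```java
import java.util.Arrays;

// Sketch: epsilon (dC/da) arrives from the layer above; this layer forms
// delta = epsilon .* sigmaPrime(z), then passes dL/dIn = w^T * delta down.
public class BackpropSketch {

    // delta^(L) = epsilon .* sigmaPrime(z), element-wise.
    static double[] delta(double[] epsilon, double[] z) {
        double[] d = new double[epsilon.length];
        for (int i = 0; i < epsilon.length; i++) {
            double s = 1.0 / (1.0 + Math.exp(-z[i]));   // sigma(z), illustrative choice
            d[i] = epsilon[i] * s * (1.0 - s);          // sigmaPrime(z) = s * (1 - s)
        }
        return d;
    }

    // dL/dIn = w^T * delta: the epsilon handed to the layer below,
    // before that layer multiplies by its own sigmaPrime(z).
    static double[] dLdIn(double[][] w, double[] delta) {
        double[] out = new double[w[0].length];
        for (int i = 0; i < w.length; i++)
            for (int j = 0; j < w[0].length; j++)
                out[j] += w[i][j] * delta[i];
        return out;
    }

    public static void main(String[] args) {
        double[] epsilon = {0.5, -0.25};              // dC/da from the layer above
        double[] z       = {0.0, 1.0};                // pre-activations
        double[][] w     = {{1.0, 2.0}, {3.0, 4.0}};  // this layer's weights
        System.out.println(Arrays.toString(dLdIn(w, delta(epsilon, z))));
    }
}
```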
      • calcRegularizationScore

        public double calcRegularizationScore​(boolean backpropParamsOnly)
        Description copied from interface: Layer
        Calculate the regularization component of the score, for the parameters in this layer
        For example, the L1, L2 and/or weight decay components of the loss function
        Specified by:
        calcRegularizationScore in interface Layer
        Overrides:
        calcRegularizationScore in class BaseLayer<RnnLossLayer>
        Parameters:
        backpropParamsOnly - If true: calculate regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
        Returns:
        the regularization score for this layer
      • f1Score

        public double f1Score​(DataSet data)
        Description copied from interface: Classifier
        Sets the input and labels and returns a score for the prediction with respect to the true labels
        Specified by:
        f1Score in interface Classifier
        Parameters:
        data - the data to score
        Returns:
        the score for the given input,label pairs
      • f1Score

        public double f1Score​(INDArray examples,
                              INDArray labels)
        Returns the F1 score for the given examples. Think of this as analogous to a percentage correct: the higher the number, the more predictions were right, on a scale from 0 to 1.
        Specified by:
        f1Score in interface Classifier
        Parameters:
        examples - the examples to classify (one example in each row)
        labels - the true labels
        Returns:
        the F1 score for the given examples and labels
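As a rough illustration of what an F1 score measures, here is a plain-Java sketch of the binary case: the harmonic mean of precision and recall over predicted versus true labels. This shows the metric itself, not DL4J's internal evaluation code.

```java
// Hedged sketch of a binary F1 score. tp/fp/fn counts are tallied over
// predicted vs. actual labels; F1 = 2 * P * R / (P + R), in [0, 1].
public class F1Sketch {
    static double f1(int[] predicted, int[] actual) {
        int tp = 0, fp = 0, fn = 0;
        for (int i = 0; i < predicted.length; i++) {
            if (predicted[i] == 1 && actual[i] == 1) tp++;       // true positive
            else if (predicted[i] == 1 && actual[i] == 0) fp++;  // false positive
            else if (predicted[i] == 0 && actual[i] == 1) fn++;  // false negative
        }
        double precision = tp / (double) (tp + fp);
        double recall    = tp / (double) (tp + fn);
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        int[] pred   = {1, 0, 1, 1};
        int[] actual = {1, 0, 0, 1};
        System.out.println(f1(pred, actual));   // higher is better, max 1.0
    }
}
```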
      • numLabels

        public int numLabels()
        Description copied from interface: Classifier
        Returns the number of possible labels
        Specified by:
        numLabels in interface Classifier
        Returns:
        the number of possible labels for this classifier
      • fit

        public void fit​(DataSetIterator iter)
        Description copied from interface: Classifier
        Train the model based on the DataSetIterator
        Specified by:
        fit in interface Classifier
        Parameters:
        iter - the iterator to train on
      • predict

        public int[] predict​(INDArray examples)
        Description copied from interface: Classifier
        Takes in a list of examples. For each row, returns a label
        Specified by:
        predict in interface Classifier
        Parameters:
        examples - the examples to classify (one example in each row)
        Returns:
        the labels for each example
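The usual way per-row class probabilities become integer labels is an argmax over each row. The sketch below illustrates that idea with plain 2-D arrays standing in for the network's INDArray output; it is not DL4J's implementation.

```java
import java.util.Arrays;

// Sketch: for each example (row), return the index of the largest
// class probability as that example's predicted label.
public class PredictSketch {
    static int[] predict(double[][] probabilities) {
        int[] labels = new int[probabilities.length];
        for (int r = 0; r < probabilities.length; r++) {
            int best = 0;
            for (int c = 1; c < probabilities[r].length; c++)
                if (probabilities[r][c] > probabilities[r][best]) best = c;
            labels[r] = best;
        }
        return labels;
    }

    public static void main(String[] args) {
        double[][] probs = {{0.1, 0.7, 0.2},    // example 0 -> class 1
                            {0.8, 0.1, 0.1}};   // example 1 -> class 0
        System.out.println(Arrays.toString(predict(probs)));
    }
}
```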
      • predict

        public List<String> predict​(DataSet dataSet)
        Description copied from interface: Classifier
        Takes in a DataSet of examples. For each row, returns a label
        Specified by:
        predict in interface Classifier
        Parameters:
        dataSet - the examples to classify
        Returns:
        the labels for each example
      • fit

        public void fit​(INDArray examples,
                        INDArray labels)
        Description copied from interface: Classifier
        Fit the model
        Specified by:
        fit in interface Classifier
        Parameters:
        examples - the examples to classify (one example in each row)
        labels - the example labels (a binary outcome matrix)
      • fit

        public void fit​(DataSet data)
        Description copied from interface: Classifier
        Fit the model
        Specified by:
        fit in interface Classifier
        Parameters:
        data - the data to train on
      • fit

        public void fit​(INDArray examples,
                        int[] labels)
        Description copied from interface: Classifier
        Fit the model
        Specified by:
        fit in interface Classifier
        Parameters:
        examples - the examples to classify (one example in each row)
        labels - the labels for each example (the number of labels must match the number of rows in the examples)
      • activate

        public INDArray activate​(boolean training,
                                 LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Perform forward pass and return the activations array with the last set input
        Specified by:
        activate in interface Layer
        Overrides:
        activate in class BaseLayer<RnnLossLayer>
        Parameters:
        training - training or test mode
        workspaceMgr - Workspace manager
        Returns:
        the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
      • isPretrainLayer

        public boolean isPretrainLayer()
        Description copied from interface: Layer
        Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
        Specified by:
        isPretrainLayer in interface Layer
        Returns:
        true if the layer can be pretrained (using fit(INDArray)), false otherwise
      • feedForwardMaskArray

        public Pair<INDArray,​MaskState> feedForwardMaskArray​(INDArray maskArray,
                                                                   MaskState currentMaskState,
                                                                   int minibatchSize)
        Description copied from interface: Layer
        Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently - for example, bidirectional RNNs and normal RNNs operate differently with masks (the former sets activations to 0 outside of the data-present region, and keeps the mask active for future layers such as dense layers, whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case).
        This is also used for example for networks that contain global pooling layers, arbitrary preprocessors, etc.
        Specified by:
        feedForwardMaskArray in interface Layer
        Overrides:
        feedForwardMaskArray in class AbstractLayer<RnnLossLayer>
        Parameters:
        maskArray - Mask array to set
        currentMaskState - Current state of the mask - see MaskState
        minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
        Returns:
        New mask array after this layer, along with the new mask state.
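The zeroing behavior described above can be sketched in plain Java: activations at padded time steps are multiplied by a 0/1 mask, while the mask itself is passed on unchanged for later layers. Shapes and names here are illustrative stand-ins, not DL4J's internals.

```java
// Sketch: per-time-step masking. activations is [timeSteps][features];
// mask holds 1.0 for real data and 0.0 for padding.
public class MaskSketch {
    static double[][] applyMask(double[][] activations, double[] mask) {
        double[][] out = new double[activations.length][];
        for (int t = 0; t < activations.length; t++) {
            out[t] = new double[activations[t].length];
            for (int f = 0; f < activations[t].length; f++)
                out[t][f] = activations[t][f] * mask[t];  // zeroed where mask is 0
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] acts = {{1.0, 2.0}, {3.0, 4.0}, {5.0, 6.0}};
        double[] mask   = {1.0, 1.0, 0.0};   // last time step is padding
        System.out.println(java.util.Arrays.deepToString(applyMask(acts, mask)));
    }
}
```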
      • needsLabels

        public boolean needsLabels()
        Description copied from interface: IOutputLayer
        Returns true if labels are required for this output layer
        Specified by:
        needsLabels in interface IOutputLayer
        Returns:
        true if this output layer requires labels, false otherwise
      • computeScore

        public double computeScore​(double fullNetRegTerm,
                                   boolean training,
                                   LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: IOutputLayer
        Compute score after labels and input have been set.
        Specified by:
        computeScore in interface IOutputLayer
        Parameters:
        fullNetRegTerm - Regularization score (l1/l2/weight decay) for the entire network
        training - whether score should be calculated at train or test time (this affects things like application of dropout, etc)
        Returns:
        score (loss function)
      • computeScoreForExamples

        public INDArray computeScoreForExamples​(double fullNetRegTerm,
                                                LayerWorkspaceMgr workspaceMgr)
        Compute the score for each example individually, after labels and input have been set.
        Specified by:
        computeScoreForExamples in interface IOutputLayer
        Parameters:
        fullNetRegTerm - Regularization score term for the entire network (or, 0.0 to not include regularization)
        Returns:
        A column INDArray of shape [numExamples,1], where entry i is the score of the ith example
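The shape of the result can be sketched in plain Java: one loss value per example, with the shared regularization term added to each entry. The squared-error loss below is an illustrative choice only; the layer uses whatever loss function it was configured with.

```java
// Sketch: per-example score column. scores[i] is the loss of example i
// plus fullNetRegTerm (pass 0.0 to exclude regularization).
public class PerExampleScoreSketch {
    static double[] scoreForExamples(double[][] predictions, double[][] labels,
                                     double fullNetRegTerm) {
        double[] scores = new double[predictions.length];   // [numExamples]
        for (int i = 0; i < predictions.length; i++) {
            double loss = 0.0;
            for (int j = 0; j < predictions[i].length; j++) {
                double d = predictions[i][j] - labels[i][j];
                loss += d * d;                              // illustrative squared error
            }
            scores[i] = loss + fullNetRegTerm;
        }
        return scores;
    }

    public static void main(String[] args) {
        double[][] preds  = {{0.9, 0.1}, {0.2, 0.8}};
        double[][] labels = {{1.0, 0.0}, {0.0, 1.0}};
        System.out.println(java.util.Arrays.toString(
                scoreForExamples(preds, labels, 0.0)));
    }
}
```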