Class BasePretrainNetwork<LayerConfT extends BasePretrainNetwork>

    • Method Detail

      • getCorruptedInput

        public INDArray getCorruptedInput(INDArray x,
                                          double corruptionLevel)
        Corrupts the given input via binomial sampling at the given corruption level
        Parameters:
        x - the input to corrupt
        corruptionLevel - the corruption level (probability that a given input element is corrupted)
        Returns:
        the binomial sampled corrupted input
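        A minimal sketch of the masking corruption this description implies (an
        assumption based on the description above, not the library's exact
        implementation): each element is kept with probability
        1 - corruptionLevel and zeroed otherwise.

            import org.nd4j.linalg.api.ndarray.INDArray;
            import org.nd4j.linalg.factory.Nd4j;

            // Hypothetical stand-in for getCorruptedInput: sample a Bernoulli
            // mask and apply it element-wise to the input.
            static INDArray corruptSketch(INDArray x, double corruptionLevel) {
                // rand > corruptionLevel is true with probability 1 - corruptionLevel
                INDArray mask = Nd4j.rand(x.shape()).gt(corruptionLevel).castTo(x.dataType());
                return x.mul(mask);
            }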
      • sampleHiddenGivenVisible

        public abstract Pair<INDArray,INDArray> sampleHiddenGivenVisible(INDArray v)
        Sample the hidden distribution given the visible
        Parameters:
        v - the visible to sample from
        Returns:
        the hidden mean and sample
      • sampleVisibleGivenHidden

        public abstract Pair<INDArray,INDArray> sampleVisibleGivenHidden(INDArray h)
        Sample the visible distribution given the hidden
        Parameters:
        h - the hidden to sample from
        Returns:
        the visible mean and sample
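        Together, these two methods support alternating (Gibbs) sampling between
        the visible and hidden layers. A hedged usage sketch, assuming layer is
        an instance of a concrete subclass and v is a batch of visible
        activations:

            // One Gibbs step: visible -> hidden -> visible.
            Pair<INDArray, INDArray> h = layer.sampleHiddenGivenVisible(v);
            INDArray hiddenSample = h.getSecond();                 // sampled hidden state
            Pair<INDArray, INDArray> vPrime = layer.sampleVisibleGivenHidden(hiddenSample);
            INDArray reconstructionMean = vPrime.getFirst();       // visible mean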
      • paramTable

        public Map<String,INDArray> paramTable(boolean backpropParamsOnly)
        Description copied from interface: Model
        Table of parameters by key, for backprop. For many models (dense layers, etc.), all parameters are backprop parameters
        Specified by:
        paramTable in interface Model
        Specified by:
        paramTable in interface Trainable
        Overrides:
        paramTable in class BaseLayer<LayerConfT extends BasePretrainNetwork>
        Parameters:
        backpropParamsOnly - If true, return backprop params only. If false, return all params (equivalent to paramTable())
        Returns:
        Parameter table
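        A usage sketch, assuming layer is an instance of this class: iterate the
        table to inspect parameter keys (e.g. "W", "b") and their shapes.

            import java.util.Arrays;
            import java.util.Map;
            import org.nd4j.linalg.api.ndarray.INDArray;

            Map<String, INDArray> table = layer.paramTable(false);   // all params
            for (Map.Entry<String, INDArray> e : table.entrySet()) {
                System.out.println(e.getKey() + " -> " + Arrays.toString(e.getValue().shape()));
            }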
      • setParams

        public void setParams(INDArray params)
        Description copied from interface: Model
        Set the parameters for this model. This expects a linear ndarray, which is then unpacked internally according to the expected parameter ordering of the model
        Specified by:
        setParams in interface Model
        Overrides:
        setParams in class BaseLayer<LayerConfT extends BasePretrainNetwork>
        Parameters:
        params - the parameters for the model
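        A hedged round-trip sketch, assuming layer is an instance of this class:
        the flattened view returned by params() has the ordering that
        setParams(INDArray) expects.

            INDArray flat = layer.params();       // linear view of all parameters
            layer.setParams(flat.mul(0.5));       // same length; unpacked internally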
      • backpropGradient

        public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon,
                                                        LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Calculate the gradient relative to the error in the next layer
        Specified by:
        backpropGradient in interface Layer
        Overrides:
        backpropGradient in class BaseLayer<LayerConfT extends BasePretrainNetwork>
        Parameters:
        epsilon - w^(L+1)*delta^(L+1). Or equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
        workspaceMgr - Workspace manager
        Returns:
        Pair where the Gradient contains the gradients for this layer's parameters, and the INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiplication by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dC/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
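        A usage sketch, assuming layer, epsilon and workspaceMgr are already in
        scope (with epsilon coming from the layer above during backprop):

            Pair<Gradient, INDArray> out = layer.backpropGradient(epsilon, workspaceMgr);
            Gradient g = out.getFirst();          // gradients for this layer's parameters
            INDArray epsOut = out.getSecond();    // activation gradient for the layer below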
      • calcRegularizationScore

        public double calcRegularizationScore(boolean backpropParamsOnly)
        Description copied from interface: Layer
        Calculate the regularization component of the score, for the parameters in this layer
        For example, the L1, L2 and/or weight decay components of the loss function
        Specified by:
        calcRegularizationScore in interface Layer
        Overrides:
        calcRegularizationScore in class BaseLayer<LayerConfT extends BasePretrainNetwork>
        Parameters:
        backpropParamsOnly - If true: calculate regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
        Returns:
        the regularization score for this layer's parameters
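        A usage sketch, assuming layer is an instance of this class and dataLoss
        is the data-fitting part of the score: the regularization component is
        simply added on top.

            double reg = layer.calcRegularizationScore(true);   // backprop params only
            double totalScore = dataLoss + reg;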