Class BatchNormalization

    • Field Detail

      • ONE_ON_2LOGE_10

        protected static final double ONE_ON_2LOGE_10
      • helperCountFail

        protected int helperCountFail
      • index

        protected int index
    • Method Detail

      • backpropGradient

        public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon,
                                                              LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Calculate the gradient relative to the error in the next layer
        Specified by:
        backpropGradient in interface Layer
        Overrides:
        backpropGradient in class BaseLayer<BatchNormalization>
        Parameters:
        epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
        workspaceMgr - Workspace manager
        Returns:
        Pair, where the Gradient is the gradient for this layer and the INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiplication by sigmaPrime(z). So, for a standard feed-forward layer, if this layer is layer L, then return.getSecond() == dC/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
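        In normal use this method is invoked internally during fitting, but it can also be called directly on a layer instance. The following is a minimal sketch of the call pattern, not taken from this API documentation: the network configuration, array shapes, and the Pair import path (which differs between DL4J releases) are illustrative assumptions.

        import org.deeplearning4j.nn.api.Layer;
        import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
        import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
        import org.deeplearning4j.nn.conf.layers.DenseLayer;
        import org.deeplearning4j.nn.conf.layers.OutputLayer;
        import org.deeplearning4j.nn.gradient.Gradient;
        import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
        import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
        import org.nd4j.linalg.activations.Activation;
        import org.nd4j.linalg.api.ndarray.INDArray;
        import org.nd4j.linalg.factory.Nd4j;
        import org.nd4j.linalg.lossfunctions.LossFunctions;
        import org.nd4j.common.primitives.Pair;   // older releases: org.nd4j.linalg.primitives.Pair

        public class BatchNormBackpropSketch {
            public static void main(String[] args) {
                // Hypothetical net: dense -> batch norm -> output; all sizes are arbitrary
                MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                        .list()
                        .layer(new DenseLayer.Builder().nIn(10).nOut(10).build())
                        .layer(new org.deeplearning4j.nn.conf.layers.BatchNormalization.Builder().nOut(10).build())
                        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                                .nIn(10).nOut(3).activation(Activation.SOFTMAX).build())
                        .build();
                MultiLayerNetwork net = new MultiLayerNetwork(conf);
                net.init();
                net.initGradientsView();                        // allocate the gradient view arrays used below

                Layer bn = net.getLayer(1);                     // the BatchNormalization layer instance
                LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();

                INDArray input = Nd4j.rand(16, 10);             // minibatch of 16 examples
                bn.setInput(input, mgr);
                bn.activate(true, mgr);                         // forward pass must precede backprop

                INDArray epsilon = Nd4j.rand(16, 10);           // dC/da arriving from the layer above
                Pair<Gradient, INDArray> p = bn.backpropGradient(epsilon, mgr);

                Gradient bnGradient = p.getFirst();             // gradients/updates for gamma, beta, mean, var
                INDArray epsilonBelow = p.getSecond();          // activation gradient passed to the layer below
            }
        }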
      • activate

        public INDArray activate(boolean training,
                                 LayerWorkspaceMgr workspaceMgr)
        Description copied from interface: Layer
        Perform a forward pass and return the activations array for the last input that was set
        Specified by:
        activate in interface Layer
        Overrides:
        activate in class BaseLayer<BatchNormalization>
        Parameters:
        training - training or test mode
        workspaceMgr - Workspace manager
        Returns:
        the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
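        For batch normalization the training flag is significant: with training = true the layer normalizes using the current minibatch statistics and updates its running mean/variance estimates, whereas training = false normalizes using the stored estimates. A short, hypothetical continuation of the backpropGradient sketch above (same assumed bn and mgr variables):

        INDArray x = Nd4j.rand(16, 10);

        bn.setInput(x, mgr);
        INDArray trainOut = bn.activate(true, mgr);    // minibatch mean/var; running estimates are updated

        bn.setInput(x, mgr);
        INDArray testOut = bn.activate(false, mgr);    // stored (running) mean/var estimates

        // The two outputs generally differ until the running estimates approach the data statistics
        System.out.println(trainOut.sub(testOut).norm2Number());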
      • isPretrainLayer

        public boolean isPretrainLayer()
        Description copied from interface: Layer
        Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
        Returns:
        true if the layer can be pretrained (using fit(INDArray)), false otherwise
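        For BatchNormalization this method returns false; the check is mainly useful in generic code that inspects a network's layers. Continuing the same hypothetical setup:

        // BatchNormalization is not an unsupervised/pretrainable layer
        System.out.println(net.getLayer(1).isPretrainLayer());   // prints false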
      • getShape

        public long[] getShape(INDArray x)
      • updaterDivideByMinibatch

        public boolean updaterDivideByMinibatch(String paramName)
        Description copied from interface: Trainable
        DL4J layers typically produce the sum of the gradients over the examples in the minibatch during the backward pass and, if required (i.e., when minibatch=true), then divide by the minibatch size.
        However, there are some exceptions, such as the batch norm mean/variance estimate parameters: these "gradients" are actually not gradients, but are updates to be applied directly to the parameter vector. Put another way, most gradients should be divided by the minibatch to get the average; some "gradients" are actually final updates already, and should not be divided by the minibatch size.
        Specified by:
        updaterDivideByMinibatch in interface Trainable
        Overrides:
        updaterDivideByMinibatch in class AbstractLayer<BatchNormalization>
        Parameters:
        paramName - Name of the parameter
        Returns:
        True if gradients should be divided by minibatch (most params); false otherwise (edge cases like batch norm mean/variance estimates)
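        A short sketch of how this distinction looks in code; the parameter names are the constants defined in BatchNormalizationParamInitializer, and bn is the same assumed layer instance as in the earlier sketches:

        // requires: import org.deeplearning4j.nn.params.BatchNormalizationParamInitializer;

        // Trainable parameters: the summed gradient is averaged over the minibatch
        boolean divGamma = bn.updaterDivideByMinibatch(BatchNormalizationParamInitializer.GAMMA);       // true
        boolean divBeta  = bn.updaterDivideByMinibatch(BatchNormalizationParamInitializer.BETA);        // true

        // Running mean/variance estimates: already complete updates, applied as-is rather than averaged
        boolean divMean  = bn.updaterDivideByMinibatch(BatchNormalizationParamInitializer.GLOBAL_MEAN); // false
        boolean divVar   = bn.updaterDivideByMinibatch(BatchNormalizationParamInitializer.GLOBAL_VAR);  // false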