Class Convolution

  • All Implemented Interfaces:
    Block
  • Direct Known Subclasses:
    Conv1d, Conv2d, Conv3d

    public abstract class Convolution
    extends AbstractBlock
    A convolution layer computes dot products over each channel of \(k\)-channel input data using a specified number of filters. Each filter contains \(k\) kernels, one per input channel, and the per-channel results are summed per filter; hence the number of filters determines the number of output channels of the convolution layer. A convolution layer can be configured further, most notably by the stride, the distance the kernel moves between successive applications within a channel, and the padding, which preserves the input size (width and/or height and/or depth) by adding a specified border to the sides of the input. A convolution layer extracts features of the input data, one representation per output channel, commonly known as a feature map or feature vector.
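    The interaction of kernel size, stride, padding, and dilation with the input size follows standard convolution arithmetic. The sketch below is plain Java and independent of this API; the class and method names are illustrative only:

```java
public class ConvOutputSize {
    // Output size along one spatial dimension:
    // out = floor((in + 2*pad - dilation*(kernel - 1) - 1) / stride) + 1
    static int outputSize(int in, int kernel, int stride, int pad, int dilation) {
        return (in + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1;
    }

    public static void main(String[] args) {
        // 28-wide input, 3-wide kernel, stride 1, padding 1: size preserved
        System.out.println(outputSize(28, 3, 1, 1, 1)); // 28
        // same kernel with stride 2: size halved
        System.out.println(outputSize(28, 3, 2, 1, 1)); // 14
    }
}
```

    With a 3-wide kernel, stride 1, and padding 1, the spatial size is preserved, which is a common configuration when stacking convolution layers.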

    While the convolution operation itself has existed in mathematics for a long time, in 1998 LeCun et al. implemented the very first convolution layers in a network called LeNet-5 for a character-recognition task; details of the network's implementation can be found in the LeNet-5 paper. While other approaches at the time relied on handcrafted features produced by a separate feature-extraction stage, the convolution layer performed feature extraction on its own with no human interference. This marked a new era of machine-extracted features, but it was not until the publication of the AlexNet paper in 2012 that convolutional neural networks, which as the name suggests rely heavily on convolution layers, came to prominence.

    Convolution layers are most commonly used in image-related tasks, where existing works have established their strong performance, and other fields are now incorporating them alongside or in place of earlier approaches; one example is time-series processing with 1-dimensional convolution layers. Because convolution processes every point in the input data, it is computationally expensive, so a GPU is strongly recommended over a CPU for faster performance. Note that it is also common to stack convolution layers with different numbers of output channels to obtain more representations of the input data.
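    The per-filter summation over input channels described above can be sketched as a minimal 1-dimensional convolution (technically cross-correlation, as in most deep-learning frameworks). This is plain Java; the class name and array layout are illustrative assumptions, not part of this API:

```java
public class Conv1dSketch {
    /**
     * Minimal 1-D convolution sketch with stride 1 and no padding.
     * input:   [channels][width]
     * kernels: [filters][channels][kernelWidth], one kernel per input channel
     * output:  [filters][width - kernelWidth + 1], per-channel products
     *          summed into a single output channel per filter
     */
    static float[][] conv1d(float[][] input, float[][][] kernels) {
        int channels = input.length;
        int width = input[0].length;
        int filters = kernels.length;
        int k = kernels[0][0].length;
        float[][] out = new float[filters][width - k + 1];
        for (int f = 0; f < filters; f++) {
            for (int x = 0; x + k <= width; x++) {
                float sum = 0f;
                for (int c = 0; c < channels; c++) {
                    for (int i = 0; i < k; i++) {
                        sum += input[c][x + i] * kernels[f][c][i];
                    }
                }
                out[f][x] = sum;
            }
        }
        return out;
    }
}
```

    Each output row is one feature map; adding filters adds output channels without changing the spatial arithmetic.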

    Current implementations of Convolution are Conv1d, with an input dimension of LayoutType.WIDTH; Conv2d, with input dimensions of LayoutType.WIDTH and LayoutType.HEIGHT; and Conv3d, with input dimensions of LayoutType.WIDTH, LayoutType.HEIGHT, and LayoutType.DEPTH. These implementations share the same core principle as any Convolution layer, differing only in the number of input dimensions each operates on, as denoted by ConvXd for X dimension(s).
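    Since the three implementations differ only in how many spatial dimensions they transform, their expected output sizes can be sketched with one dimension-agnostic helper. Again plain Java with illustrative names, not this library's API:

```java
public class ConvXdShapes {
    // For ConvXd, each of the X spatial dimensions is transformed
    // independently by the same arithmetic; the dimension count is the
    // only structural difference between Conv1d, Conv2d, and Conv3d.
    static long[] spatialOutput(long[] in, long[] kernel, long[] stride,
                                long[] pad, long[] dilation) {
        long[] out = new long[in.length];
        for (int d = 0; d < in.length; d++) {
            out[d] = (in[d] + 2 * pad[d] - dilation[d] * (kernel[d] - 1) - 1)
                     / stride[d] + 1;
        }
        return out;
    }
}
```

    Passing arrays of length 1, 2, or 3 corresponds to the Conv1d, Conv2d, and Conv3d cases respectively.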

    See Also:
    The D2L chapters on convolution
    • Field Detail

      • kernelShape

        protected Shape kernelShape
      • stride

        protected Shape stride
      • padding

        protected Shape padding
      • dilation

        protected Shape dilation
      • filters

        protected int filters
      • groups

        protected int groups
      • includeBias

        protected boolean includeBias
    • Method Detail

      • getExpectedLayout

        protected abstract LayoutType[] getExpectedLayout()
        Returns the expected layout of the input.
        Returns:
        the expected layout of the input
      • getStringLayout

        protected abstract java.lang.String getStringLayout()
        Returns the string representing the layout of the input.
        Returns:
        the string representing the layout of the input
      • numDimensions

        protected abstract int numDimensions()
        Returns the number of dimensions of the input.
        Returns:
        the number of dimensions of the input
      • beforeInitialize

        protected void beforeInitialize​(Shape... inputShapes)
        Performs any action necessary before initialization. For example, keep the input information or verify the layout.
        Overrides:
        beforeInitialize in class AbstractBaseBlock
        Parameters:
        inputShapes - the expected shapes of the input
      • getOutputShapes

        public Shape[] getOutputShapes​(Shape[] inputs)
        Returns the expected output shapes of the block for the specified input shapes.
        Parameters:
        inputs - the shapes of the inputs
        Returns:
        the expected output shapes of the block
      • loadMetadata

        public void loadMetadata​(byte loadVersion,
                                 java.io.DataInputStream is)
                          throws java.io.IOException,
                                 MalformedModelException
        Overwrite this to load additional metadata with the parameter values.

        If you overwrite AbstractBaseBlock.saveMetadata(DataOutputStream) or need to provide backward compatibility with older binary formats, you probably need to overwrite this as well. This default implementation checks whether the version number matches and throws a MalformedModelException if it does not. After that it restores the input shapes.

        Overrides:
        loadMetadata in class AbstractBaseBlock
        Parameters:
        loadVersion - the version used for loading this metadata.
        is - the input stream we are loading from
        Throws:
        java.io.IOException - loading failed
        MalformedModelException - data can be loaded but has wrong format
      • getKernelShape

        public Shape getKernelShape()
        Returns the shape of the kernel.
        Returns:
        the shape of the kernel
      • getStride

        public Shape getStride()
        Returns the stride of the convolution.
        Returns:
        the stride of the convolution
      • getPadding

        public Shape getPadding()
        Returns the padding along each dimension.
        Returns:
        the padding along each dimension
      • getDilation

        public Shape getDilation()
        Returns the dilation along each dimension.
        Returns:
        the dilation along each dimension
      • getFilters

        public int getFilters()
        Returns the required number of filters.
        Returns:
        the required number of filters
      • getGroups

        public int getGroups()
        Returns the number of group partitions.
        Returns:
        the number of group partitions
      • isIncludeBias

        public boolean isIncludeBias()
        Returns whether to include a bias vector.
        Returns:
        whether to include a bias vector