Class VariationalAutoencoder.Builder

    • Constructor Detail

      • Builder

        public Builder()
    • Method Detail

      • encoderLayerSizes

        public VariationalAutoencoder.Builder encoderLayerSizes​(int... encoderLayerSizes)
        Size of the encoder layers, in units. Each encoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers (set via decoderLayerSizes(int...)) are similar to those of the encoder layers.
        Parameters:
        encoderLayerSizes - Size of each encoder layer in the variational autoencoder
      • setEncoderLayerSizes

        public void setEncoderLayerSizes​(int... encoderLayerSizes)
        Size of the encoder layers, in units. Each encoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers (set via decoderLayerSizes(int...)) are similar to those of the encoder layers.
        Parameters:
        encoderLayerSizes - Size of each encoder layer in the variational autoencoder
      • decoderLayerSizes

        public VariationalAutoencoder.Builder decoderLayerSizes​(int... decoderLayerSizes)
        Size of the decoder layers, in units. Each decoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers are similar to those of the encoder layers (set via encoderLayerSizes(int...)).
        Parameters:
        decoderLayerSizes - Size of each decoder layer in the variational autoencoder
      • setDecoderLayerSizes

        public void setDecoderLayerSizes​(int... decoderLayerSizes)
        Size of the decoder layers, in units. Each decoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers are similar to those of the encoder layers (set via encoderLayerSizes(int...)).
        Parameters:
        decoderLayerSizes - Size of each decoder layer in the variational autoencoder
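        The following is a minimal sketch of mirrored encoder/decoder sizing. The nIn/nOut calls are inherited from the layer builder superclass rather than documented in this section, and the specific sizes are purely illustrative:

            import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;

            VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
                    .nIn(784)                      // input size, e.g. flattened 28x28 images
                    .nOut(32)                      // size of the latent space z
                    .encoderLayerSizes(256, 128)   // two encoder layers, each equivalent to a DenseLayer
                    .decoderLayerSizes(128, 256)   // decoder mirrors the encoder
                    .build();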
      • lossFunction

        public VariationalAutoencoder.Builder lossFunction​(IActivation outputActivationFn,
                                                           LossFunctions.LossFunction lossFunction)
        Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. Note that this does NOT follow the standard VAE design (as per Kingma & Welling), which assumes a probabilistic output, i.e., some p(x|z). It is, however, a valid network configuration that allows optimization of more traditional objectives such as mean squared error.
        Note: setting the loss function here will override any previously set reconstruction distribution.
        Parameters:
        outputActivationFn - Activation function for the output/reconstruction
        lossFunction - Loss function to use
      • lossFunction

        public VariationalAutoencoder.Builder lossFunction​(Activation outputActivationFn,
                                                           LossFunctions.LossFunction lossFunction)
        Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. Note that this does NOT follow the standard VAE design (as per Kingma & Welling), which assumes a probabilistic output, i.e., some p(x|z). It is, however, a valid network configuration that allows optimization of more traditional objectives such as mean squared error.
        Note: setting the loss function here will override any previously set reconstruction distribution.
        Parameters:
        outputActivationFn - Activation function for the output/reconstruction
        lossFunction - Loss function to use
      • lossFunction

        public VariationalAutoencoder.Builder lossFunction​(IActivation outputActivationFn,
                                                           ILossFunction lossFunction)
        Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. Note that this does NOT follow the standard VAE design (as per Kingma & Welling), which assumes a probabilistic output, i.e., some p(x|z). It is, however, a valid network configuration that allows optimization of more traditional objectives such as mean squared error.
        Note: setting the loss function here will override any previously set reconstruction distribution.
        Parameters:
        outputActivationFn - Activation function for the output/reconstruction
        lossFunction - Loss function to use
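        As a sketch, the reconstruction can be optimized under a plain mean squared error objective instead of a ReconstructionDistribution; the Activation and LossFunctions enums come from ND4J, and the layer sizes are illustrative:

            import org.nd4j.linalg.activations.Activation;
            import org.nd4j.linalg.lossfunctions.LossFunctions;

            VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
                    .encoderLayerSizes(256)
                    .decoderLayerSizes(256)
                    // non-probabilistic reconstruction: sigmoid output trained with MSE
                    .lossFunction(Activation.SIGMOID, LossFunctions.LossFunction.MSE)
                    .build();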
      • pzxActivationFn

        public VariationalAutoencoder.Builder pzxActivationFn​(IActivation activationFunction)
        Activation function for the input to P(z|data).
        Care should be taken with this, as some activation functions (relu, etc.) are not suitable because their output is restricted to the range [0, infinity).
        Parameters:
        activationFunction - Activation function for p(z|x)
      • pzxActivationFunction

        public VariationalAutoencoder.Builder pzxActivationFunction​(Activation activation)
        Activation function for the input to P(z|data).
        Care should be taken with this, as some activation functions (relu, etc.) are not suitable because their output is restricted to the range [0, infinity).
        Parameters:
        activation - Activation function for p(z|x)
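        For example, an unbounded activation such as identity avoids the range issue described above; this is only a sketch of the call, not a statement about the default setting:

            import org.nd4j.linalg.activations.Activation;

            VariationalAutoencoder.Builder builder = new VariationalAutoencoder.Builder()
                    .pzxActivationFunction(Activation.IDENTITY);  // output can take any real value, suiting p(z|x)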
      • numSamples

        public VariationalAutoencoder.Builder numSamples​(int numSamples)
        Set the number of samples per data point (from VAE state Z) used when doing pretraining. Default value: 1.

        This is parameter L from Kingma and Welling: "In our experiments we found that the number of samples L per datapoint can be set to 1 as long as the minibatch size M was large enough, e.g. M = 100."

        Parameters:
        numSamples - Number of samples per data point for pretraining
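        A sketch in the spirit of that recommendation, keeping L = 1 and relying on a reasonably large minibatch (the minibatch size itself is configured elsewhere, e.g. on the DataSetIterator):

            VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
                    .encoderLayerSizes(256)
                    .decoderLayerSizes(256)
                    .numSamples(1)   // one sample of z per data point during pretraining
                    .build();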