public class VariationalAutoencoder extends BasePretrainNetwork
See: Kingma & Welling, 2013: Auto-Encoding Variational Bayes - https://arxiv.org/abs/1312.6114
This implementation allows multiple encoder and decoder layers, the number and sizes of which can be set independently.
A note on scores during pretraining: This implementation minimizes the negative of the variational lower bound objective as described in Kingma & Welling; the mathematics in that paper is based on maximization of the variational lower bound instead. Thus, scores reported during pretraining in DL4J are the negative of the variational lower bound equation in the paper. The backpropagation and learning procedure is otherwise as described there.
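As an illustration, here is a minimal configuration sketch, assuming the 0.9.x-era DL4J API documented on this page (the layer sizes, seed, and activation chosen here are arbitrary examples, not recommendations):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(12345)
        .weightInit(WeightInit.XAVIER)
        .list()
        .layer(0, new VariationalAutoencoder.Builder()
                .nIn(784)                      // input size (e.g., 28x28 images)
                .nOut(32)                      // size of the latent space
                .encoderLayerSizes(256, 256)   // two encoder layers, 256 units each
                .decoderLayerSizes(256, 256)   // decoder structure is set independently
                .activation(Activation.LEAKYRELU)
                .build())
        .pretrain(true).backprop(false)        // the VAE is trained by layerwise pretraining
        .build();
```

With pretrain(true), the score reported while fitting is the negative variational lower bound described above, so lower scores are better.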
Modifier and Type | Class and Description |
---|---|
static class | VariationalAutoencoder.Builder |
Fields inherited from class BasePretrainNetwork: lossFunction, visibleBiasInit

Fields inherited from class FeedForwardLayer: nIn, nOut

Fields inherited from class Layer: activationFn, adamMeanDecay, adamVarDecay, biasInit, biasLearningRate, dist, epsilon, gradientNormalization, gradientNormalizationThreshold, iUpdater, l1, l1Bias, l2, l2Bias, learningRate, learningRateSchedule, momentum, momentumSchedule, rho, rmsDecay, updater, weightInit
Modifier and Type | Method and Description |
---|---|
double | getL1ByParam(String paramName) Get the L1 coefficient for the given parameter. |
double | getL2ByParam(String paramName) Get the L2 coefficient for the given parameter. |
double | getLearningRateByParam(String paramName) Get the (initial) learning rate coefficient for the given parameter. |
LayerMemoryReport | getMemoryReport(InputType inputType) This is a report of the estimated memory consumption for the given layer. |
ParamInitializer | initializer() |
Layer | instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams) |
boolean | isPretrainParam(String paramName) Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs. See the sketch below. |
Methods inherited from class FeedForwardLayer: getOutputType, getPreProcessorForInputType, setNIn

Methods inherited from class Layer: clone, getIUpdaterByParam, getUpdaterByParam, resetLayerDefaultConfig
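To make isPretrainParam concrete, the following sketch reuses conf from the example above and prints, for every parameter of the VAE layer, whether it is used only during layerwise pretraining. MultiLayerNetwork, paramTable(), and getLayerWiseConfigurations() are assumed from the same 0.9.x API:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

MultiLayerNetwork net = new MultiLayerNetwork(conf);  // conf from the earlier sketch
net.init();

// The layer configuration decides which parameters are pretraining-only.
// For a VAE, decoder parameters are not used during supervised backprop.
org.deeplearning4j.nn.conf.layers.Layer layerConf =
        net.getLayerWiseConfigurations().getConf(0).getLayer();
for (String paramKey : net.getLayer(0).paramTable().keySet()) {
    System.out.println(paramKey + " pretrain-only: " + layerConf.isPretrainParam(paramKey));
}
```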
public Layer instantiate(NeuralNetConfiguration conf, Collection<IterationListener> iterationListeners, int layerIndex, org.nd4j.linalg.api.ndarray.INDArray layerParamsView, boolean initializeParams)

Specified by: instantiate in class Layer

public ParamInitializer initializer()

Specified by: initializer in class Layer
public double getLearningRateByParam(String paramName)

Description copied from class: Layer
Overrides: getLearningRateByParam in class BasePretrainNetwork
Parameters: paramName - Parameter name

public double getL1ByParam(String paramName)

Description copied from class: Layer
Overrides: getL1ByParam in class BasePretrainNetwork
Parameters: paramName - Parameter name

public double getL2ByParam(String paramName)

Description copied from class: Layer
Overrides: getL2ByParam in class BasePretrainNetwork
Parameters: paramName - Parameter name

public boolean isPretrainParam(String paramName)

Description copied from class: Layer
Overrides: isPretrainParam in class BasePretrainNetwork
Parameters: paramName - Parameter name/key

public LayerMemoryReport getMemoryReport(InputType inputType)

Description copied from class: Layer
Specified by: getMemoryReport in class Layer
Parameters: inputType - Input type to the layer. Memory consumption is often a function of the input type.
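As a final sketch, getMemoryReport can be queried on a standalone layer configuration; InputType.feedForward and LayerMemoryReport are assumed from the org.deeplearning4j.nn.conf.inputs and org.deeplearning4j.nn.conf.memory packages of the same release:

```java
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.deeplearning4j.nn.conf.memory.LayerMemoryReport;

VariationalAutoencoder vae = new VariationalAutoencoder.Builder()
        .nIn(784).nOut(32)
        .encoderLayerSizes(256)
        .decoderLayerSizes(256)
        .build();

// Estimated memory use depends on the input type (size, minibatch, data type)
LayerMemoryReport report = vae.getMemoryReport(InputType.feedForward(784));
System.out.println(report);
```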