Class VariationalAutoencoder
- java.lang.Object
-
- org.deeplearning4j.nn.layers.variational.VariationalAutoencoder
-
- All Implemented Interfaces:
Serializable, Cloneable, Layer, Model, Trainable
public class VariationalAutoencoder extends Object implements Layer
- See Also:
- Serialized Form
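Example (not part of the original Javadoc): this internal layer implementation is normally created indirectly, by adding the corresponding configuration class (org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder) to a network configuration. A minimal sketch, assuming standard DL4J APIs; sizes and hyperparameters are illustrative only:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;

    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(12345)
            .updater(new Adam(1e-3))
            .list()
            .layer(new org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder.Builder()
                    .nIn(784)                    // input size (e.g., flattened MNIST images)
                    .nOut(32)                    // size of the latent space z
                    .encoderLayerSizes(256, 256) // corresponds to the encoderLayerSizes field
                    .decoderLayerSizes(256, 256) // corresponds to the decoderLayerSizes field
                    .pzxActivationFunction(Activation.IDENTITY)   // activation for p(z|x)
                    .reconstructionDistribution(new BernoulliReconstructionDistribution(Activation.SIGMOID))
                    .build())
            .build();
    MultiLayerNetwork net = new MultiLayerNetwork(conf);
    net.init();  // layer 0 of "net" is now an instance of this class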
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Field Summary
Fields
- protected CacheMode cacheMode
- protected NeuralNetConfiguration conf
- protected DataType dataType
- protected int[] decoderLayerSizes
- protected int[] encoderLayerSizes
- protected int epochCount
- protected Gradient gradient
- protected INDArray gradientsFlattened
- protected Map<String,INDArray> gradientViews
- protected int index
- protected INDArray input
- protected int iterationCount
- protected INDArray maskArray
- protected int numSamples
- protected ConvexOptimizer optimizer
- protected Map<String,INDArray> params
- protected INDArray paramsFlattened
- protected IActivation pzxActivationFn
- protected ReconstructionDistribution reconstructionDistribution
- protected double score
- protected Solver solver
- protected Collection<TrainingListener> trainingListeners
- protected Map<String,INDArray> weightNoiseParams
- protected boolean zeroedPretrainParamGradients
-
Constructor Summary
Constructors
- VariationalAutoencoder(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary
All Methods | Instance Methods | Concrete Methods
- INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
  Perform forward pass and return the activations array with the last set input
- INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
  Perform forward pass and return the activations array with the specified input
- void addListeners(TrainingListener... listeners)
  This method ADDS additional TrainingListeners to the existing listeners
- void allowInputModification(boolean allow)
  A performance optimization: mark whether the layer is allowed to modify its input array in-place.
- void applyConstraints(int iteration, int epoch)
  Apply any constraints to the model
- void assertInputSet(boolean backprop)
- Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
  Calculate the gradient relative to the error in the next layer
- int batchSize()
  The current input's batch size
- double calcRegularizationScore(boolean backpropParamsOnly)
  Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function
- void clear()
  Clear input
- void clearNoiseWeightParams()
- void close()
- void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
  Update the score
- NeuralNetConfiguration conf()
  The configuration for the neural network
- Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
  Feed forward the input mask array, setting in the layer as appropriate.
- void fit()
  All models have a fit method
- void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
  Fit the model to the given data
- INDArray generateAtMeanGivenZ(INDArray latentSpaceValues)
  Given specified values for the latent space as input (latent space being z in p(z|data)), generate output from P(x|z), where x = E[P(x|z)], i.e., return the mean value for the distribution P(x|z)
- INDArray generateRandomGivenZ(INDArray latentSpaceValues, LayerWorkspaceMgr workspaceMgr)
  Given specified values for the latent space as input (latent space being z in p(z|data)), randomly generate output x, where x ~ P(x|z)
- TrainingConfig getConfig()
- INDArray getGradientsViewArray()
- LayerHelper getHelper()
- int getIndex()
  Get the layer index.
- int getInputMiniBatchSize()
  Get current/last input mini-batch size, as set by setInputMiniBatchSize(int)
- Collection<TrainingListener> getListeners()
  Get the iteration listeners for this layer.
- INDArray getMaskArray()
- ConvexOptimizer getOptimizer()
  Returns this model's optimizer
- INDArray getParam(String param)
  Get the parameter
- protected INDArray getParamWithNoise(String param, boolean training, LayerWorkspaceMgr workspaceMgr)
- Gradient gradient()
  Get the gradient.
- Pair<Gradient,Double> gradientAndScore()
  Get the gradient and score
- boolean hasLossFunction()
  Does the reconstruction distribution have a loss function (such as mean squared error), or is it a standard probabilistic reconstruction distribution?
- void init()
  Init the model
- INDArray input()
  The input/feature matrix for the model
- boolean isPretrainLayer()
  Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.)
- boolean isPretrainParam(String param)
- protected VariationalAutoencoder layerConf()
- protected String layerId()
- long numParams()
  The number of parameters for the model
- long numParams(boolean backwards)
  The number of parameters for the model
- INDArray params()
  Parameters of the model (if any)
- Map<String,INDArray> paramTable()
  The param table
- Map<String,INDArray> paramTable(boolean backpropParamsOnly)
  Table of parameters by key, for backprop. For many models (dense layers, etc.), all parameters are backprop parameters
- INDArray preOutput(boolean training, LayerWorkspaceMgr workspaceMgr)
- INDArray reconstructionError(INDArray data)
  Return the reconstruction error for this variational autoencoder. NOTE (important): this method is used ONLY for VAEs that have a standard neural network loss function (i.e., an ILossFunction instance such as mean squared error) instead of a probabilistic reconstruction distribution P(x|z) for the reconstructions (as presented in the VAE architecture by Kingma and Welling). You can check whether the VAE has a loss function using hasLossFunction(). Consequently, the reconstruction error is a simple deterministic function (no Monte-Carlo sampling is required, unlike reconstructionProbability(INDArray, int) and reconstructionLogProbability(INDArray, int))
- INDArray reconstructionLogProbability(INDArray data, int numSamples)
  Return the log reconstruction probability given the specified number of samples. See reconstructionProbability(INDArray, int) for more details
- INDArray reconstructionProbability(INDArray data, int numSamples)
  Calculate the reconstruction probability, as described in An & Cho, 2015 - "Variational Autoencoder based Anomaly Detection using Reconstruction Probability" (Algorithm 4). The authors describe it as follows: "This is essentially the probability of the data being generated from a given latent variable drawn from the approximate posterior distribution." Specifically, for each example x in the input, calculate p(x).
- double score()
  The score for the model
- void setBackpropGradientsViewArray(INDArray gradients)
  Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- void setCacheMode(CacheMode mode)
  This method sets the given CacheMode for the current layer
- void setConf(NeuralNetConfiguration conf)
  Setter for the configuration
- void setIndex(int index)
  Set the layer index.
- void setInput(INDArray input, LayerWorkspaceMgr layerWorkspaceMgr)
  Set the layer input.
- void setInputMiniBatchSize(int size)
  Set current/last input mini-batch size. Used for score and gradient calculations.
- void setListeners(Collection<TrainingListener> listeners)
  Set the TrainingListeners for this model.
- void setListeners(TrainingListener... listeners)
  Set the TrainingListeners for this model.
- void setMaskArray(INDArray maskArray)
  Set the mask array.
- void setParam(String key, INDArray val)
  Set the parameter with a new ndarray
- void setParams(INDArray params)
  Set the parameters for this model.
- void setParamsViewArray(INDArray params)
  Set the initial parameters array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- void setParamTable(Map<String,INDArray> paramTable)
  Setter for the param table
- Layer.Type type()
  Returns the layer type
- void update(Gradient gradient)
  Update layer weights and biases with gradient change
- void update(INDArray gradient, String paramType)
  Perform one update applying the gradient
- boolean updaterDivideByMinibatch(String paramName)
  DL4J layers typically produce the sum of the gradients during the backward pass for each layer, and if required (if minibatch=true) then divide by the minibatch size. However, there are some exceptions, such as the batch norm mean/variance estimate parameters: these "gradients" are actually not gradients, but updates to be applied directly to the parameter vector.
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
getEpochCount, getIterationCount, setEpochCount, setIterationCount
-
Field Detail
-
input
protected INDArray input
-
paramsFlattened
protected INDArray paramsFlattened
-
gradientsFlattened
protected INDArray gradientsFlattened
-
conf
protected NeuralNetConfiguration conf
-
score
protected double score
-
optimizer
protected ConvexOptimizer optimizer
-
gradient
protected Gradient gradient
-
trainingListeners
protected Collection<TrainingListener> trainingListeners
-
index
protected int index
-
maskArray
protected INDArray maskArray
-
solver
protected Solver solver
-
encoderLayerSizes
protected int[] encoderLayerSizes
-
decoderLayerSizes
protected int[] decoderLayerSizes
-
reconstructionDistribution
protected ReconstructionDistribution reconstructionDistribution
-
pzxActivationFn
protected IActivation pzxActivationFn
-
numSamples
protected int numSamples
-
cacheMode
protected CacheMode cacheMode
-
dataType
protected DataType dataType
-
zeroedPretrainParamGradients
protected boolean zeroedPretrainParamGradients
-
iterationCount
protected int iterationCount
-
epochCount
protected int epochCount
-
-
Constructor Detail
-
VariationalAutoencoder
public VariationalAutoencoder(NeuralNetConfiguration conf, DataType dataType)
-
-
Method Detail
-
layerConf
protected VariationalAutoencoder layerConf()
-
setCacheMode
public void setCacheMode(CacheMode mode)
Description copied from interface: Layer
This method sets the given CacheMode for the current layer
- Specified by: setCacheMode in interface Layer
-
layerId
protected String layerId()
-
update
public void update(Gradient gradient)
Description copied from interface: Model
Update layer weights and biases with gradient change
-
update
public void update(INDArray gradient, String paramType)
Description copied from interface: Model
Perform one update applying the gradient
-
score
public double score()
Description copied from interface: Model
The score for the model
-
getParamWithNoise
protected INDArray getParamWithNoise(String param, boolean training, LayerWorkspaceMgr workspaceMgr)
-
computeGradientAndScore
public void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Update the score
- Specified by: computeGradientAndScore in interface Model
-
params
public INDArray params()
Description copied from interface: Model
Parameters of the model (if any)
-
getConfig
public TrainingConfig getConfig()
-
numParams
public long numParams()
Description copied from interface: Model
The number of parameters for the model
-
numParams
public long numParams(boolean backwards)
Description copied from interface: Model
The number of parameters for the model
-
setParams
public void setParams(INDArray params)
Description copied from interface: Model
Set the parameters for this model. This expects a linear ndarray, which will then be unpacked internally relative to the expected ordering of the model
-
setParamsViewArray
public void setParamsViewArray(INDArray params)
Description copied from interface: Model
Set the initial parameters array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by: setParamsViewArray in interface Model
- Parameters: params - a 1 x nParams row vector that is a view of the larger (MLN/CG) parameters array
-
getGradientsViewArray
public INDArray getGradientsViewArray()
- Specified by: getGradientsViewArray in interface Model
- Specified by: getGradientsViewArray in interface Trainable
- Returns: 1D gradients view array
-
setBackpropGradientsViewArray
public void setBackpropGradientsViewArray(INDArray gradients)
Description copied from interface: Model
Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by: setBackpropGradientsViewArray in interface Model
- Parameters: gradients - a 1 x nParams row vector that is a view of the larger (MLN/CG) gradients array
-
fit
public void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Fit the model to the given data
-
gradient
public Gradient gradient()
Description copied from interface: Model
Get the gradient. Note that this method will not calculate the gradient; it will instead return the gradient that has been computed previously. To calculate the gradient, see Model.computeGradientAndScore(LayerWorkspaceMgr).
-
gradientAndScore
public Pair<Gradient,Double> gradientAndScore()
Description copied from interface: Model
Get the gradient and score
- Specified by: gradientAndScore in interface Model
- Returns: the gradient and score
-
batchSize
public int batchSize()
Description copied from interface: Model
The current input's batch size
-
conf
public NeuralNetConfiguration conf()
Description copied from interface: Model
The configuration for the neural network
-
setConf
public void setConf(NeuralNetConfiguration conf)
Description copied from interface: Model
Setter for the configuration
-
input
public INDArray input()
Description copied from interface: Model
The input/feature matrix for the model
-
getOptimizer
public ConvexOptimizer getOptimizer()
Description copied from interface: Model
Returns this model's optimizer
- Specified by: getOptimizer in interface Model
- Returns: this model's optimizer
-
getParam
public INDArray getParam(String param)
Description copied from interface: Model
Get the parameter
-
paramTable
public Map<String,INDArray> paramTable()
Description copied from interface: Model
The param table
- Specified by: paramTable in interface Model
-
paramTable
public Map<String,INDArray> paramTable(boolean backpropParamsOnly)
Description copied from interface: Model
Table of parameters by key, for backprop. For many models (dense layers, etc.), all parameters are backprop parameters
- Specified by: paramTable in interface Model
- Specified by: paramTable in interface Trainable
- Parameters: backpropParamsOnly - If true, return backprop params only. If false, return all params (equivalent to paramTable())
- Returns: Parameter table
-
updaterDivideByMinibatch
public boolean updaterDivideByMinibatch(String paramName)
Description copied from interface: Trainable
DL4J layers typically produce the sum of the gradients during the backward pass for each layer, and if required (if minibatch=true) then divide by the minibatch size. However, there are some exceptions, such as the batch norm mean/variance estimate parameters: these "gradients" are actually not gradients, but updates to be applied directly to the parameter vector. Put another way, most gradients should be divided by the minibatch size to get the average; some "gradients" are actually final updates already, and should not be divided by the minibatch size.
- Specified by: updaterDivideByMinibatch in interface Trainable
- Parameters: paramName - Name of the parameter
- Returns: True if gradients should be divided by minibatch (most params); false otherwise (edge cases like batch norm mean/variance estimates)
-
setParamTable
public void setParamTable(Map<String,INDArray> paramTable)
Description copied from interface: Model
Setter for the param table
- Specified by: setParamTable in interface Model
-
setParam
public void setParam(String key, INDArray val)
Description copied from interface: Model
Set the parameter with a new ndarray
-
clear
public void clear()
Description copied from interface: Model
Clear input
-
applyConstraints
public void applyConstraints(int iteration, int epoch)
Description copied from interface: Model
Apply any constraints to the model
- Specified by: applyConstraints in interface Model
-
isPretrainParam
public boolean isPretrainParam(String param)
-
calcRegularizationScore
public double calcRegularizationScore(boolean backpropParamsOnly)
Description copied from interface: Layer
Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function
- Specified by: calcRegularizationScore in interface Layer
- Parameters: backpropParamsOnly - If true: calculate regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
- Returns: the regularization score for the parameters in this layer
-
type
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Specified by: backpropGradient in interface Layer
- Parameters:
  epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation
  workspaceMgr - Workspace manager
- Returns: Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer L, return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
-
preOutput
public INDArray preOutput(boolean training, LayerWorkspaceMgr workspaceMgr)
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Specified by: activate in interface Layer
- Parameters:
  training - training or test mode
  workspaceMgr - Workspace manager
- Returns: the activations (layer output) for the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
-
activate
public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the specified input
- Specified by: activate in interface Layer
- Parameters:
  input - the input to use
  training - train or test mode
  workspaceMgr - Workspace manager
- Returns: Activations array. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
-
getListeners
public Collection<TrainingListener> getListeners()
Description copied from interface: Layer
Get the iteration listeners for this layer.
- Specified by: getListeners in interface Layer
-
setListeners
public void setListeners(TrainingListener... listeners)
Description copied from interface: Layer
Set the TrainingListeners for this model. If any listeners have previously been set, they will be replaced by this method
- Specified by: setListeners in interface Layer
- Specified by: setListeners in interface Model
-
setListeners
public void setListeners(Collection<TrainingListener> listeners)
Description copied from interface: Layer
Set the TrainingListeners for this model. If any listeners have previously been set, they will be replaced by this method
- Specified by: setListeners in interface Layer
- Specified by: setListeners in interface Model
-
addListeners
public void addListeners(TrainingListener... listeners)
This method ADDS additional TrainingListeners to the existing listeners
- Specified by: addListeners in interface Model
- Parameters: listeners - listeners to add
-
setIndex
public void setIndex(int index)
Description copied from interface: Layer
Set the layer index.
-
getIndex
public int getIndex()
Description copied from interface: Layer
Get the layer index.
-
setInput
public void setInput(INDArray input, LayerWorkspaceMgr layerWorkspaceMgr)
Description copied from interface: Layer
Set the layer input.
-
setInputMiniBatchSize
public void setInputMiniBatchSize(int size)
Description copied from interface: Layer
Set current/last input mini-batch size. Used for score and gradient calculations. Mini-batch size may differ from getInput().size(0) due to reshaping operations - for example, when using RNNs with DenseLayer and OutputLayer. Called automatically during the forward pass.
- Specified by: setInputMiniBatchSize in interface Layer
-
getInputMiniBatchSize
public int getInputMiniBatchSize()
Description copied from interface: Layer
Get current/last input mini-batch size, as set by setInputMiniBatchSize(int)
- Specified by: getInputMiniBatchSize in interface Layer
- See Also: Layer.setInputMiniBatchSize(int)
-
setMaskArray
public void setMaskArray(INDArray maskArray)
Description copied from interface: Layer
Set the mask array. Note: in general, Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.
- Specified by: setMaskArray in interface Layer
- Parameters: maskArray - Mask array to set
-
getMaskArray
public INDArray getMaskArray()
- Specified by: getMaskArray in interface Layer
-
isPretrainLayer
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.)
- Specified by: isPretrainLayer in interface Layer
- Returns: true if the layer can be pretrained (using fit(INDArray)), false otherwise
-
clearNoiseWeightParams
public void clearNoiseWeightParams()
- Specified by: clearNoiseWeightParams in interface Layer
-
allowInputModification
public void allowInputModification(boolean allow)
Description copied from interface: Layer
A performance optimization: mark whether the layer is allowed to modify its input array in-place. In many cases this is totally safe; in others, the input array will be shared by multiple layers, and hence it is not safe to modify the input array. This is usually used by ops such as dropout.
- Specified by: allowInputModification in interface Layer
- Parameters: allow - If true: the input array is safe to modify. If false: the input array should be copied before it is modified (i.e., in-place modifications are unsafe)
-
feedForwardMaskArray
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently - for example, bidirectional RNNs and normal RNNs operate differently with masks: the former sets activations to 0 outside of the data-present region (and keeps the mask active for future layers, such as dense layers), whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case. This is also used, for example, for networks that contain global pooling layers, arbitrary preprocessors, etc.
- Specified by: feedForwardMaskArray in interface Layer
- Parameters:
  maskArray - Mask array to set
  currentMaskState - Current state of the mask - see MaskState
  minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
- Returns: New mask array after this layer, along with the new mask state.
-
getHelper
public LayerHelper getHelper()
-
fit
public void fit()
Description copied from interface: Model
All models have a fit method
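Usage sketch (not from the original Javadoc): this layer's fit methods are normally invoked indirectly. A typical pattern, assuming a MultiLayerNetwork "net" whose layer 0 is this VAE and a DataSetIterator "iterator" over the features, is unsupervised layerwise pretraining:

    // Unsupervised fit of the VAE layer; labels in the iterator are ignored
    for (int epoch = 0; epoch < 10; epoch++) {   // epoch count is illustrative
        net.pretrainLayer(0, iterator);
        iterator.reset();
    }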
-
reconstructionProbability
public INDArray reconstructionProbability(INDArray data, int numSamples)
Calculate the reconstruction probability, as described in An & Cho, 2015 - "Variational Autoencoder based Anomaly Detection using Reconstruction Probability" (Algorithm 4)
The authors describe it as follows: "This is essentially the probability of the data being generated from a given latent variable drawn from the approximate posterior distribution."
Specifically, for each example x in the input, calculate p(x). Note however that p(x) is a stochastic (Monte-Carlo) estimate of the true p(x), based on the specified number of samples. More samples will produce a more accurate (lower variance) estimate of the true p(x) for the current model parameters.
Internally uses reconstructionLogProbability(INDArray, int) for the actual implementation. That method may be more numerically stable in some cases.
The returned array is a column vector of reconstruction probabilities, one for each example. Thus, reconstruction probabilities can (and, for efficiency, should) be calculated in a batched manner.
- Parameters:
  data - The data for which to calculate the reconstruction probability
  numSamples - Number of samples on which to base the reconstruction probability
- Returns: Column vector of reconstruction probabilities for each example (shape: [numExamples,1])
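Usage sketch (not from the original Javadoc) for the anomaly-detection pattern described above: score each example and flag those with low reconstruction probability. The network "net", the "features" array, and the threshold are assumptions for illustration:

    // Obtain the (trained) VAE layer from the network
    org.deeplearning4j.nn.layers.variational.VariationalAutoencoder vae =
            (org.deeplearning4j.nn.layers.variational.VariationalAutoencoder) net.getLayer(0);

    // Monte-Carlo estimate of p(x), 32 samples per example; shape [numExamples, 1]
    INDArray probabilities = vae.reconstructionProbability(features, 32);

    for (long i = 0; i < probabilities.size(0); i++) {
        if (probabilities.getDouble(i, 0) < 0.01) {   // illustrative threshold
            System.out.println("Example " + i + " flagged as anomalous");
        }
    }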
-
reconstructionLogProbability
public INDArray reconstructionLogProbability(INDArray data, int numSamples)
Return the log reconstruction probability given the specified number of samples.
See reconstructionProbability(INDArray, int) for more details
- Parameters:
  data - The data for which to calculate the log reconstruction probability
  numSamples - Number of samples on which to base the reconstruction probability
- Returns: Column vector of reconstruction log probabilities for each example (shape: [numExamples,1])
-
generateAtMeanGivenZ
public INDArray generateAtMeanGivenZ(INDArray latentSpaceValues)
Given specified values for the latent space as input (latent space being z in p(z|data)), generate output from P(x|z), where x = E[P(x|z)]
i.e., return the mean value for the distribution P(x|z)
- Parameters: latentSpaceValues - Values for the latent space. size(1) must equal the nOut configuration parameter
- Returns: Sample of data: E[P(x|z)]
-
generateRandomGivenZ
public INDArray generateRandomGivenZ(INDArray latentSpaceValues, LayerWorkspaceMgr workspaceMgr)
Given specified values for the latent space as input (latent space being z in p(z|data)), randomly generate output x, where x ~ P(x|z)
- Parameters: latentSpaceValues - Values for the latent space. size(1) must equal the nOut configuration parameter
- Returns: Sample of data: x ~ P(x|z)
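Usage sketch (not from the original Javadoc) covering both generate methods, assuming a trained VAE layer "vae" (obtained as in the earlier sketch) with a 2-dimensional latent space (nOut = 2):

    // Two latent-space points; shape [numExamples, nOut]
    INDArray latent = Nd4j.create(new double[][] {{0.0, 0.0}, {1.5, -0.5}});

    // Deterministic: the mean of P(x|z) for each latent point
    INDArray means = vae.generateAtMeanGivenZ(latent);

    // Stochastic: a random sample x ~ P(x|z). Outside of a network's workspace
    // scope, a no-workspace manager can be passed:
    INDArray samples = vae.generateRandomGivenZ(latent, LayerWorkspaceMgr.noWorkspaces());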
-
hasLossFunction
public boolean hasLossFunction()
Does the reconstruction distribution have a loss function (such as mean squared error) or is it a standard probabilistic reconstruction distribution?
-
reconstructionError
public INDArray reconstructionError(INDArray data)
Return the reconstruction error for this variational autoencoder.
NOTE (important): this method is used ONLY for VAEs that have a standard neural network loss function (i.e., an ILossFunction instance such as mean squared error) instead of using a probabilistic reconstruction distribution P(x|z) for the reconstructions (as presented in the VAE architecture by Kingma and Welling).
You can check whether the VAE has a loss function using hasLossFunction()
Consequently, the reconstruction error is a simple deterministic function (no Monte-Carlo sampling is required, unlike reconstructionProbability(INDArray, int) and reconstructionLogProbability(INDArray, int))
- Parameters: data - The data on which to calculate the reconstruction error
- Returns: Column vector of reconstruction errors for each example (shape: [numExamples,1])
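Usage sketch (not from the original Javadoc): selecting between the deterministic and the probabilistic scoring methods based on how the layer was configured. "vae" and "features" are assumed to exist as in the earlier sketches:

    INDArray scores;
    if (vae.hasLossFunction()) {
        // VAE was configured with a standard ILossFunction (e.g., MSE): deterministic
        scores = vae.reconstructionError(features);
    } else {
        // Probabilistic reconstruction distribution: stochastic Monte-Carlo estimate
        scores = vae.reconstructionLogProbability(features, 16);
    }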
-
assertInputSet
public void assertInputSet(boolean backprop)
-