Class BasePretrainNetwork
java.lang.Object
  org.deeplearning4j.nn.conf.layers.Layer
    org.deeplearning4j.nn.conf.layers.BaseLayer
      org.deeplearning4j.nn.conf.layers.FeedForwardLayer
        org.deeplearning4j.nn.conf.layers.BasePretrainNetwork

All Implemented Interfaces:
  Serializable, Cloneable, TrainingConfig

Direct Known Subclasses:
  AutoEncoder, VariationalAutoencoder
public abstract class BasePretrainNetwork extends FeedForwardLayer
See Also:
  Serialized Form

Nested Class Summary

  Modifier and Type    Class
  static class         BasePretrainNetwork.Builder<T extends BasePretrainNetwork.Builder<T>>

Field Summary

  Modifier and Type                        Field
  protected LossFunctions.LossFunction     lossFunction
  protected double                         visibleBiasInit
Fields inherited from class org.deeplearning4j.nn.conf.layers.FeedForwardLayer
  nIn, nOut, timeDistributedFormat

Fields inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
  activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise

Fields inherited from class org.deeplearning4j.nn.conf.layers.Layer
  constraints, iDropout, layerName

Constructor Summary

  Constructor
  BasePretrainNetwork(BasePretrainNetwork.Builder builder)
Method Summary

  Modifier and Type    Method
  boolean              isPretrainParam(String paramName)
      Is the specified parameter a layerwise pretraining-only parameter? For example,
      visible bias params in an autoencoder (or decoder params in a variational
      autoencoder) aren't used during supervised backprop. Layers with no pretrainable
      parameters (such as DenseLayer) will return false for all (valid) inputs.
Methods inherited from class org.deeplearning4j.nn.conf.layers.FeedForwardLayer
  getOutputType, getPreProcessorForInputType, setNIn

Methods inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
  clone, getGradientNormalization, getRegularizationByParam, getUpdaterByParam, resetLayerDefaultConfig

Methods inherited from class org.deeplearning4j.nn.conf.layers.Layer
  getMemoryReport, initializeConstraints, initializer, instantiate, setDataType

Methods inherited from class java.lang.Object
  equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.deeplearning4j.nn.api.TrainingConfig
  getGradientNormalizationThreshold, getLayerName
Field Detail

lossFunction
  protected LossFunctions.LossFunction lossFunction

visibleBiasInit
  protected double visibleBiasInit
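The sketch below (not part of the original page) shows how these two protected fields are typically populated through a concrete subclass builder such as AutoEncoder.Builder, which extends BasePretrainNetwork.Builder. The builder method names lossFunction(...) and visibleBiasInit(...) are assumed to mirror the field names, as is conventional for DL4J layer builders.

import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class PretrainFieldConfigSketch {
    public static void main(String[] args) {
        // AutoEncoder is one of the direct known subclasses listed above.
        AutoEncoder layer = new AutoEncoder.Builder()
                .nIn(784)                                      // from FeedForwardLayer.Builder
                .nOut(250)
                .lossFunction(LossFunctions.LossFunction.MSE)  // assumed to back the protected lossFunction field
                .visibleBiasInit(0.1)                          // assumed to back the protected visibleBiasInit field
                .build();
        System.out.println(layer);
    }
}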
Constructor Detail

BasePretrainNetwork
  public BasePretrainNetwork(BasePretrainNetwork.Builder builder)
Method Detail

isPretrainParam
  public boolean isPretrainParam(String paramName)

  Description copied from class: Layer
  Is the specified parameter a layerwise pretraining-only parameter? For example, visible
  bias params in an autoencoder (or decoder params in a variational autoencoder) aren't
  used during supervised backprop. Layers with no pretrainable parameters (such as
  DenseLayer) will return false for all (valid) inputs.

  Specified by:
    isPretrainParam in interface TrainingConfig
  Overrides:
    isPretrainParam in class FeedForwardLayer
  Parameters:
    paramName - Parameter name/key
  Returns:
    True if the parameter is for layerwise pretraining only, false otherwise
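A minimal usage sketch (not part of the original page), assuming the conventional DL4J parameter keys "W", "b", and "vb" (visible bias) for an AutoEncoder layer; under that assumption only the visible-bias key is reported as a pretrain-only parameter.

import org.deeplearning4j.nn.conf.layers.AutoEncoder;

public class IsPretrainParamSketch {
    public static void main(String[] args) {
        AutoEncoder layer = new AutoEncoder.Builder().nIn(10).nOut(5).build();

        // Weights and hidden bias are also used during supervised backprop.
        System.out.println(layer.isPretrainParam("W"));   // expected: false
        System.out.println(layer.isPretrainParam("b"));   // expected: false

        // The visible bias is only used for layerwise pretraining (reconstruction).
        System.out.println(layer.isPretrainParam("vb"));  // expected: true
    }
}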