Class NoParamLayer

java.lang.Object
    org.deeplearning4j.nn.conf.layers.Layer
        org.deeplearning4j.nn.conf.layers.NoParamLayer

All Implemented Interfaces:
    Serializable, Cloneable, TrainingConfig

Direct Known Subclasses:
    ActivationLayer, BaseUpsamplingLayer, Cropping1D, Cropping2D, Cropping3D, GlobalPoolingLayer, MaskLayer, SpaceToBatchLayer, SpaceToDepthLayer, Subsampling3DLayer, SubsamplingLayer, ZeroPadding1DLayer, ZeroPadding3DLayer, ZeroPaddingLayer

public abstract class NoParamLayer
extends Layer

See Also:
    Serialized Form
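NoParamLayer itself defines no trainable parameters; its subclasses (activation, pooling, padding, cropping and similar layers) transform activations without adding weights of their own. As a rough illustration (not part of this Javadoc, and assuming a recent DL4J version where ListBuilder.layer(Layer) is available), a network using ActivationLayer, one of the direct subclasses listed above:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    // ActivationLayer extends NoParamLayer: it applies an activation function
    // but contributes no weights or biases to the model's parameter count.
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            .layer(new DenseLayer.Builder().nIn(784).nOut(128).build())
            .layer(new ActivationLayer.Builder().activation(Activation.RELU).build())
            .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .activation(Activation.SOFTMAX).nIn(128).nOut(10).build())
            .build();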
Nested Class Summary

Nested classes/interfaces inherited from class org.deeplearning4j.nn.conf.layers.Layer:
    Layer.Builder<T extends Layer.Builder<T>>

Field Summary

Fields inherited from class org.deeplearning4j.nn.conf.layers.Layer:
    constraints, iDropout, layerName

Constructor Summary

protected NoParamLayer(Layer.Builder builder)
Method Summary

GradientNormalization getGradientNormalization()

double getGradientNormalizationThreshold()

List<Regularization> getRegularizationByParam(String paramName)
    Get the regularization types (l1/l2/weight decay) for the given parameter.

ParamInitializer initializer()

boolean isPretrainParam(String paramName)
    Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs.

void setNIn(InputType inputType, boolean override)
    Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.
Methods inherited from class org.deeplearning4j.nn.conf.layers.Layer:
    clone, getMemoryReport, getOutputType, getPreProcessorForInputType, getUpdaterByParam, initializeConstraints, instantiate, resetLayerDefaultConfig, setDataType

Methods inherited from class java.lang.Object:
    equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.deeplearning4j.nn.api.TrainingConfig:
    getLayerName
Constructor Detail

NoParamLayer

protected NoParamLayer(Layer.Builder builder)
Method Detail

initializer

public ParamInitializer initializer()

Specified by:
    initializer in class Layer
Returns:
    The parameter initializer for this model
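A minimal sketch (not from this Javadoc, and assuming ParamInitializer exposes a numParams(Layer) overload): the initializer of a parameter-free layer should count zero parameters.

    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.nd4j.linalg.activations.Activation;

    // ActivationLayer is a concrete NoParamLayer subclass.
    ActivationLayer layer = new ActivationLayer.Builder().activation(Activation.RELU).build();
    long n = layer.initializer().numParams(layer);   // expected: 0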
setNIn

public void setNIn(InputType inputType, boolean override)

Description copied from class: Layer
    Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.
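Since a layer without parameters has no nIn of its own to infer, this override is presumably a no-op for most subclasses; that behaviour is an assumption, not stated above. A sketch of how the method is called during configuration:

    import org.deeplearning4j.nn.conf.inputs.InputType;
    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.nd4j.linalg.activations.Activation;

    ActivationLayer layer = new ActivationLayer.Builder().activation(Activation.RELU).build();
    // With no weights there is no nIn to set; this call is expected to leave
    // the configuration unchanged (assumption about the default behaviour).
    layer.setNIn(InputType.feedForward(128), true);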
getRegularizationByParam

public List<Regularization> getRegularizationByParam(String paramName)

Description copied from class: Layer
    Get the regularization types (l1/l2/weight decay) for the given parameter. Different parameters may have different regularization types.

Specified by:
    getRegularizationByParam in interface TrainingConfig
Specified by:
    getRegularizationByParam in class Layer
Parameters:
    paramName - Parameter name ("W", "b", etc.)
Returns:
    Regularization types (if any) for the specified parameter
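A sketch of the expected behaviour for a parameter-free layer (an assumption, since the Javadoc above does not pin down the return value): with no parameters there is nothing to regularize, so a null or empty result should be treated as "no regularization".

    import java.util.List;
    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.regularization.Regularization;

    ActivationLayer layer = new ActivationLayer.Builder().activation(Activation.RELU).build();
    // Whether this yields null or an empty list is an implementation detail.
    List<Regularization> reg = layer.getRegularizationByParam("W");
    boolean regularized = reg != null && !reg.isEmpty();   // expected: false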
getGradientNormalization

public GradientNormalization getGradientNormalization()

Returns:
    The gradient normalization configuration

getGradientNormalizationThreshold

public double getGradientNormalizationThreshold()

Returns:
    The gradient normalization threshold
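For a layer with no parameters there are no parameter gradients to normalize, so one would expect GradientNormalization.None here and a threshold that is never applied; that is an assumption, not stated in this Javadoc. A sketch of querying both values:

    import org.deeplearning4j.nn.conf.GradientNormalization;
    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.nd4j.linalg.activations.Activation;

    ActivationLayer layer = new ActivationLayer.Builder().activation(Activation.RELU).build();
    GradientNormalization gn = layer.getGradientNormalization();   // expected: None (assumption)
    double threshold = layer.getGradientNormalizationThreshold();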
isPretrainParam

public boolean isPretrainParam(String paramName)

Description copied from class: Layer
    Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs.

Specified by:
    isPretrainParam in interface TrainingConfig
Specified by:
    isPretrainParam in class Layer
Parameters:
    paramName - Parameter name/key
Returns:
    True if the parameter is for layerwise pretraining only, false otherwise
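A parameter-free layer has no pretrain-only parameters. Note that the contract above only covers valid parameter names, and this layer has none, so a defensive caller might allow for an implementation that throws rather than returns false (an assumption, not stated in this Javadoc):

    import org.deeplearning4j.nn.conf.layers.ActivationLayer;
    import org.nd4j.linalg.activations.Activation;

    ActivationLayer layer = new ActivationLayer.Builder().activation(Activation.RELU).build();
    boolean pretrain;
    try {
        pretrain = layer.isPretrainParam("W");
    } catch (UnsupportedOperationException e) {
        // No parameters at all, so no parameter name is valid here (assumption).
        pretrain = false;
    }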