Class ConvolutionLayer

- java.lang.Object
  - org.deeplearning4j.nn.conf.layers.Layer
    - org.deeplearning4j.nn.conf.layers.BaseLayer
      - org.deeplearning4j.nn.conf.layers.FeedForwardLayer
        - org.deeplearning4j.nn.conf.layers.ConvolutionLayer

- All Implemented Interfaces:
  Serializable, Cloneable, TrainingConfig
- Direct Known Subclasses:
  Convolution1DLayer, Convolution2D, Convolution3D, Deconvolution2D, Deconvolution3D, DepthwiseConvolution2D, SeparableConvolution2D

public class ConvolutionLayer extends FeedForwardLayer

- See Also:
  Serialized Form
Nested Class Summary

Nested Classes

- static class ConvolutionLayer.AlgoMode
  The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these may be very memory intensive, so if unusual errors occur when using cuDNN, please try the "NO_WORKSPACE" mode.
- static class ConvolutionLayer.BaseConvBuilder<T extends ConvolutionLayer.BaseConvBuilder<T>>
- static class ConvolutionLayer.Builder
- static class ConvolutionLayer.BwdDataAlgo
  The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
- static class ConvolutionLayer.BwdFilterAlgo
  The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
- static class ConvolutionLayer.FwdAlgo
  The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED".
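As a sketch of how these nested enums are typically used (assuming the standard ConvolutionLayer.Builder API and deeplearning4j on the classpath), the cuDNN algorithm mode can be selected while configuring a layer:

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class AlgoModeExample {
    public static void main(String[] args) {
        // NO_WORKSPACE uses less memory; PREFER_FASTEST (the default) may be
        // faster but can fail with memory-related cuDNN errors on large inputs.
        ConvolutionLayer layer = new ConvolutionLayer.Builder(3, 3)
                .nIn(1)
                .nOut(16)
                .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
                .build();
        System.out.println(layer);
    }
}
```

This is a configuration sketch rather than a complete training example; on CPU-only backends the cuDNN settings are simply ignored.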
-
Field Summary

Fields

- protected CNN2DFormat cnn2dDataFormat
- protected ConvolutionMode convolutionMode
- protected ConvolutionLayer.AlgoMode cudnnAlgoMode
  Defaults to "PREFER_FASTEST", but "NO_WORKSPACE" uses less memory.
- protected boolean cudnnAllowFallback
- protected ConvolutionLayer.BwdDataAlgo cudnnBwdDataAlgo
- protected ConvolutionLayer.BwdFilterAlgo cudnnBwdFilterAlgo
- protected ConvolutionLayer.FwdAlgo cudnnFwdAlgo
- protected int[] dilation
- protected boolean hasBias
- protected int[] kernelSize
- protected int[] padding
- protected int[] stride
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.FeedForwardLayer
nIn, nOut, timeDistributedFormat
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
activationFn, biasInit, biasUpdater, gainInit, gradientNormalization, gradientNormalizationThreshold, iUpdater, regularization, regularizationBias, weightInitFn, weightNoise
-
Fields inherited from class org.deeplearning4j.nn.conf.layers.Layer
constraints, iDropout, layerName
-
-
Constructor Summary

Constructors

- protected ConvolutionLayer(ConvolutionLayer.BaseConvBuilder<?> builder)
  nIn in the input layer is the number of channels; nOut is the number of filters to be used in the net, in other words the output channels. The builder specifies the filter/kernel size, the stride and the padding. The pooling layer takes the kernel size.
-
Method Summary

All Methods, Instance Methods, Concrete Methods

- ConvolutionLayer clone()
- LayerMemoryReport getMemoryReport(InputType inputType)
  This is a report of the estimated memory consumption for the given layer.
- InputType getOutputType(int layerIndex, InputType inputType)
  For a given type of input to this layer, what is the type of the output?
- InputPreProcessor getPreProcessorForInputType(InputType inputType)
  For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required, otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor.
- boolean hasBias()
- ParamInitializer initializer()
- Layer instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType)
- void setNIn(InputType inputType, boolean override)
  Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.

- Methods inherited from class org.deeplearning4j.nn.conf.layers.FeedForwardLayer
  isPretrainParam
- Methods inherited from class org.deeplearning4j.nn.conf.layers.BaseLayer
  getGradientNormalization, getRegularizationByParam, getUpdaterByParam, resetLayerDefaultConfig
- Methods inherited from class org.deeplearning4j.nn.conf.layers.Layer
  initializeConstraints, setDataType
- Methods inherited from class java.lang.Object
  equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
- Methods inherited from interface org.deeplearning4j.nn.api.TrainingConfig
  getGradientNormalizationThreshold, getLayerName
-
Field Detail
-
hasBias
protected boolean hasBias
-
convolutionMode
protected ConvolutionMode convolutionMode
-
dilation
protected int[] dilation
-
kernelSize
protected int[] kernelSize
-
stride
protected int[] stride
-
padding
protected int[] padding
-
cudnnAllowFallback
protected boolean cudnnAllowFallback
-
cnn2dDataFormat
protected CNN2DFormat cnn2dDataFormat
-
cudnnAlgoMode
protected ConvolutionLayer.AlgoMode cudnnAlgoMode
Defaults to "PREFER_FASTEST", but "NO_WORKSPACE" uses less memory.
-
cudnnFwdAlgo
protected ConvolutionLayer.FwdAlgo cudnnFwdAlgo
-
cudnnBwdFilterAlgo
protected ConvolutionLayer.BwdFilterAlgo cudnnBwdFilterAlgo
-
cudnnBwdDataAlgo
protected ConvolutionLayer.BwdDataAlgo cudnnBwdDataAlgo
-
-
Constructor Detail
-
ConvolutionLayer
protected ConvolutionLayer(ConvolutionLayer.BaseConvBuilder<?> builder)
nIn in the input layer is the number of channels; nOut is the number of filters to be used in the net, in other words the output channels. The builder specifies the filter/kernel size, the stride and the padding. The pooling layer takes the kernel size.
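The constructor itself is protected; layers are normally created through the builder. A minimal configuration sketch (assuming the usual ConvolutionLayer.Builder API and deeplearning4j on the classpath):

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class ConvLayerConfigExample {
    public static void main(String[] args) {
        // 5x5 kernel, 1 input channel (e.g. grayscale), 20 filters,
        // stride 1 and no padding -- the builder parameters described above.
        ConvolutionLayer conv = new ConvolutionLayer.Builder(5, 5)
                .nIn(1)
                .nOut(20)
                .stride(1, 1)
                .padding(0, 0)
                .build();
        System.out.println(conv);
    }
}
```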
-
-
Method Detail
-
hasBias
public boolean hasBias()
-
clone
public ConvolutionLayer clone()
-
instantiate
public Layer instantiate(NeuralNetConfiguration conf, Collection<TrainingListener> trainingListeners, int layerIndex, INDArray layerParamsView, boolean initializeParams, DataType networkDataType)
- Specified by:
  instantiate in class Layer
-
initializer
public ParamInitializer initializer()
- Specified by:
  initializer in class Layer
- Returns:
- The parameter initializer for this model
-
getOutputType

public InputType getOutputType(int layerIndex, InputType inputType)

Description copied from class: Layer
For a given type of input to this layer, what is the type of the output?

- Overrides:
  getOutputType in class FeedForwardLayer
- Parameters:
  layerIndex - Index of the layer
  inputType - Type of input for the layer
- Returns:
  Type of output from the layer
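The output type follows standard convolution shape arithmetic. As an illustrative stand-alone sketch (the helper below is hypothetical, not part of the DL4J API), for ConvolutionMode.Truncate with no dilation the spatial output size is (in - kernel + 2 * padding) / stride + 1:

```java
public class ConvShape {
    // Hypothetical helper mirroring the shape arithmetic behind getOutputType
    // for ConvolutionMode.Truncate (integer division truncates, hence the name):
    //   out = (in - kernel + 2 * padding) / stride + 1
    static int outputSize(int in, int kernel, int stride, int padding) {
        return (in - kernel + 2 * padding) / stride + 1;
    }

    public static void main(String[] args) {
        // A 28x28 input with a 5x5 kernel, stride 1, no padding -> 24x24
        System.out.println(outputSize(28, 5, 1, 0));
    }
}
```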
-
setNIn

public void setNIn(InputType inputType, boolean override)

Description copied from class: Layer
Set the nIn value (number of inputs, or input channels for CNNs) based on the given input type.

- Overrides:
  setNIn in class FeedForwardLayer
- Parameters:
  inputType - Input type for this layer
  override - If false: only set the nIn value if it's not already set. If true: set it regardless of whether it's already set or not.
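In practice this method is usually invoked indirectly: calling setInputType on the network configuration lets DL4J infer nIn from the input's channel count. A sketch, assuming the standard MultiLayerConfiguration builder API:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class SetNInExample {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // nIn is deliberately omitted on the layer...
                .layer(new ConvolutionLayer.Builder(5, 5).nOut(20).build())
                // ...setInputType triggers setNIn, inferring nIn = 1 channel
                .setInputType(InputType.convolutionalFlat(28, 28, 1))
                .build();
        System.out.println(conf.toJson());
    }
}
```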
-
getPreProcessorForInputType

public InputPreProcessor getPreProcessorForInputType(InputType inputType)

Description copied from class: Layer
For the given type of input to this layer, what preprocessor (if any) is required? Returns null if no preprocessor is required, otherwise returns an appropriate InputPreProcessor for this layer, such as a CnnToFeedForwardPreProcessor.

- Overrides:
  getPreProcessorForInputType in class FeedForwardLayer
- Parameters:
  inputType - InputType to this layer
- Returns:
  Null if no preprocessor is required, otherwise the type of preprocessor necessary for this layer/input combination
-
getMemoryReport

public LayerMemoryReport getMemoryReport(InputType inputType)

Description copied from class: Layer
This is a report of the estimated memory consumption for the given layer.

- Specified by:
  getMemoryReport in class Layer
- Parameters:
  inputType - Input type to the layer. Memory consumption is often a function of the input type.
- Returns:
  Memory report for the layer
-
-