Class SubsamplingLayer
- java.lang.Object
-
- org.deeplearning4j.nn.layers.AbstractLayer<SubsamplingLayer>
-
- org.deeplearning4j.nn.layers.convolution.subsampling.SubsamplingLayer
-
- All Implemented Interfaces:
Serializable, Cloneable, Layer, Model, Trainable
- Direct Known Subclasses:
Subsampling1DLayer
public class SubsamplingLayer extends AbstractLayer<SubsamplingLayer>
- See Also:
- Serialized Form
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Field Summary
Fields
- protected ConvolutionMode convolutionMode
- static String CUDNN_SUBSAMPLING_HELPER_CLASS_NAME
- protected SubsamplingHelper helper
- protected int helperCountFail
-
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
-
Constructor Summary
Constructors
- SubsamplingLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary
All Methods | Instance Methods | Concrete Methods
- INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the last set input
- Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer
- double calcRegularizationScore(boolean backpropOnlyParams): Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function
- void clearNoiseWeightParams()
- Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize): Feed forward the input mask array, setting it in the layer as appropriate
- void fit(): All models have a fit method
- void fit(INDArray input, LayerWorkspaceMgr workspaceMgr): Fit the model to the given data
- LayerHelper getHelper()
- INDArray getParam(String param): Get the parameter
- Gradient gradient(): Get the gradient
- boolean isPretrainLayer(): Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
- long numParams(): The number of parameters for the model
- INDArray params(): Returns the parameters of the neural network as a flattened row vector
- double score(): The score for the model
- void setParams(INDArray params): Set the parameters for this model
- Layer.Type type(): Returns the layer type
- void update(INDArray gradient, String paramType): Perform one update applying the gradient
-
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer
activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, clear, close, computeGradientAndScore, conf, getConfig, getEpochCount, getGradientsViewArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, gradientAndScore, init, input, layerConf, layerId, numParams, paramTable, paramTable, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, update, updaterDivideByMinibatch
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
getIterationCount, setIterationCount
-
-
-
-
Field Detail
-
helper
protected SubsamplingHelper helper
-
helperCountFail
protected int helperCountFail
-
convolutionMode
protected ConvolutionMode convolutionMode
-
CUDNN_SUBSAMPLING_HELPER_CLASS_NAME
public static final String CUDNN_SUBSAMPLING_HELPER_CLASS_NAME
- See Also:
- Constant Field Values
-
-
Constructor Detail
-
SubsamplingLayer
public SubsamplingLayer(NeuralNetConfiguration conf, DataType dataType)
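Note that this is the layer implementation class; in typical use it is not constructed directly, but is created by the network from the corresponding configuration class org.deeplearning4j.nn.conf.layers.SubsamplingLayer. A minimal sketch (hyperparameter values are illustrative) of configuring a max-pooling subsampling layer in a small convolutional network:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .list()
        .layer(new ConvolutionLayer.Builder(5, 5)        // 5x5 convolution, 1 -> 20 channels
                .nIn(1).nOut(20).activation(Activation.RELU).build())
        .layer(new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2)                        // 2x2 max pooling...
                .stride(2, 2)                            // ...with stride 2 halves h and w
                .build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nOut(10).activation(Activation.SOFTMAX).build())
        .setInputType(InputType.convolutional(28, 28, 1))
        .build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();   // layer index 1 of net is now an instance of this class
```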
-
-
Method Detail
-
calcRegularizationScore
public double calcRegularizationScore(boolean backpropOnlyParams)
Description copied from interface: Layer
Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function.
- Specified by:
calcRegularizationScore in interface Layer
- Overrides:
calcRegularizationScore in class AbstractLayer<SubsamplingLayer>
- Parameters:
backpropOnlyParams - If true: calculate regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
- Returns:
- the regularization score for the parameters in this layer
-
type
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
- Specified by:
type in interface Layer
- Overrides:
type in class AbstractLayer<SubsamplingLayer>
- Returns:
- the layer type
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
- Returns:
- Pair<Gradient, INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
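A minimal sketch of driving this method by hand, assuming the net from the configuration example above; this bypasses the normal training loop, all names are illustrative, and the Pair package varies by release:

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.common.primitives.Pair;   // org.nd4j.linalg.primitives.Pair in older releases
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

Layer pool = net.getLayer(1);   // the subsampling layer from the sketch above

// Forward pass first: NCHW activations from the conv layer, [minibatch, channels, h, w]
INDArray input = Nd4j.rand(new int[]{8, 20, 24, 24});
pool.setInput(input, LayerWorkspaceMgr.noWorkspaces());
pool.activate(true, LayerWorkspaceMgr.noWorkspaces());

// Epsilon (dC/da for this layer's output) must match the output shape: [8, 20, 12, 12]
INDArray epsilon = Nd4j.ones(8, 20, 12, 12);
Pair<Gradient, INDArray> result = pool.backpropGradient(epsilon, LayerWorkspaceMgr.noWorkspaces());

// Pooling has no parameters, so the Gradient is empty; getSecond() is the epsilon
// for the layer below, shaped like this layer's input: [8, 20, 24, 24]
INDArray epsilonOut = result.getSecond();
```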
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Parameters:
training - training or test mode
workspaceMgr - Workspace manager
- Returns:
- the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
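The spatial size of the returned activations follows standard pooling arithmetic. As a sketch, for ConvolutionMode.Truncate with input height $h$, kernel height $k$, stride $s$ and padding $p$ (the width is analogous), and for ConvolutionMode.Same:

$$h_{\text{out}} = \left\lfloor \frac{h + 2p - k}{s} \right\rfloor + 1 \quad \text{(Truncate)}, \qquad h_{\text{out}} = \left\lceil \frac{h}{s} \right\rceil \quad \text{(Same)}$$

For example, a 24x24 input with a 2x2 kernel, stride 2 and no padding yields 12x12 activations.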
-
isPretrainLayer
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
- Returns:
- true if the layer can be pretrained (using fit(INDArray)), false otherwise
-
clearNoiseWeightParams
public void clearNoiseWeightParams()
-
getHelper
public LayerHelper getHelper()
- Specified by:
getHelper in interface Layer
- Overrides:
getHelper in class AbstractLayer<SubsamplingLayer>
- Returns:
- Get the layer helper, if any
-
gradient
public Gradient gradient()
Description copied from interface: Model
Get the gradient. Note that this method will not calculate the gradient, it will rather return the gradient that has been computed before. For calculating the gradient, see Model.computeGradientAndScore(LayerWorkspaceMgr).
- Specified by:
gradient in interface Model
- Overrides:
gradient in class AbstractLayer<SubsamplingLayer>
- Returns:
- the gradient for this model, as calculated before
-
fit
public void fit()
Description copied from interface: Model
All models have a fit method
- Specified by:
fit in interface Model
- Overrides:
fit in class AbstractLayer<SubsamplingLayer>
-
numParams
public long numParams()
Description copied from class: AbstractLayer
The number of parameters for the model
- Specified by:
numParams in interface Model
- Specified by:
numParams in interface Trainable
- Overrides:
numParams in class AbstractLayer<SubsamplingLayer>
- Returns:
- the number of parameters for the model
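Subsampling is a fixed pooling operation with no weights or biases, so for this layer the parameter count is zero; an illustrative check against the net sketched earlier:

```java
// A pooling layer holds no trainable parameters
assert net.getLayer(1).numParams() == 0;
```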
-
fit
public void fit(INDArray input, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Fit the model to the given data
- Specified by:
fit in interface Model
- Overrides:
fit in class AbstractLayer<SubsamplingLayer>
- Parameters:
input - the data to fit the model to
-
score
public double score()
Description copied from interface: Model
The score for the model
- Specified by:
score in interface Model
- Overrides:
score in class AbstractLayer<SubsamplingLayer>
- Returns:
- the score for the model
-
update
public void update(INDArray gradient, String paramType)
Description copied from interface: Model
Perform one update applying the gradient
- Specified by:
update in interface Model
- Overrides:
update in class AbstractLayer<SubsamplingLayer>
- Parameters:
gradient - the gradient to apply
-
params
public INDArray params()
Description copied from class: AbstractLayer
Returns the parameters of the neural network as a flattened row vector
- Specified by:
params in interface Model
- Specified by:
params in interface Trainable
- Overrides:
params in class AbstractLayer<SubsamplingLayer>
- Returns:
- the parameters of the neural network
-
getParam
public INDArray getParam(String param)
Description copied from interface: Model
Get the parameter
- Specified by:
getParam in interface Model
- Overrides:
getParam in class AbstractLayer<SubsamplingLayer>
- Parameters:
param - the key of the parameter
- Returns:
- the parameter vector/matrix with that particular key
-
setParams
public void setParams(INDArray params)
Description copied from interface:ModelSet the parameters for this model. This expects a linear ndarray which then be unpacked internally relative to the expected ordering of the model- Specified by:
setParamsin interfaceModel- Overrides:
setParamsin classAbstractLayer<SubsamplingLayer>- Parameters:
params- the parameters for the model
-
feedForwardMaskArray
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently - for example, bidirectional RNNs and normal RNNs operate differently with masks: the former sets activations to 0 outside of the data-present region (and keeps the mask active for future layers such as dense layers), whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case.
This is also used, for example, for networks that contain global pooling layers, arbitrary preprocessors, etc.
- Specified by:
feedForwardMaskArray in interface Layer
- Overrides:
feedForwardMaskArray in class AbstractLayer<SubsamplingLayer>
- Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
- Returns:
- New mask array after this layer, along with the new mask state.
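A minimal sketch of mask propagation, reusing the pool layer from the earlier backprop example; names are illustrative, and in practice mask handling matters mostly for padded sequence data (e.g. via the Subsampling1DLayer subclass):

```java
import org.deeplearning4j.nn.api.MaskState;

// Per-timestep mask, [minibatch, timeSteps]: 1 = data present, 0 = padding
INDArray mask = Nd4j.ones(8, 24);
Pair<INDArray, MaskState> masked = pool.feedForwardMaskArray(mask, MaskState.Active, 8);
INDArray maskForNextLayer = masked.getFirst();   // possibly downsampled mask
```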
-
-