Class LocalResponseNormalization

java.lang.Object
  org.deeplearning4j.nn.layers.AbstractLayer<LocalResponseNormalization>
    org.deeplearning4j.nn.layers.normalization.LocalResponseNormalization

All Implemented Interfaces:
  Serializable, Cloneable, Layer, Model, Trainable

public class LocalResponseNormalization
extends AbstractLayer<LocalResponseNormalization>

See Also:
  Serialized Form
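This implementation class is normally created indirectly, by adding the corresponding configuration layer to a network. A sketch of typical usage follows, assuming DL4J's standard builder API; the `k`/`n`/`alpha`/`beta` builder methods belong to `org.deeplearning4j.nn.conf.layers.LocalResponseNormalization.Builder`, and the hyperparameter values shown are the AlexNet-paper defaults, used here purely for illustration:

```java
// Configuration sketch: LRN placed between a convolution layer and the output
// layer. Requires the deeplearning4j-core dependency; not runnable standalone.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new ConvolutionLayer.Builder(5, 5)
                .nIn(1).nOut(20)
                .activation(Activation.RELU).build())
        // Hyperparameters are the AlexNet-paper values, shown for illustration
        .layer(new org.deeplearning4j.nn.conf.layers.LocalResponseNormalization.Builder()
                .k(2).n(5).alpha(1e-4).beta(0.75).build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nOut(10).activation(Activation.SOFTMAX).build())
        .setInputType(InputType.convolutionalFlat(28, 28, 1))
        .build();
```

At runtime, DL4J instantiates this `org.deeplearning4j.nn.layers.normalization.LocalResponseNormalization` class from that configuration; the public constructor below is rarely called directly.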
Nested Class Summary

Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer:
  Layer.TrainingMode, Layer.Type
Field Summary
Fields Modifier and Type Field Description protected LocalResponseNormalizationHelper
helper
protected int
helperCountFail
static String
LOCAL_RESPONSE_NORM_CUDNN_HELPER_CLASS_NAME
-
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
-
Constructor Summary

LocalResponseNormalization(NeuralNetConfiguration conf, DataType dataType)
Method Summary

All Methods · Instance Methods · Concrete Methods

Modifier and Type         Method and Description
INDArray                  activate(boolean training, LayerWorkspaceMgr workspaceMgr)
                          Perform forward pass and return the activations array with the last set input
Pair<Gradient,INDArray>   backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
                          Calculate the gradient relative to the error in the next layer
double                    calcRegularizationScore(boolean backpropParamsOnly)
                          Calculate the regularization component of the score for the parameters in this layer (for example, the L1, L2 and/or weight decay components of the loss function)
void                      clearNoiseWeightParams()
Layer                     clone()
void                      fit(INDArray input, LayerWorkspaceMgr workspaceMgr)
                          Fit the model to the given data
LayerHelper               getHelper()
INDArray                  getParam(String param)
                          Get the parameter
boolean                   isPretrainLayer()
                          Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.)
INDArray                  params()
                          Returns the parameters of the neural network as a flattened row vector
void                      setParams(INDArray params)
                          Set the parameters for this model
Layer.Type                type()
                          Returns the layer type
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer:
  activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, clear, close, computeGradientAndScore, conf, feedForwardMaskArray, fit, getConfig, getEpochCount, getGradientsViewArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, gradient, gradientAndScore, init, input, layerConf, layerId, numParams, numParams, paramTable, paramTable, score, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, update, update, updaterDivideByMinibatch

Methods inherited from class java.lang.Object:
  equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.deeplearning4j.nn.api.Layer:
  getIterationCount, setIterationCount
Field Detail

helper
protected LocalResponseNormalizationHelper helper

helperCountFail
protected int helperCountFail

LOCAL_RESPONSE_NORM_CUDNN_HELPER_CLASS_NAME
public static final String LOCAL_RESPONSE_NORM_CUDNN_HELPER_CLASS_NAME

See Also:
  Constant Field Values
Constructor Detail

LocalResponseNormalization
public LocalResponseNormalization(NeuralNetConfiguration conf, DataType dataType)
Method Detail

calcRegularizationScore
public double calcRegularizationScore(boolean backpropParamsOnly)

Description copied from interface: Layer
Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function.

Specified by:
  calcRegularizationScore in interface Layer
Overrides:
  calcRegularizationScore in class AbstractLayer<LocalResponseNormalization>
Parameters:
  backpropParamsOnly - If true: calculate the regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
Returns:
  The regularization score for this layer
type
public Layer.Type type()

Description copied from interface: Layer
Returns the layer type.

Specified by:
  type in interface Layer
Overrides:
  type in class AbstractLayer<LocalResponseNormalization>
Returns:
  The layer type
fit
public void fit(INDArray input, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Model
Fit the model to the given data.

Specified by:
  fit in interface Model
Overrides:
  fit in class AbstractLayer<LocalResponseNormalization>
Parameters:
  input - the data to fit the model to
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.

Parameters:
  epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
  workspaceMgr - Workspace manager
Returns:
  Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
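The epsilon argument is the activation gradient dC/da arriving from the layer above. To make that quantity concrete, here is a self-contained finite-difference sketch that computes dC/da numerically for a plain-Java LRN forward function. The AlexNet-style formulation with k=2, n=5, alpha=1e-4, beta=0.75 is an assumption used purely for illustration, not a statement about this class's internals:

```java
public class LrnGradCheck {
    // AlexNet-style cross-channel LRN (assumed hyperparameters, illustration only)
    static double[] lrn(double[] a) {
        final double k = 2.0, alpha = 1e-4, beta = 0.75;
        final int half = 5 / 2; // window of n = 5 channels
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double sum = 0.0;
            for (int j = Math.max(0, i - half); j <= Math.min(a.length - 1, i + half); j++) {
                sum += a[j] * a[j];
            }
            out[i] = a[i] / Math.pow(k + alpha * sum, beta);
        }
        return out;
    }

    // Scalar cost C = sum of LRN outputs, so dC/da is well defined
    static double cost(double[] a) {
        double c = 0.0;
        for (double v : lrn(a)) c += v;
        return c;
    }

    // Central finite differences: the numerical analogue of the epsilon
    // (dC/da) that backpropGradient receives for this layer's output
    public static double[] numericalGradient(double[] a) {
        final double eps = 1e-6;
        double[] grad = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double orig = a[i];
            a[i] = orig + eps;
            double cPlus = cost(a);
            a[i] = orig - eps;
            double cMinus = cost(a);
            a[i] = orig;
            grad[i] = (cPlus - cMinus) / (2 * eps);
        }
        return grad;
    }

    public static void main(String[] args) {
        double[] grad = numericalGradient(new double[]{1.0, -2.0, 0.5, 3.0});
        // With alpha this small the normalizer is close to k^beta, so each
        // component is near 2^(-0.75) ≈ 0.5946
        for (double g : grad) System.out.printf("%.4f%n", g);
    }
}
```

A check like this (comparing numerical gradients against the analytic ones returned by backpropGradient) is the standard way to validate a layer's backward pass.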
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.

Parameters:
  training - training or test mode
  workspaceMgr - Workspace manager
Returns:
  The activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
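For intuition about what the forward pass computes: local response normalization divides each activation by a power of a windowed sum of squared neighboring-channel activations, b_i = a_i / (k + alpha * sum_j a_j^2)^beta over a window of n channels. A minimal plain-Java sketch of the cross-channel formula from the original AlexNet paper follows; the hyperparameter values, and whether alpha is additionally scaled by n, are assumptions for illustration rather than facts read from this class:

```java
public class LrnForward {
    // Cross-channel LRN over one spatial position's channel vector:
    // b[i] = a[i] / (k + alpha * sum_{j in window(i)} a[j]^2)^beta
    public static double[] lrn(double[] a, double k, int n, double alpha, double beta) {
        int half = n / 2;
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double sum = 0.0;
            // Window of up to n channels centered on i, clipped at the edges
            for (int j = Math.max(0, i - half); j <= Math.min(a.length - 1, i + half); j++) {
                sum += a[j] * a[j];
            }
            out[i] = a[i] / Math.pow(k + alpha * sum, beta);
        }
        return out;
    }

    public static void main(String[] args) {
        // Four channels at a single spatial position; AlexNet-paper
        // hyperparameters (assumed): k=2, n=5, alpha=1e-4, beta=0.75
        double[] b = lrn(new double[]{1.0, 2.0, 3.0, 4.0}, 2.0, 5, 1e-4, 0.75);
        for (double v : b) System.out.printf("%.6f%n", v);
    }
}
```

The actual activate implementation applies this across a 4-D (NCHW) activations array, delegating to the CuDNN helper when available.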
isPretrainLayer
public boolean isPretrainLayer()

Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.).

Returns:
  True if the layer can be pretrained (using fit(INDArray)), false otherwise
clearNoiseWeightParams
public void clearNoiseWeightParams()
getHelper
public LayerHelper getHelper()

Specified by:
  getHelper in interface Layer
Overrides:
  getHelper in class AbstractLayer<LocalResponseNormalization>
Returns:
  The layer helper, if any
params
public INDArray params()

Description copied from class: AbstractLayer
Returns the parameters of the neural network as a flattened row vector.

Specified by:
  params in interface Model
Specified by:
  params in interface Trainable
Overrides:
  params in class AbstractLayer<LocalResponseNormalization>
Returns:
  The parameters of the neural network
getParam
public INDArray getParam(String param)

Description copied from interface: Model
Get the parameter.

Specified by:
  getParam in interface Model
Overrides:
  getParam in class AbstractLayer<LocalResponseNormalization>
Parameters:
  param - the key of the parameter
Returns:
  The parameter vector/matrix with that particular key
setParams
public void setParams(INDArray params)

Description copied from interface: Model
Set the parameters for this model. This expects a linear ndarray, which is then unpacked internally relative to the expected ordering of the model.

Specified by:
  setParams in interface Model
Overrides:
  setParams in class AbstractLayer<LocalResponseNormalization>
Parameters:
  params - the parameters for the model