Package org.deeplearning4j.nn.layers
Class FrozenLayer
- java.lang.Object
  - org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
    - org.deeplearning4j.nn.layers.FrozenLayer
- All Implemented Interfaces:
  Serializable, Cloneable, Layer, Model, Trainable
public class FrozenLayer extends BaseWrapperLayer
- See Also:
- Serialized Form
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Field Summary
-
Fields inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
underlying
-
-
Constructor Summary
- FrozenLayer(Layer insideLayer)
-
Method Summary
- INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the last set input
- INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the specified input
- void applyConstraints(int iteration, int epoch): Apply any constraints to the model
- Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer
- double calcRegularizationScore(boolean backpropParamsOnly): Calculate the regularization component of the score for the parameters in this layer, for example the L1, L2 and/or weight decay components of the loss function
- void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr): Update the score
- void fit(): All models have a fit method
- void fit(INDArray data, LayerWorkspaceMgr workspaceMgr): Fit the model to the given data
- TrainingConfig getConfig()
- Layer getInsideLayer()
- Gradient gradient(): Get the gradient
- Pair<Gradient,Double> gradientAndScore(): Get the gradient and score
- void init(): Init the model
- protected String layerId()
- void logTestMode(boolean training)
- void logTestMode(Layer.TrainingMode training)
- void setBackpropGradientsViewArray(INDArray gradients): Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- void setCacheMode(CacheMode mode): This method sets the given CacheMode for the current layer
- void update(Gradient gradient): Update layer weights and biases with gradient change
- void update(INDArray gradient, String paramType): Perform one update, applying the gradient
Methods inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
addListeners, allowInputModification, batchSize, clear, clearNoiseWeightParams, close, conf, feedForwardMaskArray, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, getOptimizer, getParam, input, isPretrainLayer, numParams, numParams, params, paramTable, paramTable, score, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, type, updaterDivideByMinibatch
-
Constructor Detail
-
FrozenLayer
public FrozenLayer(Layer insideLayer)
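A FrozenLayer wraps another layer: forward-pass calls are delegated to the underlying layer, while training calls leave its parameters untouched. The following toy sketch illustrates that wrapper pattern using hypothetical stand-in classes (ToyLayer, ToyDenseLayer, ToyFrozenLayer), not the real DL4J types, since constructing an actual DL4J Layer requires a full network configuration:

```java
// Toy sketch of the FrozenLayer wrapper idea (hypothetical names, not DL4J classes):
// the wrapper delegates the forward pass to the underlying layer, while
// parameter-changing calls (update) become no-ops, so the weights stay frozen.
interface ToyLayer {
    double[] activate(double[] input);
    void update(double delta);   // shift all weights by delta (toy "gradient step")
    double[] params();
}

class ToyDenseLayer implements ToyLayer {
    private final double[] weights;
    ToyDenseLayer(double[] weights) { this.weights = weights.clone(); }
    public double[] activate(double[] input) {
        // element-wise product as a stand-in for a real forward pass
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) out[i] = weights[i] * input[i];
        return out;
    }
    public void update(double delta) {
        for (int i = 0; i < weights.length; i++) weights[i] += delta;
    }
    public double[] params() { return weights.clone(); }
}

class ToyFrozenLayer implements ToyLayer {
    private final ToyLayer underlying;
    ToyFrozenLayer(ToyLayer insideLayer) { this.underlying = insideLayer; }
    public double[] activate(double[] input) { return underlying.activate(input); } // delegate
    public void update(double delta) { /* frozen: ignore the update */ }
    public double[] params() { return underlying.params(); }
}

class FrozenDemo {
    public static void main(String[] args) {
        ToyLayer frozen = new ToyFrozenLayer(new ToyDenseLayer(new double[]{2.0, 3.0}));
        frozen.update(1.0);  // ignored: the layer is frozen
        System.out.println(java.util.Arrays.toString(frozen.params())); // prints [2.0, 3.0]
    }
}
```

In real DL4J code you would not usually construct a FrozenLayer by hand; freezing is more commonly applied through the transfer-learning API, with the wrapper handling the "skip training, keep inference" behavior shown above.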
-
-
Method Detail
-
setCacheMode
public void setCacheMode(CacheMode mode)
Description copied from interface: Layer
This method sets the given CacheMode for the current layer
- Specified by: setCacheMode in interface Layer
- Overrides: setCacheMode in class BaseWrapperLayer
-
layerId
protected String layerId()
-
calcRegularizationScore
public double calcRegularizationScore(boolean backpropParamsOnly)
Description copied from interface: Layer
Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function.
- Specified by: calcRegularizationScore in interface Layer
- Overrides: calcRegularizationScore in class BaseWrapperLayer
- Parameters:
  - backpropParamsOnly: If true, calculate the regularization score based on backprop params only. If false, calculate based on all params (including pretrain params, if any).
- Returns: the regularization score for this layer
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Specified by: backpropGradient in interface Layer
- Overrides: backpropGradient in class BaseWrapperLayer
- Parameters:
  - epsilon: w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
  - workspaceMgr: Workspace manager
- Returns: Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So, for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
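The formula above can be checked with small numbers. For a linear layer z = W*in, the epsilon passed back to the previous layer is dL/dIn = W^T * delta. A toy sketch with plain arrays (hypothetical values, not INDArrays or DL4J code):

```java
// Toy arithmetic for the backprop formula: for z = W*in, the activation
// gradient returned to the previous layer is dL/dIn = W^T * delta.
class BackpropEpsilonDemo {
    static double[] epsilonOut(double[][] w, double[] delta) {
        int nIn = w[0].length;
        double[] out = new double[nIn];
        for (int j = 0; j < nIn; j++)          // column j of W = row j of W^T
            for (int i = 0; i < w.length; i++)
                out[j] += w[i][j] * delta[i];
        return out;
    }
    public static void main(String[] args) {
        double[][] w = {{1.0, 2.0}, {3.0, 4.0}}; // 2 outputs x 2 inputs
        double[] delta = {1.0, 1.0};             // dL/dz from the layer above
        // W^T * delta = [1+3, 2+4] = [4, 6]
        System.out.println(java.util.Arrays.toString(epsilonOut(w, delta))); // prints [4.0, 6.0]
    }
}
```

Since FrozenLayer still participates in backpropagation for the layers below it, this pass-through of epsilon is needed even though the frozen layer's own parameters are never updated.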
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Specified by: activate in interface Layer
- Overrides: activate in class BaseWrapperLayer
- Parameters:
  - training: training or test mode
  - workspaceMgr: Workspace manager
- Returns: the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
-
activate
public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the specified input
- Specified by: activate in interface Layer
- Overrides: activate in class BaseWrapperLayer
- Parameters:
  - input: the input to use
  - training: train or test mode
  - workspaceMgr: Workspace manager
- Returns: Activations array. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
-
fit
public void fit()
Description copied from interface: Model
All models have a fit method
- Specified by: fit in interface Model
- Overrides: fit in class BaseWrapperLayer
-
update
public void update(Gradient gradient)
Description copied from interface: Model
Update layer weights and biases with gradient change
- Specified by: update in interface Model
- Overrides: update in class BaseWrapperLayer
-
update
public void update(INDArray gradient, String paramType)
Description copied from interface: Model
Perform one update, applying the gradient
- Specified by: update in interface Model
- Overrides: update in class BaseWrapperLayer
- Parameters:
  - gradient: the gradient to apply
-
computeGradientAndScore
public void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Update the score
- Specified by: computeGradientAndScore in interface Model
- Overrides: computeGradientAndScore in class BaseWrapperLayer
-
setBackpropGradientsViewArray
public void setBackpropGradientsViewArray(INDArray gradients)
Description copied from interface: Model
Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by: setBackpropGradientsViewArray in interface Model
- Overrides: setBackpropGradientsViewArray in class BaseWrapperLayer
- Parameters:
  - gradients: a 1 x nParams row vector that is a view of the larger (MLN/CG) gradients array
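The "view" wording means each layer's gradient occupies a window into one flat, network-wide array, so writing a layer's gradient mutates the shared buffer in place. A toy sketch of that idea with a hypothetical GradientView class (plain arrays with an offset, not DL4J's INDArray views):

```java
// Sketch of the "view" idea behind setBackpropGradientsViewArray: a layer's
// gradient is a window (offset + length) into one flat gradients buffer,
// so writes through the view land directly in the shared array.
class GradientView {
    private final double[] backing;
    private final int offset;
    private final int length;
    GradientView(double[] backing, int offset, int length) {
        this.backing = backing; this.offset = offset; this.length = length;
    }
    void set(int i, double v) { backing[offset + i] = v; } // writes through to flat array
    double get(int i) { return backing[offset + i]; }
    int length() { return length; }
}

class ViewDemo {
    public static void main(String[] args) {
        double[] flatGradients = new double[5];                         // whole-network buffer
        GradientView layerGrad = new GradientView(flatGradients, 2, 3); // this layer's slice
        layerGrad.set(0, 7.5);
        System.out.println(flatGradients[2]); // prints 7.5: storage is shared
    }
}
```

This layout is why the method is flagged as internal: MultiLayerNetwork and ComputationGraph own the flat buffer and hand each layer its slice; user code rearranging those views would corrupt the shared gradient state.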
-
fit
public void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Fit the model to the given data
- Specified by: fit in interface Model
- Overrides: fit in class BaseWrapperLayer
- Parameters:
  - data: the data to fit the model to
-
gradient
public Gradient gradient()
Description copied from interface: Model
Get the gradient. Note that this method will not calculate the gradient; it will instead return the gradient that has been computed previously. To calculate the gradient, see Model.computeGradientAndScore(LayerWorkspaceMgr).
- Specified by: gradient in interface Model
- Overrides: gradient in class BaseWrapperLayer
- Returns: the gradient for this model, as calculated previously
-
gradientAndScore
public Pair<Gradient,Double> gradientAndScore()
Description copied from interface: Model
Get the gradient and score
- Specified by: gradientAndScore in interface Model
- Overrides: gradientAndScore in class BaseWrapperLayer
- Returns: the gradient and score
-
applyConstraints
public void applyConstraints(int iteration, int epoch)
Description copied from interface: Model
Apply any constraints to the model
- Specified by: applyConstraints in interface Model
- Overrides: applyConstraints in class BaseWrapperLayer
-
init
public void init()
Init the model
- Specified by: init in interface Model
- Overrides: init in class BaseWrapperLayer
-
logTestMode
public void logTestMode(boolean training)
-
logTestMode
public void logTestMode(Layer.TrainingMode training)
-
getInsideLayer
public Layer getInsideLayer()
-
getConfig
public TrainingConfig getConfig()
- Specified by: getConfig in interface Trainable
- Overrides: getConfig in class BaseWrapperLayer
- Returns: Training configuration
-
-