Package org.deeplearning4j.nn.layers
Class FrozenLayer
- java.lang.Object
  - org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
    - org.deeplearning4j.nn.layers.FrozenLayer
- All Implemented Interfaces:
Serializable, Cloneable, Layer, Model, Trainable
public class FrozenLayer extends BaseWrapperLayer
- See Also:
- Serialized Form
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Field Summary
-
Fields inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
underlying
-
-
Constructor Summary
Constructors:
FrozenLayer(Layer insideLayer)
-
Method Summary
INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
    Perform forward pass and return the activations array with the last set input.
INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
    Perform forward pass and return the activations array with the specified input.
void applyConstraints(int iteration, int epoch)
    Apply any constraints to the model.
Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
    Calculate the gradient relative to the error in the next layer.
double calcRegularizationScore(boolean backpropParamsOnly)
    Calculate the regularization component of the score for the parameters in this layer, for example the L1, L2 and/or weight decay components of the loss function.
void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
    Update the score.
void fit()
    All models have a fit method.
void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
    Fit the model to the given data.
TrainingConfig getConfig()
Layer getInsideLayer()
Gradient gradient()
    Get the gradient.
Pair<Gradient,Double> gradientAndScore()
    Get the gradient and score.
void init()
    Init the model.
protected String layerId()
void logTestMode(boolean training)
void logTestMode(Layer.TrainingMode training)
void setBackpropGradientsViewArray(INDArray gradients)
    Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
void setCacheMode(CacheMode mode)
    This method sets the given CacheMode for the current layer.
void update(Gradient gradient)
    Update layer weights and biases with gradient change.
void update(INDArray gradient, String paramType)
    Perform one update, applying the gradient.
Methods inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
addListeners, allowInputModification, batchSize, clear, clearNoiseWeightParams, close, conf, feedForwardMaskArray, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, getOptimizer, getParam, input, isPretrainLayer, numParams, numParams, params, paramTable, paramTable, score, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, type, updaterDivideByMinibatch
-
Constructor Detail
-
FrozenLayer
public FrozenLayer(Layer insideLayer)
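FrozenLayer keeps the wrapped layer usable for inference while suppressing training-time parameter updates. The sketch below illustrates that wrapper pattern in plain Java; SimpleLayer, DenseStub, and FrozenStub are stand-in types invented for this example, not DL4J classes. As in FrozenLayer, activate delegates to the underlying layer while update becomes a no-op.

```java
import java.util.Arrays;

interface SimpleLayer {
    double[] activate(double[] input);
    void update(double[] gradient);
    double[] params();
}

// A toy trainable layer: element-wise weights, updated by subtracting gradients.
class DenseStub implements SimpleLayer {
    private final double[] weights;

    DenseStub(double[] weights) { this.weights = weights; }

    @Override
    public double[] activate(double[] input) {
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) out[i] = weights[i] * input[i];
        return out;
    }

    @Override
    public void update(double[] gradient) {
        for (int i = 0; i < weights.length; i++) weights[i] -= gradient[i];
    }

    @Override
    public double[] params() { return weights.clone(); }
}

// The frozen wrapper: forward pass delegates, parameter updates are ignored.
class FrozenStub implements SimpleLayer {
    private final SimpleLayer underlying;

    FrozenStub(SimpleLayer underlying) { this.underlying = underlying; }

    @Override
    public double[] activate(double[] input) {
        return underlying.activate(input);   // inference still works
    }

    @Override
    public void update(double[] gradient) {
        // Frozen: drop the update so the underlying params stay untouched.
    }

    @Override
    public double[] params() { return underlying.params(); }
}

public class FrozenDemo {
    public static void main(String[] args) {
        SimpleLayer frozen = new FrozenStub(new DenseStub(new double[]{1.0, 2.0}));
        frozen.update(new double[]{0.5, 0.5});                 // ignored
        System.out.println(Arrays.toString(frozen.params()));  // [1.0, 2.0]
        System.out.println(Arrays.toString(frozen.activate(new double[]{3.0, 4.0})));  // [3.0, 8.0]
    }
}
```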
-
-
Method Detail
-
setCacheMode
public void setCacheMode(CacheMode mode)
Description copied from interface: Layer
This method sets the given CacheMode for the current layer.
- Specified by:
setCacheMode in interface Layer
- Overrides:
setCacheMode in class BaseWrapperLayer
-
layerId
protected String layerId()
-
calcRegularizationScore
public double calcRegularizationScore(boolean backpropParamsOnly)
Description copied from interface: Layer
Calculate the regularization component of the score for the parameters in this layer, for example the L1, L2 and/or weight decay components of the loss function.
- Specified by:
calcRegularizationScore in interface Layer
- Overrides:
calcRegularizationScore in class BaseWrapperLayer
- Parameters:
backpropParamsOnly - If true: calculate the regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
- Returns:
the regularization score
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.
- Specified by:
backpropGradient in interface Layer
- Overrides:
backpropGradient in class BaseWrapperLayer
- Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
- Returns:
Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
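The epsilon contract above can be made concrete with a small numeric sketch. The code below is plain Java, not ND4J, and transposeTimes is a helper invented for this example: it computes the activation gradient w^T * delta handed to the previous layer by a toy 2x2 linear layer, before any element-wise multiply by sigmaPrime(z).

```java
public class BackpropSketch {
    // Multiply the transpose of a weight matrix w by the error signal delta:
    // out[j] = sum_i w[i][j] * delta[i], i.e. (w^T * delta)[j].
    static double[] transposeTimes(double[][] w, double[] delta) {
        double[] out = new double[w[0].length];
        for (int j = 0; j < w[0].length; j++)
            for (int i = 0; i < w.length; i++)
                out[j] += w[i][j] * delta[i];
        return out;
    }

    public static void main(String[] args) {
        double[][] w = {{1.0, 2.0}, {3.0, 4.0}};  // weights of this layer
        double[] delta = {0.5, 1.0};              // error signal from the layer above
        double[] dCdIn = transposeTimes(w, delta);
        System.out.println(java.util.Arrays.toString(dCdIn));  // [3.5, 5.0]
    }
}
```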
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.
- Specified by:
activate in interface Layer
- Overrides:
activate in class BaseWrapperLayer
- Parameters:
training - training or test mode
workspaceMgr - Workspace manager
- Returns:
the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
-
activate
public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the specified input.
- Specified by:
activate in interface Layer
- Overrides:
activate in class BaseWrapperLayer
- Parameters:
input - the input to use
training - train or test mode
workspaceMgr - Workspace manager
- Returns:
Activations array. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
-
fit
public void fit()
Description copied from interface: Model
All models have a fit method.
- Specified by:
fit in interface Model
- Overrides:
fit in class BaseWrapperLayer
-
update
public void update(Gradient gradient)
Description copied from interface: Model
Update layer weights and biases with gradient change.
- Specified by:
update in interface Model
- Overrides:
update in class BaseWrapperLayer
-
update
public void update(INDArray gradient, String paramType)
Description copied from interface: Model
Perform one update, applying the gradient.
- Specified by:
update in interface Model
- Overrides:
update in class BaseWrapperLayer
- Parameters:
gradient - the gradient to apply
-
computeGradientAndScore
public void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Update the score.
- Specified by:
computeGradientAndScore in interface Model
- Overrides:
computeGradientAndScore in class BaseWrapperLayer
-
setBackpropGradientsViewArray
public void setBackpropGradientsViewArray(INDArray gradients)
Description copied from interface: Model
Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by:
setBackpropGradientsViewArray in interface Model
- Overrides:
setBackpropGradientsViewArray in class BaseWrapperLayer
- Parameters:
gradients - a 1 x nParams row vector that is a view of the larger (MLN/CG) gradients array
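The "view" relationship described above can be sketched with java.nio.DoubleBuffer standing in for an ND4J INDArray view (an illustrative assumption only): a slice of one flat gradients array acts as a layer-local window, so writes through the slice land in the shared array at the right offset.

```java
import java.nio.DoubleBuffer;
import java.util.Arrays;

public class GradientViewSketch {
    public static void main(String[] args) {
        double[] fullGradients = new double[6];  // stands in for the MLN/CG gradients array

        // Carve out a window over elements 2..4 of the full array.
        DoubleBuffer whole = DoubleBuffer.wrap(fullGradients);
        whole.position(2);
        whole.limit(5);
        DoubleBuffer layerView = whole.slice();

        // A layer writes its gradients through the view...
        layerView.put(0, 0.1);
        layerView.put(1, 0.2);
        layerView.put(2, 0.3);

        // ...and the shared backing array sees them in place.
        System.out.println(Arrays.toString(fullGradients));
        // [0.0, 0.0, 0.1, 0.2, 0.3, 0.0]
    }
}
```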
-
fit
public void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Model
Fit the model to the given data.
- Specified by:
fit in interface Model
- Overrides:
fit in class BaseWrapperLayer
- Parameters:
data - the data to fit the model to
-
gradient
public Gradient gradient()
Description copied from interface: Model
Get the gradient. Note that this method does not calculate the gradient; it returns the gradient that was computed previously. To calculate the gradient, see Model.computeGradientAndScore(LayerWorkspaceMgr).
- Specified by:
gradient in interface Model
- Overrides:
gradient in class BaseWrapperLayer
- Returns:
the gradient for this model, as calculated before
-
gradientAndScore
public Pair<Gradient,Double> gradientAndScore()
Description copied from interface: Model
Get the gradient and score.
- Specified by:
gradientAndScore in interface Model
- Overrides:
gradientAndScore in class BaseWrapperLayer
- Returns:
the gradient and score
-
applyConstraints
public void applyConstraints(int iteration, int epoch)
Description copied from interface: Model
Apply any constraints to the model.
- Specified by:
applyConstraints in interface Model
- Overrides:
applyConstraints in class BaseWrapperLayer
-
init
public void init()
Init the model.
- Specified by:
init in interface Model
- Overrides:
init in class BaseWrapperLayer
-
logTestMode
public void logTestMode(boolean training)
-
logTestMode
public void logTestMode(Layer.TrainingMode training)
-
getInsideLayer
public Layer getInsideLayer()
-
getConfig
public TrainingConfig getConfig()
- Specified by:
getConfig in interface Trainable
- Overrides:
getConfig in class BaseWrapperLayer
- Returns:
Training configuration
-
-