Class SameDiffLayer

java.lang.Object
  org.deeplearning4j.nn.layers.AbstractLayer<AbstractSameDiffLayer>
    org.deeplearning4j.nn.layers.samediff.SameDiffLayer

- All Implemented Interfaces:
- Serializable, Cloneable, Layer, Model, Trainable

public class SameDiffLayer extends AbstractLayer<AbstractSameDiffLayer>

- See Also:
- Serialized Form
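This runtime class is not normally constructed directly by users: DL4J creates it internally when a network containing a layer configured via AbstractSameDiffLayer (for example, a custom subclass of the conf-side SameDiffLayer class) is initialized. A minimal sketch of retrieving the runtime object, assuming "net" is an already-initialized MultiLayerNetwork whose layer 0 is such a layer:

    import org.deeplearning4j.nn.api.Layer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

    public class RuntimeLayerLookup {
        public static Layer sameDiffRuntimeLayer(MultiLayerNetwork net) {
            Layer l = net.getLayer(0);  // runtime layer object for layer index 0
            // Expected: org.deeplearning4j.nn.layers.samediff.SameDiffLayer
            System.out.println(l.getClass().getName());
            return l;
        }
    }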
Nested Class Summary

- Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer:
- Layer.TrainingMode, Layer.Type
-
Field Summary

Fields
- protected ExternalErrorsFunction fn
- protected INDArray gradients
- protected Map<String,INDArray> gradTable
- static String INPUT_KEY
- static String MASK_KEY
- protected String outputKey
- protected SDVariable outputVar
- protected INDArray params
- protected Map<String,INDArray> paramTable
- protected SameDiff sameDiff

- Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer:
- cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
Constructor Summary

Constructors
- SameDiffLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary

All Methods: Instance Methods, Concrete Methods

- INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
  Perform forward pass and return the activations array with the last set input
- Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
  Calculate the gradient relative to the error in the next layer
- void clearNoiseWeightParams()
- Layer clone()
- protected void doInit()
- Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
  Feed forward the input mask array, setting it in the layer as appropriate
- INDArray getGradientsViewArray()
- INDArray getParam(String param)
  Get the parameter
- boolean isPretrainLayer()
  Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
- long numParams()
  The number of parameters for the model
- INDArray params()
  Returns the parameters of the neural network as a flattened row vector
- Map<String,INDArray> paramTable()
  The param table
- Map<String,INDArray> paramTable(boolean backpropParamsOnly)
  Table of parameters by key, for backprop. For many models (dense layers, etc.) all parameters are backprop parameters
- void setBackpropGradientsViewArray(INDArray gradients)
  Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- void setParam(String key, INDArray val)
  Set the parameter with a new ndarray
- void setParams(INDArray params)
  Set the parameters for this model
- protected void setParams(INDArray params, char order)
- void setParamsViewArray(INDArray params)
  Set the initial parameters array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- void setParamTable(Map<String,INDArray> paramTable)
  Setter for the param table
-
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer
activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, calcRegularizationScore, clear, close, computeGradientAndScore, conf, fit, fit, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, gradient, gradientAndScore, init, input, layerConf, layerId, numParams, score, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, update, update, updaterDivideByMinibatch
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
getIterationCount, setIterationCount
-
Field Detail
-
INPUT_KEY
public static final String INPUT_KEY
- See Also:
- Constant Field Values
-
MASK_KEY
public static final String MASK_KEY
- See Also:
- Constant Field Values
-
sameDiff
protected SameDiff sameDiff
-
outputVar
protected SDVariable outputVar
-
fn
protected ExternalErrorsFunction fn
-
outputKey
protected String outputKey
-
params
protected INDArray params
-
gradients
protected INDArray gradients
-
Constructor Detail
-
SameDiffLayer
public SameDiffLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Detail
-
isPretrainLayer
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
- Returns:
- true if the layer can be pretrained (using fit(INDArray)), false otherwise
-
clearNoiseWeightParams
public void clearNoiseWeightParams()
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Parameters:
- training - training or test mode
- workspaceMgr - Workspace manager
- Returns:
- the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
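A sketch of invoking activate(..) directly on the runtime layer, outside of a normal fit/output call. The input shape (32 x 10) is a made-up example, and LayerWorkspaceMgr.noWorkspaces(), which disables workspace scoping, is the simplest option for standalone calls:

    import org.deeplearning4j.nn.api.Layer;
    import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class ActivateExample {
        public static INDArray forward(Layer layer) {
            LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
            INDArray features = Nd4j.rand(32, 10);  // hypothetical minibatch: 32 examples, 10 features
            layer.setInput(features, mgr);          // inherited from AbstractLayer
            return layer.activate(false /* test mode */, mgr);
        }
    }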
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Parameters:
- epsilon - w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
- workspaceMgr - Workspace manager
- Returns:
- Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiplication by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
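A sketch of one manual backprop call, assuming a forward pass has already run on the layer and that "epsilon" (dL/dOutput) has the same shape as the layer's activations; within a real network, MultiLayerNetwork computes and supplies epsilon. The Pair import path is the one used by recent ND4J versions and may differ in older releases:

    import org.deeplearning4j.nn.api.Layer;
    import org.deeplearning4j.nn.gradient.Gradient;
    import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
    import org.nd4j.common.primitives.Pair;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class BackpropExample {
        public static INDArray backward(Layer layer, INDArray epsilon) {
            Pair<Gradient, INDArray> p =
                    layer.backpropGradient(epsilon, LayerWorkspaceMgr.noWorkspaces());
            Gradient layerGrads = p.getFirst();  // gradients for this layer's own parameters
            System.out.println("param gradient keys: " + layerGrads.gradientForVariable().keySet());
            return p.getSecond();                // dL/dInput: the epsilon for the previous layer
        }
    }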
-
params
public INDArray params()
Returns the parameters of the neural network as a flattened row vector
- Specified by:
- params in interface Model
- Specified by:
- params in interface Trainable
- Overrides:
- params in class AbstractLayer<AbstractSameDiffLayer>
- Returns:
- the parameters of the neural network
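A sketch of inspecting the flattened parameter vector, whose length should equal numParams(). The null guard is an assumption covering SameDiff layers defined without any parameters:

    import org.deeplearning4j.nn.api.Layer;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class ParamsExample {
        public static void printParamInfo(Layer layer) {
            INDArray flat = layer.params();  // view of the full network parameter array
            long length = (flat == null) ? 0 : flat.length();
            System.out.println("numParams = " + layer.numParams() + ", vector length = " + length);
        }
    }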
-
getParam
public INDArray getParam(String param)
Description copied from interface: Model
Get the parameter
- Specified by:
- getParam in interface Model
- Overrides:
- getParam in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- param - the key of the parameter
- Returns:
- the parameter vector/matrix with that particular key
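A sketch of fetching a single parameter array. The key "W" is hypothetical; the valid keys are whatever names the layer's configuration registered when defining its parameters:

    import java.util.Arrays;
    import org.deeplearning4j.nn.api.Layer;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class GetParamExample {
        public static void show(Layer layer) {
            INDArray w = layer.getParam("W");  // "W" is a hypothetical parameter key
            System.out.println("W shape: " + Arrays.toString(w.shape()));
        }
    }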
-
numParams
public long numParams()
Description copied from class: AbstractLayer
The number of parameters for the model
- Specified by:
- numParams in interface Model
- Specified by:
- numParams in interface Trainable
- Overrides:
- numParams in class AbstractLayer<AbstractSameDiffLayer>
- Returns:
- the number of parameters for the model
-
setParam
public void setParam(String key, INDArray val)
Description copied from interface: Model
Set the parameter with a new ndarray
- Specified by:
- setParam in interface Model
- Overrides:
- setParam in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- key - the key to set
- val - the new ndarray
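A sketch of overwriting one parameter in place, reusing the hypothetical "W" key from above. Matching the existing shape and data type is assumed to be required, since the layer's parameters are views into the network's flat parameter array:

    import org.deeplearning4j.nn.api.Layer;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class SetParamExample {
        public static void zeroW(Layer layer) {
            INDArray w = layer.getParam("W");  // hypothetical key, as above
            // Replacement matches the existing shape and data type (assumed requirement)
            layer.setParam("W", Nd4j.zeros(w.dataType(), w.shape()));
        }
    }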
-
setParams
public void setParams(INDArray params)
Description copied from interface: Model
Set the parameters for this model. This expects a linear ndarray which is then unpacked internally, relative to the expected ordering of the model
- Specified by:
- setParams in interface Model
- Overrides:
- setParams in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- params - the parameters for the model
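A sketch of restoring all of a layer's parameters from one flat vector, for example weights stored externally; "savedFlatParams" is a hypothetical input:

    import org.deeplearning4j.nn.api.Layer;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class SetParamsExample {
        public static void restore(Layer layer, INDArray savedFlatParams) {
            if (savedFlatParams.length() != layer.numParams())
                throw new IllegalArgumentException("parameter vector length mismatch");
            layer.setParams(savedFlatParams);  // unpacked internally into the layer's params
        }
    }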
-
setParams
protected void setParams(INDArray params, char order)
- Overrides:
- setParams in class AbstractLayer<AbstractSameDiffLayer>
-
setParamsViewArray
public void setParamsViewArray(INDArray params)
Description copied from interface: Model
Set the initial parameters array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by:
- setParamsViewArray in interface Model
- Overrides:
- setParamsViewArray in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- params - a 1 x nParams row vector that is a view of the larger (MLN/CG) parameters array
-
getGradientsViewArray
public INDArray getGradientsViewArray()
- Specified by:
- getGradientsViewArray in interface Model
- Specified by:
- getGradientsViewArray in interface Trainable
- Overrides:
- getGradientsViewArray in class AbstractLayer<AbstractSameDiffLayer>
- Returns:
- 1D gradients view array
-
setBackpropGradientsViewArray
public void setBackpropGradientsViewArray(INDArray gradients)
Description copied from interface: Model
Set the gradients array as a view of the full (backprop) network parameters. NOTE: this is intended to be used internally in MultiLayerNetwork and ComputationGraph, not by users.
- Specified by:
- setBackpropGradientsViewArray in interface Model
- Overrides:
- setBackpropGradientsViewArray in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- gradients - a 1 x nParams row vector that is a view of the larger (MLN/CG) gradients array
-
setParamTable
public void setParamTable(Map<String,INDArray> paramTable)
Description copied from interface: Model
Setter for the param table
- Specified by:
- setParamTable in interface Model
- Overrides:
- setParamTable in class AbstractLayer<AbstractSameDiffLayer>
-
paramTable
public Map<String,INDArray> paramTable()
Description copied from interface: Model
The param table
- Specified by:
- paramTable in interface Model
- Overrides:
- paramTable in class AbstractLayer<AbstractSameDiffLayer>
- Returns:
- the param table
-
paramTable
public Map<String,INDArray> paramTable(boolean backpropParamsOnly)
Description copied from interface: Model
Table of parameters by key, for backprop. For many models (dense layers, etc.) all parameters are backprop parameters
- Specified by:
- paramTable in interface Model
- Specified by:
- paramTable in interface Trainable
- Overrides:
- paramTable in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- backpropParamsOnly - If true, return backprop params only. If false, return all params (equivalent to paramTable())
- Returns:
- Parameter table
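A sketch of iterating the parameter table. Per the description above, the backprop-only and full tables are expected to contain the same entries for this layer type (an assumption here):

    import java.util.Arrays;
    import java.util.Map;
    import org.deeplearning4j.nn.api.Layer;
    import org.nd4j.linalg.api.ndarray.INDArray;

    public class ParamTableExample {
        public static void listParams(Layer layer) {
            Map<String, INDArray> all = layer.paramTable();
            Map<String, INDArray> backprop = layer.paramTable(true);
            System.out.println("same keys: " + all.keySet().equals(backprop.keySet()));
            for (Map.Entry<String, INDArray> e : all.entrySet()) {
                System.out.println(e.getKey() + " -> " + Arrays.toString(e.getValue().shape()));
            }
        }
    }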
-
doInit
protected void doInit()
-
feedForwardMaskArray
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently; for example, bidirectional RNNs and normal RNNs operate differently with masks: the former sets activations to 0 outside of the data-present region (and keeps the mask active for future layers such as dense layers), whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case.
This is also used, for example, for networks that contain global pooling layers, arbitrary preprocessors, etc.
- Specified by:
- feedForwardMaskArray in interface Layer
- Overrides:
- feedForwardMaskArray in class AbstractLayer<AbstractSameDiffLayer>
- Parameters:
- maskArray - Mask array to set
- currentMaskState - Current state of the mask - see MaskState
- minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
- Returns:
- New mask array after this layer, along with the new mask state.
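A sketch of pushing a sequence mask through the layer, as MultiLayerNetwork does when training on variable-length sequences. The mask shape ([minibatch, timeSteps] = [32, 20]) and the Pair import path are assumptions for illustration:

    import java.util.Arrays;
    import org.deeplearning4j.nn.api.Layer;
    import org.deeplearning4j.nn.api.MaskState;
    import org.nd4j.common.primitives.Pair;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class MaskExample {
        public static void propagateMask(Layer layer) {
            INDArray mask = Nd4j.ones(32, 20);  // 1 = data present, 0 = padding
            Pair<INDArray, MaskState> out = layer.feedForwardMaskArray(mask, MaskState.Active, 32);
            INDArray nextMask = out.getFirst();  // mask to pass to the next layer
            System.out.println("mask state after layer: " + out.getSecond());
            System.out.println("next mask shape: " + Arrays.toString(nextMask.shape()));
        }
    }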