Class TimeDistributedLayer

java.lang.Object
  org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer
    org.deeplearning4j.nn.layers.recurrent.TimeDistributedLayer

All Implemented Interfaces:
Serializable, Cloneable, Layer, Model, Trainable

public class TimeDistributedLayer extends BaseWrapperLayer

See Also:
Serialized Form
Nested Class Summary

Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer:
Layer.TrainingMode, Layer.Type

Field Summary

Fields inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer:
underlying

Constructor Summary

TimeDistributedLayer(Layer underlying, RNNFormat rnnDataFormat)
Method Summary

INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
    Perform forward pass and return the activations array with the last set input.

INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
    Perform forward pass and return the activations array with the specified input.

Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
    Calculate the gradient relative to the error in the next layer.

Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
    Feed forward the input mask array, setting it in the layer as appropriate.

protected int[] permuteAxes(int rank, int timeAxis)

protected INDArray reshape(INDArray array)

protected INDArray revertReshape(INDArray toRevert, long minibatch)

void setMaskArray(INDArray maskArray)
    Set the mask array.

Methods inherited from class org.deeplearning4j.nn.layers.wrapper.BaseWrapperLayer:
addListeners, allowInputModification, applyConstraints, batchSize, calcRegularizationScore, clear, clearNoiseWeightParams, close, computeGradientAndScore, conf, fit, fit, getConfig, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, isPretrainLayer, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setParam, setParams, setParamsViewArray, setParamTable, type, update, update, updaterDivideByMinibatch
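The protected reshape and revertReshape helpers suggest how this wrapper works: merge the minibatch and time dimensions so the underlying feed-forward layer sees a 2D [minibatch*timeSteps, features] input, then split the result back into a time series. Below is a plain-Java sketch of that round trip on nested arrays, as an illustration only; it is not DL4J's INDArray-based implementation, and the names here (mergeBatchAndTime, splitBatchAndTime) are hypothetical.

```java
import java.util.Arrays;

/** Conceptual sketch of the time-distributed reshape: merge batch and time
 *  into one dimension, then split them back after the wrapped layer runs. */
public class ReshapeSketch {

    /** [mb][nIn][T] -> [mb*T][nIn]: row (m*T + t) holds the features of
     *  time step t of example m. */
    static double[][] mergeBatchAndTime(double[][][] in) {
        int mb = in.length, nIn = in[0].length, T = in[0][0].length;
        double[][] out = new double[mb * T][nIn];
        for (int m = 0; m < mb; m++)
            for (int t = 0; t < T; t++)
                for (int f = 0; f < nIn; f++)
                    out[m * T + t][f] = in[m][f][t];
        return out;
    }

    /** Inverse of mergeBatchAndTime, given the original minibatch size
     *  (mirroring the minibatch argument of revertReshape). */
    static double[][][] splitBatchAndTime(double[][] in, int mb) {
        int T = in.length / mb, nIn = in[0].length;
        double[][][] out = new double[mb][nIn][T];
        for (int m = 0; m < mb; m++)
            for (int t = 0; t < T; t++)
                for (int f = 0; f < nIn; f++)
                    out[m][f][t] = in[m * T + t][f];
        return out;
    }

    public static void main(String[] args) {
        double[][][] x = { { {1, 2}, {3, 4} } };   // mb=1, nIn=2, T=2
        double[][] flat = mergeBatchAndTime(x);
        System.out.println(Arrays.deepToString(flat));  // [[1.0, 3.0], [2.0, 4.0]]
        // Round-trips back to the original time series:
        System.out.println(Arrays.deepToString(splitBatchAndTime(flat, 1)));
    }
}
```

Each 2D row is one time step of one example, so the wrapped layer is applied identically and independently at every step.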
Method Detail

backpropGradient

public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.

Specified by:
backpropGradient in interface Layer
Overrides:
backpropGradient in class BaseWrapperLayer
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns:
Pair where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
activate

public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.

Specified by:
activate in interface Layer
Overrides:
activate in class BaseWrapperLayer
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns:
The activations (layer output) for the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
activate

public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)

Description copied from interface: Layer
Perform forward pass and return the activations array with the specified input.

Specified by:
activate in interface Layer
Overrides:
activate in class BaseWrapperLayer
Parameters:
input - the input to use
training - train or test mode
workspaceMgr - Workspace manager
Returns:
Activations array. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
permuteAxes
protected int[] permuteAxes(int rank, int timeAxis)
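No description is provided for permuteAxes. One plausible reading, sketched in plain Java below, is that it builds an axis order that moves the time axis next to the minibatch axis so that reshape can then merge the two into a single dimension. This is a hypothetical illustration of that idea, not guaranteed to match DL4J's actual permutation.

```java
import java.util.Arrays;

/** Hypothetical sketch of a permutation like permuteAxes(rank, timeAxis):
 *  move the time axis to position 1, directly after the minibatch axis,
 *  keeping the remaining axes in their original order. */
public class PermuteAxesSketch {

    static int[] permuteAxesSketch(int rank, int timeAxis) {
        int[] order = new int[rank];
        order[0] = 0;          // minibatch axis stays first
        order[1] = timeAxis;   // time axis moves next to it
        int idx = 2;
        for (int i = 1; i < rank; i++)
            if (i != timeAxis) order[idx++] = i;
        return order;
    }

    public static void main(String[] args) {
        // NCW input [mb, channels, time]: time axis 2 moves to position 1
        System.out.println(Arrays.toString(permuteAxesSketch(3, 2)));  // [0, 2, 1]
        // NWC input [mb, time, channels]: time is already next to the batch axis
        System.out.println(Arrays.toString(permuteAxesSketch(3, 1)));  // [0, 1, 2]
    }
}
```

This would explain why the constructor takes an RNNFormat argument: NCW and NWC inputs place the time axis differently, so the permutation depends on the data format.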
setMaskArray

public void setMaskArray(INDArray maskArray)

Description copied from interface: Layer
Set the mask array. Note: in general, Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.

Specified by:
setMaskArray in interface Layer
Overrides:
setMaskArray in class BaseWrapperLayer
Parameters:
maskArray - Mask array to set
feedForwardMaskArray

public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)

Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently. For example, bidirectional RNNs and normal RNNs operate differently with masks: the former set activations to 0 outside of the data-present region (and keep the mask active for later layers such as dense layers), whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case. This is also used, for example, for networks that contain global pooling layers, arbitrary preprocessors, etc.

Specified by:
feedForwardMaskArray in interface Layer
Overrides:
feedForwardMaskArray in class BaseWrapperLayer
Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
Returns:
New mask array after this layer, along with the new mask state.
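For a wrapper that merges batch and time dimensions, the natural mask handling is to flatten a per-example-per-step mask of shape [minibatch, timeSteps] into one entry per reshaped row, so it lines up with the [minibatch*timeSteps, features] activations. The sketch below illustrates that flattening in plain Java; it is a hypothetical illustration, not DL4J's exact mask logic.

```java
import java.util.Arrays;

/** Hypothetical sketch: flatten a [mb][T] time-series mask to [mb*T] so that
 *  entry (m*T + t) masks the reshaped row for step t of example m. */
public class MaskFlattenSketch {

    static double[] flattenMask(double[][] mask) {
        int mb = mask.length, T = mask[0].length;
        double[] out = new double[mb * T];
        for (int m = 0; m < mb; m++)
            for (int t = 0; t < T; t++)
                out[m * T + t] = mask[m][t];  // 1 = data present, 0 = padding
        return out;
    }

    public static void main(String[] args) {
        // Two examples of length 3: the first has 2 real steps, the second has 1
        double[][] mask = { {1, 1, 0}, {1, 0, 0} };
        System.out.println(Arrays.toString(flattenMask(mask)));
        // [1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
    }
}
```

This also motivates the minibatchSize parameter above: once batch and time are merged, the minibatch size can no longer be read off the activations array, yet it is needed to undo the flattening.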