Package org.deeplearning4j.nn.api.layers
Interface RecurrentLayer
-
- All Superinterfaces:
  Cloneable, Layer, Model, Serializable, Trainable
- All Known Implementing Classes:
  BaseRecurrentLayer, BidirectionalLayer, GravesBidirectionalLSTM, GravesLSTM, LSTM, SimpleRnn
public interface RecurrentLayer extends Layer
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Method Summary
All Methods | Instance Methods | Abstract Methods

Modifier and Type  Method and Description

INDArray rnnActivateUsingStoredState(INDArray input, boolean training, boolean storeLastForTBPTT, LayerWorkspaceMgr workspaceMgr)
  Similar to rnnTimeStep, this method performs activations using the state stored in the stateMap as the initialization, without modifying it.

void rnnClearPreviousState()
  Reset/clear the stateMap for rnnTimeStep() and the tBpttStateMap for rnnActivateUsingStoredState().

Map<String,INDArray> rnnGetPreviousState()
  Returns a shallow copy of the RNN stateMap (which contains the stored history for use in methods such as rnnTimeStep).

Map<String,INDArray> rnnGetTBPTTState()
  Get the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer.

void rnnSetPreviousState(Map<String,INDArray> stateMap)
  Set the stateMap (stored history).

void rnnSetTBPTTState(Map<String,INDArray> state)
  Set the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer.

INDArray rnnTimeStep(INDArray input, LayerWorkspaceMgr workspaceMgr)
  Do one or more time steps using the previous time step state stored in stateMap.

Pair<Gradient,INDArray> tbpttBackpropGradient(INDArray epsilon, int tbpttBackLength, LayerWorkspaceMgr workspaceMgr)
  Truncated BPTT equivalent of Layer.backpropGradient().
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
activate, activate, allowInputModification, backpropGradient, calcRegularizationScore, clearNoiseWeightParams, feedForwardMaskArray, getEpochCount, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, isPretrainLayer, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setMaskArray, type
-
Methods inherited from interface org.deeplearning4j.nn.api.Model
addListeners, applyConstraints, batchSize, clear, close, computeGradientAndScore, conf, fit, fit, getGradientsViewArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update
-
Methods inherited from interface org.deeplearning4j.nn.api.Trainable
getConfig, getGradientsViewArray, numParams, params, paramTable, updaterDivideByMinibatch
-
Method Detail
-
rnnTimeStep
INDArray rnnTimeStep(INDArray input, LayerWorkspaceMgr workspaceMgr)
Do one or more time steps using the previous time step state stored in stateMap.
Can be used to efficiently do a forward pass one or n steps at a time, instead of always doing the forward pass from t=0.
If stateMap is empty, default initialization (usually zeros) is used.
Implementations also update stateMap at the end of this method.
- Parameters:
  input - Input to this layer
- Returns:
  activations
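To illustrate the contract described above, here is a minimal, hypothetical single-unit RNN sketch (not the DL4J implementation; the class and weights are invented for illustration). It shows that stepping one input at a time with stored state produces the same final activation as a full forward pass from t=0, and that an empty state falls back to zero initialization:

```java
// Hypothetical single-unit RNN sketching the rnnTimeStep contract:
// h_t = tanh(w*x_t + r*h_{t-1}), with h stored between calls.
class MiniRnn {
    final double w, r;   // input and recurrent weights (arbitrary values)
    Double state = null; // null plays the role of an empty stateMap

    MiniRnn(double w, double r) { this.w = w; this.r = r; }

    // Analogue of rnnTimeStep: read stored state, compute, then update it.
    double timeStep(double x) {
        double prev = (state == null) ? 0.0 : state; // empty state -> zeros
        double h = Math.tanh(w * x + r * prev);
        state = h; // implementations update stateMap at the end
        return h;
    }

    // Analogue of rnnClearPreviousState.
    void clearState() { state = null; }

    // Full forward pass from t=0 (zero initialization), for comparison.
    double forwardFromZero(double[] xs) {
        double h = 0.0;
        for (double x : xs) h = Math.tanh(w * x + r * h);
        return h;
    }
}
```

Feeding three inputs one step at a time leaves the same final activation as a single three-step forward pass from zeros, which is exactly why stepwise inference avoids recomputing from t=0.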
-
rnnGetPreviousState
Map<String,INDArray> rnnGetPreviousState()
Returns a shallow copy of the RNN stateMap (which contains the stored history for use in methods such as rnnTimeStep).
-
rnnSetPreviousState
void rnnSetPreviousState(Map<String,INDArray> stateMap)
Set the stateMap (stored history). Values set using this method will be used in the next call to rnnTimeStep().
-
rnnClearPreviousState
void rnnClearPreviousState()
Reset/clear the stateMap for rnnTimeStep() and the tBpttStateMap for rnnActivateUsingStoredState().
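The get/set/clear trio enables a snapshot-and-restore pattern. The sketch below is a hypothetical stand-in (a plain Map of double arrays instead of Map<String,INDArray>) showing the semantics: the getter returns a shallow copy, the setter installs history for the next time step, and clearing empties the map:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical layer sketching the stateMap get/set/clear contract.
class StatefulLayer {
    private final Map<String, double[]> stateMap = new HashMap<>();

    double timeStep(double x) {
        double[] h = stateMap.getOrDefault("prevAct", new double[]{0.0});
        double out = Math.tanh(0.5 * x + 0.25 * h[0]); // arbitrary weights
        stateMap.put("prevAct", new double[]{out});
        return out;
    }

    // Shallow copy: the map is new, but the stored arrays are shared.
    Map<String, double[]> getPreviousState() { return new HashMap<>(stateMap); }

    // Values set here are used by the next timeStep call.
    void setPreviousState(Map<String, double[]> s) {
        stateMap.clear();
        stateMap.putAll(s);
    }

    void clearPreviousState() { stateMap.clear(); }
}
```

Snapshotting the state, stepping further, then restoring the snapshot and replaying the same input reproduces the same activation, which is the typical use case for these methods (e.g. branching inference from a common prefix).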
-
rnnActivateUsingStoredState
INDArray rnnActivateUsingStoredState(INDArray input, boolean training, boolean storeLastForTBPTT, LayerWorkspaceMgr workspaceMgr)
Similar to rnnTimeStep, this method performs activations using the state stored in the stateMap as the initialization. However, unlike rnnTimeStep this method does not alter the stateMap; therefore, multiple calls to this method (with identical input) will:
(a) result in the same output
(b) leave the state maps (both stateMap and tBpttStateMap) in an identical state
- Parameters:
  input - Layer input
  training - if true: training. Otherwise: test
  storeLastForTBPTT - If true: store the final state in tBpttStateMap for use in truncated BPTT training
- Returns:
  Layer activations
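The read-only behavior of this method can be sketched as follows (again a hypothetical scalar stand-in, not the DL4J code): the layer reads stateMap for initialization but never writes it, so repeated calls are idempotent; only the storeLastForTBPTT flag causes a write, and that write goes to tBpttStateMap:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the rnnActivateUsingStoredState contract.
class ReadOnlyStepLayer {
    final Map<String, double[]> stateMap = new HashMap<>();
    final Map<String, double[]> tbpttStateMap = new HashMap<>();

    double[] activateUsingStoredState(double[] input, boolean storeLastForTBPTT) {
        // Initialize from stored state (zeros if empty), but never modify it.
        double h = stateMap.getOrDefault("prevAct", new double[]{0.0})[0];
        double[] out = new double[input.length];
        for (int t = 0; t < input.length; t++) {
            h = Math.tanh(0.5 * input[t] + 0.25 * h); // arbitrary weights
            out[t] = h;
        }
        // Only the TBPTT state map is (optionally) written.
        if (storeLastForTBPTT) tbpttStateMap.put("prevAct", new double[]{h});
        return out; // stateMap is untouched
    }
}
```

Calling the method twice with the same input yields identical outputs, matching properties (a) and (b) above.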
-
rnnGetTBPTTState
Map<String,INDArray> rnnGetTBPTTState()
Get the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer. The TBPTT state is used to store intermediate activations/state between parameter updates when doing TBPTT learning.
- Returns:
- State for the RNN layer
-
rnnSetTBPTTState
void rnnSetTBPTTState(Map<String,INDArray> state)
Set the RNN truncated backpropagation through time (TBPTT) state for the recurrent layer. The TBPTT state is used to store intermediate activations/state between parameter updates when doing TBPTT learning.
- Parameters:
state
- TBPTT state to set
-
tbpttBackpropGradient
Pair<Gradient,INDArray> tbpttBackpropGradient(INDArray epsilon, int tbpttBackLength, LayerWorkspaceMgr workspaceMgr)
Truncated BPTT equivalent of Layer.backpropGradient(). The primary difference is that, in the TBPTT context, the forward pass is done using stored state, rather than from zero initialization as in standard BPTT; accordingly, the backward pass propagates error at most tbpttBackLength steps into the past.
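The effect of the truncation can be shown with a deliberately simplified scalar example (hypothetical, not the DL4J implementation): for a linear recurrence h_t = w*x_t + r*h_{t-1}, the error flowing into h_{t-1} is r times the error at h_t, and truncation simply stops this backward recursion after tbpttBackLength steps:

```java
// Hypothetical scalar sketch of the truncation in tbpttBackpropGradient:
// the incoming error (epsilon) is propagated backwards through the
// recurrence for at most tbpttBackLength steps, not all the way to t=0.
class TbpttSketch {
    static double backpropagatedEpsilon(double epsilon, double r,
                                        int steps, int tbpttBackLength) {
        int k = Math.min(steps, tbpttBackLength); // truncate the window
        for (int i = 0; i < k; i++) epsilon *= r; // one factor of r per step
        return epsilon;
    }
}
```

With r < 1 the truncated gradient is larger than the fully backpropagated one, because fewer vanishing factors of r are applied; that trade-off (cheaper, less exact gradients) is the point of TBPTT.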
-