public class SimpleRnn extends BaseRecurrentLayer<SimpleRnn>
Nested classes/interfaces inherited from interface Layer:
Layer.TrainingMode, Layer.Type
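SimpleRnn is the "vanilla" recurrent layer implementation, computing out_t = activationFn(in_t * inWeight + out_(t-1) * recurrentWeights + bias) at each time step. Instances are normally created from the configuration class org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn rather than via the constructor below. A minimal configuration sketch (the nIn/nOut sizes and activations are illustrative, not prescriptive):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.conf.layers.recurrent.SimpleRnn;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SimpleRnnExample {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Configuration counterpart of this layer; sizes are illustrative
                .layer(new SimpleRnn.Builder()
                        .nIn(10).nOut(20)
                        .activation(Activation.TANH)
                        .build())
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(20).nOut(5)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
    }
}
```

The code sketches for the methods below continue from this example and assume the same imports plus org.nd4j.linalg.api.ndarray.INDArray, org.nd4j.linalg.factory.Nd4j, and org.nd4j.linalg.api.buffer.DataType.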
Modifier and Type | Field and Description |
---|---|
static String | STATE_KEY_PREV_ACTIVATION |
Fields inherited from class org.deeplearning4j.nn.layers.recurrent.BaseRecurrentLayer:
helperCountFail, stateMap, tBpttStateMap

Fields inherited from class org.deeplearning4j.nn.layers.BaseLayer:
gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams

Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer:
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
Constructor and Description |
---|
SimpleRnn(NeuralNetConfiguration conf, DataType dataType) |
Modifier and Type | Method and Description |
---|---|
INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the last set input |
Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer |
boolean | hasLayerNorm(): Returns true if this layer supports layer normalization and has it enabled; only Dense and SimpleRnn layers support layer normalization |
boolean | isPretrainLayer(): Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.) |
INDArray | rnnActivateUsingStoredState(INDArray input, boolean training, boolean storeLastForTBPTT, LayerWorkspaceMgr workspaceMgr): Similar to rnnTimeStep, but computes activations using the state stored in stateMap as the initialization |
INDArray | rnnTimeStep(INDArray input, LayerWorkspaceMgr workspaceMgr): Do one or more time steps using the previous time step state stored in stateMap; can be used to efficiently run the forward pass one or n steps at a time, instead of always starting from t=0. If stateMap is empty, default initialization (usually zeros) is used; implementations also update stateMap at the end of this method |
Pair<Gradient,INDArray> | tbpttBackpropGradient(INDArray epsilon, int tbpttBackLength, LayerWorkspaceMgr workspaceMgr): Truncated BPTT equivalent of Layer.backpropGradient() |
Methods inherited from class org.deeplearning4j.nn.layers.recurrent.BaseRecurrentLayer:
getDataFormat, permuteIfNWC, rnnClearPreviousState, rnnGetPreviousState, rnnGetTBPTTState, rnnSetPreviousState, rnnSetTBPTTState

Methods inherited from class org.deeplearning4j.nn.layers.BaseLayer:
calcRegularizationScore, clear, clearNoiseWeightParams, clone, computeGradientAndScore, fit, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, hasBias, layerConf, numParams, params, paramTable, paramTable, preOutput, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, update, update

Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer:
activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, close, conf, feedForwardMaskArray, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, updaterDivideByMinibatch

Methods inherited from class java.lang.Object:
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.deeplearning4j.nn.api.Layer:
activate, allowInputModification, calcRegularizationScore, clearNoiseWeightParams, feedForwardMaskArray, getEpochCount, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setMaskArray, type

Methods inherited from interface org.deeplearning4j.nn.api.Model:
addListeners, applyConstraints, batchSize, clear, close, computeGradientAndScore, conf, fit, fit, getGradientsViewArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update

Methods inherited from interface org.deeplearning4j.nn.api.Trainable:
getConfig, getGradientsViewArray, numParams, params, paramTable, updaterDivideByMinibatch
Field Detail

public static final String STATE_KEY_PREV_ACTIVATION
Constructor Detail

public SimpleRnn(NeuralNetConfiguration conf, DataType dataType)
Method Detail

public INDArray rnnTimeStep(INDArray input, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: RecurrentLayer
Do one or more time steps using the previous time step state stored in stateMap. Can be used to efficiently run the forward pass one or n steps at a time, instead of always starting from t=0. If stateMap is empty, default initialization (usually zeros) is used. Implementations also update stateMap at the end of this method.
Specified by:
rnnTimeStep in interface RecurrentLayer
Parameters:
input - Input to this layer
Returns:
activations
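In practice this method is usually driven through MultiLayerNetwork.rnnTimeStep(INDArray), which delegates to each recurrent layer's rnnTimeStep. A sketch of stateful streaming inference, continuing from the network `net` built in the example above:

```java
// Feed one time step at a time; input shape is [miniBatch, nIn, timeSteps]
INDArray step = Nd4j.rand(DataType.FLOAT, 1, 10, 1);
INDArray out1 = net.rnnTimeStep(step);   // first call: default (zero) initialization
INDArray out2 = net.rnnTimeStep(step);   // subsequent calls continue from stateMap
net.rnnClearPreviousState();             // reset state before an unrelated sequence
```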
public INDArray rnnActivateUsingStoredState(INDArray input, boolean training, boolean storeLastForTBPTT, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: RecurrentLayer
Similar to rnnTimeStep, this method computes activations using the state stored in the stateMap as the initialization.
Specified by:
rnnActivateUsingStoredState in interface RecurrentLayer
Parameters:
input - Layer input
training - if true: training. Otherwise: test
storeLastForTBPTT - If true: store the final state in tBpttStateMap for use in truncated BPTT training
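The network-level counterpart is MultiLayerNetwork.rnnActivateUsingStoredState(INDArray, boolean, boolean), which returns the activations of every layer. A sketch, continuing the running example (java.util.List is also assumed imported):

```java
// Input shape [miniBatch, nIn, timeSteps]; values are illustrative
INDArray sequence = Nd4j.rand(DataType.FLOAT, 1, 10, 30);
// Activations for every layer, using the stored state as initialization;
// storeLastForTBPTT = true also records the final state in tBpttStateMap
List<INDArray> acts = net.rnnActivateUsingStoredState(sequence, false, true);
INDArray output = acts.get(acts.size() - 1);   // output-layer activations
```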
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.
Specified by:
backpropGradient in interface Layer
Overrides:
backpropGradient in class BaseLayer<SimpleRnn>
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns:
Pair of (Gradient, epsilon), where epsilon is the activation gradient for the layer below; the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
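backpropGradient is invoked internally during fit(); calling it by hand is rarely needed, but a rough sketch of the mechanics follows. Here `epsilonFromAbove` is a stand-in for the dC/da array handed down by the layer above, Gradient is org.deeplearning4j.nn.gradient.Gradient, and Pair is org.nd4j.common.primitives.Pair in recent versions:

```java
Layer layer = net.getLayer(0);                            // the SimpleRnn layer
LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces(); // no workspace scoping
layer.setInput(sequence, mgr);
layer.activate(true, mgr);                                // forward pass must run first
Pair<Gradient, INDArray> result = layer.backpropGradient(epsilonFromAbove, mgr);
Gradient grads = result.getFirst();                       // parameter gradients for this layer
INDArray epsOut = result.getSecond();                     // dC/da for the layer below
```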
public Pair<Gradient,INDArray> tbpttBackpropGradient(INDArray epsilon, int tbpttBackLength, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: RecurrentLayer
Truncated BPTT equivalent of Layer.backpropGradient().
Specified by:
tbpttBackpropGradient in interface RecurrentLayer
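This method is used when the network is configured for truncated BPTT. A sketch of enabling that mode (the segment length of 50 is illustrative; org.deeplearning4j.nn.conf.BackpropType is assumed imported):

```java
MultiLayerConfiguration tbpttConf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new SimpleRnn.Builder().nIn(10).nOut(20).build())
        .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(20).nOut(5).activation(Activation.SOFTMAX).build())
        .backpropType(BackpropType.TruncatedBPTT)   // fit() then uses tbpttBackpropGradient
        .tBPTTForwardLength(50)
        .tBPTTBackwardLength(50)
        .build();
```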
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.).
Specified by:
isPretrainLayer in interface Layer
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input.
Specified by:
activate in interface Layer
Overrides:
activate in class BaseLayer<SimpleRnn>
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns:
the activation (layer output) array; the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
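A direct call looks like the following sketch; the input-taking overload activate(INDArray, boolean, LayerWorkspaceMgr) sets the input and activates in one step:

```java
// Forward pass over a full sequence; input shape [miniBatch, nIn, timeSteps]
INDArray seq = Nd4j.rand(DataType.FLOAT, 1, 10, 30);
Layer rnn = net.getLayer(0);
INDArray activations = rnn.activate(seq, false, LayerWorkspaceMgr.noWorkspaces());
// Resulting shape: [miniBatch, nOut, timeSteps]
```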
public boolean hasLayerNorm()
Description copied from class: BaseLayer
Returns true if this layer supports layer normalization and has it enabled. Only Dense and SimpleRnn layers support layer normalization.
Overrides:
hasLayerNorm in class BaseLayer<SimpleRnn>
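Layer normalization is switched on from the configuration class. A sketch, assuming the Builder exposes a hasLayerNorm flag (present in recent DL4J releases; verify against your version):

```java
SimpleRnn normalizedRnn = new SimpleRnn.Builder()
        .nIn(10).nOut(20)
        .hasLayerNorm(true)   // assumed builder flag enabling layer normalization
        .build();
```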