Class RnnOutputLayer
- java.lang.Object
  - org.deeplearning4j.nn.layers.AbstractLayer<LayerConfT>
    - org.deeplearning4j.nn.layers.BaseLayer<LayerConfT>
      - org.deeplearning4j.nn.layers.BaseOutputLayer<RnnOutputLayer>
        - org.deeplearning4j.nn.layers.recurrent.RnnOutputLayer
- All Implemented Interfaces:
Serializable, Cloneable, Classifier, Layer, IOutputLayer, Model, Trainable
public class RnnOutputLayer extends BaseOutputLayer<RnnOutputLayer>
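RnnOutputLayer is the implementation class for recurrent output layers and is normally created indirectly, via the configuration class org.deeplearning4j.nn.conf.layers.RnnOutputLayer. A minimal sketch of a typical setup follows; the layer sizes, activation, and loss function are illustrative assumptions, not fixed by this API:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.LSTM;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class RnnOutputLayerSetup {
        public static void main(String[] args) {
            // Illustrative sizes: 10 input features, 20 LSTM units, 5 classes
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .list()
                    .layer(new LSTM.Builder().nIn(10).nOut(20)
                            .activation(Activation.TANH).build())
                    // The config-side builder below produces the implementation
                    // class documented on this page
                    .layer(new org.deeplearning4j.nn.conf.layers.RnnOutputLayer.Builder(
                                    LossFunctions.LossFunction.MCXENT)
                            .activation(Activation.SOFTMAX).nIn(20).nOut(5).build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            // Inputs and labels for this network are 3d arrays of shape
            // [miniBatch, size, timeSeriesLength]
        }
    }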
- See Also:
- Serialized Form
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
Field Summary
-
Fields inherited from class org.deeplearning4j.nn.layers.BaseOutputLayer
inputMaskArray, inputMaskArrayState, labels
-
Fields inherited from class org.deeplearning4j.nn.layers.BaseLayer
gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, weightNoiseParams
-
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
Constructor Summary
Constructors:
RnnOutputLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary
INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
  Perform forward pass and return the activations array with the last set input
Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
  Calculate the gradient relative to the error in the next layer
INDArray computeScoreForExamples(double fullNetRegTerm, LayerWorkspaceMgr workspaceMgr)
  Compute the score for each example individually, after labels and input have been set.
double f1Score(INDArray examples, INDArray labels)
  Returns the f1 score for the given examples.
Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
  Feed forward the input mask array, setting in the layer as appropriate.
INDArray getInput()
protected INDArray getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType)
protected INDArray preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr)
void setMaskArray(INDArray maskArray)
  Set the mask array.
Layer.Type type()
  Returns the layer type
Methods inherited from class org.deeplearning4j.nn.layers.BaseOutputLayer
activate, applyMask, clear, computeGradientAndScore, computeScore, f1Score, fit, fit, fit, fit, fit, getLabels, gradient, gradientAndScore, hasBias, isPretrainLayer, needsLabels, numLabels, predict, predict, setLabels, setScoreWithZ
-
Methods inherited from class org.deeplearning4j.nn.layers.BaseLayer
calcRegularizationScore, clearNoiseWeightParams, clone, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, hasLayerNorm, layerConf, numParams, params, paramTable, paramTable, preOutput, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, toString, update, update
-
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer
addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, assertInputSet, backpropDropOutIfPresent, batchSize, close, conf, getConfig, getEpochCount, getHelper, getIndex, getInputMiniBatchSize, getListeners, getMaskArray, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, updaterDivideByMinibatch
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
allowInputModification, calcRegularizationScore, clearNoiseWeightParams, getEpochCount, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners
-
Methods inherited from interface org.deeplearning4j.nn.api.Model
addListeners, applyConstraints, batchSize, close, conf, fit, getGradientsViewArray, getOptimizer, getParam, init, input, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update
-
Methods inherited from interface org.deeplearning4j.nn.api.Trainable
getConfig, getGradientsViewArray, numParams, params, paramTable, updaterDivideByMinibatch
-
-
Constructor Detail
-
RnnOutputLayer
public RnnOutputLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Detail
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Specified by:
backpropGradient in interface Layer
- Overrides:
backpropGradient in class BaseOutputLayer<RnnOutputLayer>
- Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da = (dC/dz)*(dz/da), where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
- Returns:
- Pair<Gradient,INDArray>, where Gradient is the gradient for this layer, and INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiply by sigmaPrime(z). For a standard feed-forward layer L, return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
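As a rough sketch of calling this directly (normally MultiLayerNetwork drives backprop internally): `layer`, `features`, and `labels` are assumed to already exist, with the 3d shape [miniBatch, size, timeSeriesLength]. For an output layer the error signal is derived from the labels and loss function, so a null epsilon is the usual argument when this layer is last in the network:

    // Hedged sketch; assumes an initialized RnnOutputLayer `layer` and
    // 3d `features`/`labels` arrays of shape [miniBatch, size, tsLength]
    LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
    layer.setInput(features, mgr);
    layer.setLabels(labels);
    // Output layers compute their error from the labels, hence null epsilon
    Pair<Gradient, INDArray> p = layer.backpropGradient(null, mgr);
    Gradient paramGradients = p.getFirst();  // gradients for this layer's params
    INDArray epsilonBelow = p.getSecond();   // dL/dIn, passed to the layer below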
-
f1Score
public double f1Score(INDArray examples, INDArray labels)
Returns the f1 score for the given examples.
- Specified by:
f1Score in interface Classifier
- Overrides:
f1Score in class BaseOutputLayer<RnnOutputLayer>
- Parameters:
examples - the examples to classify (one example in each row)
labels - the true labels
- Returns:
- the f1 score for the given examples
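A one-line usage sketch; for an RNN output layer both arrays are assumed to be 3d, [miniBatch, nOut, timeSeriesLength]. For broader time-series evaluation, DL4J's Evaluation class is the more common route:

    // Assumes `layer` is initialized and the arrays match the layer's sizes
    double f1 = layer.f1Score(examples, labels);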
-
getInput
public INDArray getInput()
- Overrides:
getInput in class AbstractLayer<RnnOutputLayer>
-
type
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
- Specified by:
type in interface Layer
- Overrides:
type in class AbstractLayer<RnnOutputLayer>
- Returns:
- the layer type
-
preOutput2d
protected INDArray preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr)
- Overrides:
preOutput2d in class BaseOutputLayer<RnnOutputLayer>
-
getLabels2d
protected INDArray getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType)
- Specified by:
getLabels2d in class BaseOutputLayer<RnnOutputLayer>
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Specified by:
activate in interface Layer
- Overrides:
activate in class BaseLayer<RnnOutputLayer>
- Parameters:
training - training or test mode
workspaceMgr - Workspace manager
- Returns:
- the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager
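A minimal sketch of a standalone call, using LayerWorkspaceMgr.noWorkspaces() as the workspace manager; `layer` and `input3d` (shape [miniBatch, nIn, timeSeriesLength]) are assumed to exist:

    LayerWorkspaceMgr mgr = LayerWorkspaceMgr.noWorkspaces();
    layer.setInput(input3d, mgr);
    // training = false for inference; the output has shape
    // [miniBatch, nOut, timeSeriesLength]
    INDArray activations = layer.activate(false, mgr);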
-
setMaskArray
public void setMaskArray(INDArray maskArray)
Description copied from interface: Layer
Set the mask array. Note: In general, Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.
- Specified by:
setMaskArray in interface Layer
- Overrides:
setMaskArray in class AbstractLayer<RnnOutputLayer>
- Parameters:
maskArray - Mask array to set
-
feedForwardMaskArray
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting it in the layer as appropriate. This allows different layers to handle masks differently - for example, bidirectional RNNs and normal RNNs operate differently with masks (the former sets activations to 0 outside of the data-present region, and keeps the mask active for future layers such as dense layers, whereas normal RNNs don't zero out the activations/errors, instead relying on backpropagated error arrays to handle the variable-length case). This is also used, for example, for networks that contain global pooling layers, arbitrary preprocessors, etc.
- Specified by:
feedForwardMaskArray in interface Layer
- Overrides:
feedForwardMaskArray in class AbstractLayer<RnnOutputLayer>
- Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
- Returns:
- New mask array after this layer, along with the new mask state.
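A sketch of constructing and feeding a mask for variable-length sequences. The sizes are illustrative assumptions; RNN masks have shape [miniBatch, timeSeriesLength], with 1.0 where data is present and 0.0 for padding:

    // 3 examples, up to 10 time steps each
    INDArray mask = Nd4j.ones(3, 10);
    // Hypothetical: the third example only has 6 valid time steps
    mask.get(NDArrayIndex.point(2), NDArrayIndex.interval(6, 10)).assign(0);
    Pair<INDArray, MaskState> out =
            layer.feedForwardMaskArray(mask, MaskState.Active, 3);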
-
computeScoreForExamples
public INDArray computeScoreForExamples(double fullNetRegTerm, LayerWorkspaceMgr workspaceMgr)
Compute the score for each example individually, after labels and input have been set.
- Specified by:
computeScoreForExamples in interface IOutputLayer
- Overrides:
computeScoreForExamples in class BaseOutputLayer<RnnOutputLayer>
- Parameters:
fullNetRegTerm - Regularization score term for the entire network (or 0.0 to not include regularization)
- Returns:
- A column INDArray of shape [numExamples,1], where entry i is the score of the ith example
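Continuing the sketch from backpropGradient above (input and labels already set on the layer):

    // 0.0 regularization term -> raw per-example loss values
    INDArray scores = layer.computeScoreForExamples(0.0, mgr);
    // scores has shape [numExamples, 1]; scores.getDouble(i) is example i's score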