public class TimeDistributedLayer extends BaseWrapperLayer
Usage: `.layer(new TimeDistributed(new DenseLayer.Builder()....build(), timeAxis))`

Nested classes inherited from interface Layer: Layer.TrainingMode, Layer.Type

Fields inherited from class BaseWrapperLayer: underlying

| Constructor and Description |
|---|
| TimeDistributedLayer(Layer underlying, RNNFormat rnnDataFormat) |
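For context, a minimal configuration sketch showing a dense layer applied independently at each time step, following the usage line above. The LSTM, the layer sizes, the activation choices, and the timeAxis value of 2 (the time dimension for [minibatch, size, timeSeriesLength] activations) are illustrative assumptions, not taken from this page:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.conf.layers.recurrent.TimeDistributed;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

int timeAxis = 2; // assumed time dimension for [minibatch, size, timeSeriesLength] activations

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new LSTM.Builder().nIn(10).nOut(32).build())
        // The wrapped DenseLayer is applied to every time step independently
        .layer(new TimeDistributed(new DenseLayer.Builder()
                .nIn(32).nOut(16).activation(Activation.RELU).build(), timeAxis))
        .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(16).nOut(5).activation(Activation.SOFTMAX).build())
        .build();
```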
| Modifier and Type | Method and Description |
|---|---|
| INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr)<br>Perform forward pass and return the activations array with the last set input |
| INDArray | activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)<br>Perform forward pass and return the activations array with the specified input |
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)<br>Calculate the gradient relative to the error in the next layer |
| Pair<INDArray,MaskState> | feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)<br>Feed forward the input mask array, setting it in the layer as appropriate. |
| protected int[] | permuteAxes(int rank, int timeAxis) |
| protected INDArray | reshape(INDArray array) |
| protected INDArray | revertReshape(INDArray toRevert, long minibatch) |
| void | setMaskArray(INDArray maskArray)<br>Set the mask array. |
Methods inherited from class BaseWrapperLayer: addListeners, allowInputModification, applyConstraints, batchSize, calcRegularizationScore, clear, clearNoiseWeightParams, close, computeGradientAndScore, conf, fit, fit, getConfig, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, isPretrainLayer, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setListeners, setParam, setParams, setParamsViewArray, setParamTable, type, update, update, updaterDivideByMinibatch

public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Calculate the gradient relative to the error in the next layer.
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseWrapperLayer
Parameters:
epsilon - w^(L+1)*delta^(L+1), or equivalently dC/da: (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation
workspaceMgr - Workspace manager
Returns: the gradient for this layer and the epsilon (activation gradient) for the layer below; the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
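Spelled out as a display equation (a reconstruction of the inline formula above, reading delta^(L+1) as dC/dz^(L+1)):

```latex
\epsilon^{(L)} \;=\; w^{(L+1)}\,\delta^{(L+1)}
\;=\; \frac{\partial C}{\partial a^{(L)}}
\;=\; \frac{\partial C}{\partial z^{(L+1)}} \cdot \frac{\partial z^{(L+1)}}{\partial a^{(L)}},
\qquad a^{(L)} = \sigma\!\left(z^{(L)}\right)
```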
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)

Perform forward pass and return the activations array with the last set input.
Specified by: activate in interface Layer
Overrides: activate in class BaseWrapperLayer
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns: the activations array, which should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager

public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
Perform forward pass and return the activations array with the specified input.
Specified by: activate in interface Layer
Overrides: activate in class BaseWrapperLayer
Parameters:
input - the input to use
training - train or test mode
workspaceMgr - Workspace manager
Returns: the activations array, which should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager

protected int[] permuteAxes(int rank, int timeAxis)
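permuteAxes, reshape, and revertReshape implement the wrapper's core trick: fold the time axis into the minibatch axis so the underlying feed-forward layer sees one 2d row per time step. A rough ND4J sketch of that round trip follows; the shapes and the dup-to-c-order step are assumptions for illustration, not this class's exact code:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

int mb = 32, size = 128, tsLength = 20;
INDArray input = Nd4j.rand(new int[]{mb, size, tsLength}); // NCW: [minibatch, size, timeSeriesLength]

// Move the time axis next to the minibatch axis, then flatten the two together
INDArray permuted = input.permute(0, 2, 1);                     // [mb, tsLength, size]
INDArray as2d = permuted.dup('c').reshape(mb * tsLength, size); // one row per (example, step)

// ... the underlying feed-forward layer would run on as2d here ...
INDArray out2d = as2d; // placeholder standing in for the underlying layer's output

// Undo the reshape so downstream layers see a time series again
INDArray reverted = out2d.reshape(mb, tsLength, size).permute(0, 2, 1); // [mb, size, tsLength]
```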
public void setMaskArray(INDArray maskArray)
Set the mask array. Note: in general, Layer.feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.
Specified by: setMaskArray in interface Layer
Overrides: setMaskArray in class BaseWrapperLayer
Parameters:
maskArray - Mask array to set

public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Feed forward the input mask array, setting it in the layer as appropriate.
Specified by: feedForwardMaskArray in interface Layer
Overrides: feedForwardMaskArray in class BaseWrapperLayer
Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
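For illustration, a feature mask for two variable-length sequences might be built like the sketch below. In practice the mask usually arrives with the DataSet, and feedForwardMaskArray is invoked internally during the forward pass; the sizes here are arbitrary assumptions:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// [minibatch, timeSeriesLength] mask: 1.0 marks a real time step, 0.0 marks padding
INDArray mask = Nd4j.zeros(2, 5);
mask.putRow(0, Nd4j.ones(5));      // example 0: all 5 steps are valid
for (int t = 0; t < 3; t++) {
    mask.putScalar(1, t, 1.0);     // example 1: only the first 3 steps are valid
}
```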