Class EmbeddingSequenceLayer
- java.lang.Object
  - org.deeplearning4j.nn.layers.AbstractLayer<LayerConfT>
    - org.deeplearning4j.nn.layers.BaseLayer<EmbeddingSequenceLayer>
      - org.deeplearning4j.nn.layers.feedforward.embedding.EmbeddingSequenceLayer
-
- All Implemented Interfaces:
  Serializable, Cloneable, Layer, Model, Trainable
public class EmbeddingSequenceLayer extends BaseLayer<EmbeddingSequenceLayer>
- See Also:
  Serialized Form
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
Field Summary
-
Fields inherited from class org.deeplearning4j.nn.layers.BaseLayer
gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, solver, weightNoiseParams
-
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
Constructor Summary
Constructors
  EmbeddingSequenceLayer(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary
All Methods | Instance Methods | Concrete Methods

Modifier and Type | Method | Description
INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) | Perform forward pass and return the activations array with the last set input
protected void | applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr) |
Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr) | Calculate the gradient relative to the error in the next layer
void | clear() | Clear input
boolean | hasBias() | Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration
boolean | isPretrainLayer() | Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
protected INDArray | preOutput(boolean training, LayerWorkspaceMgr workspaceMgr) |
Layer.Type | type() | Returns the layer type
-
Methods inherited from class org.deeplearning4j.nn.layers.BaseLayer
calcRegularizationScore, clearNoiseWeightParams, clone, computeGradientAndScore, fit, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, gradient, hasLayerNorm, layerConf, numParams, params, paramTable, paramTable, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, setScoreWithZ, toString, update, update
-
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer
activate, addListeners, allowInputModification, applyConstraints, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, close, conf, feedForwardMaskArray, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, gradientAndScore, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, updaterDivideByMinibatch
-
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
getIterationCount, setIterationCount
-
Constructor Detail
-
EmbeddingSequenceLayer
public EmbeddingSequenceLayer(NeuralNetConfiguration conf, DataType dataType)
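In practice this constructor is rarely called directly: DL4J instantiates the implementation layer when a network is initialized from its configuration. A minimal construction sketch under that assumption, using the configuration-side class of the same name (org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer); the vocabulary size, embedding dimension, and the downstream RnnOutputLayer are illustrative choices, not part of this API:

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer;
    import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class EmbeddingSequenceSketch {
        public static void main(String[] args) {
            int vocabSize = 10_000;   // nIn: valid token indices are [0, vocabSize)
            int embeddingDim = 128;   // nOut: width of each embedding vector

            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .list()
                    // Configuration-side embedding layer; net.init() below creates
                    // the implementation class documented on this page from it.
                    .layer(new EmbeddingSequenceLayer.Builder()
                            .nIn(vocabSize)
                            .nOut(embeddingDim)
                            .build())
                    // Illustrative downstream layer: the embedding output is in
                    // RNN format, [minibatch, embeddingDim, seqLength].
                    .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                            .nIn(embeddingDim)
                            .nOut(3)
                            .activation(Activation.SOFTMAX)
                            .build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
        }
    }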
-
Method Detail
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Specified by:
  backpropGradient in interface Layer
- Overrides:
  backpropGradient in class BaseLayer<EmbeddingSequenceLayer>
- Parameters:
  epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
  workspaceMgr - Workspace manager
- Returns:
  Pair<Gradient,INDArray>, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
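Restated as equations (a clarifying sketch of the notation above, following the standard backprop convention where C is the cost, z^(L) the pre-activations, and a^(L) = sigma(z^(L)) the activations of layer L):

    \begin{align*}
    \epsilon^{(L)} &= w^{(L+1)}\,\delta^{(L+1)} \;=\; \frac{\partial C}{\partial a^{(L)}}
      && \text{(the epsilon passed in to this method)}\\
    \delta^{(L)} &= \epsilon^{(L)} \odot \sigma'\bigl(z^{(L)}\bigr)
      && \text{(the element-wise multiply the return value omits)}\\
    \mathtt{return.getSecond()} &= \frac{\partial L}{\partial \mathrm{In}}
      \;=\; \bigl(w^{(L)}\,(\delta^{(L)})^{T}\bigr)^{T}
      && \text{(the epsilon for the layer below)}
    \end{align*}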
-
preOutput
protected INDArray preOutput(boolean training, LayerWorkspaceMgr workspaceMgr)
- Overrides:
  preOutput in class BaseLayer<EmbeddingSequenceLayer>
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Specified by:
  activate in interface Layer
- Overrides:
  activate in class BaseLayer<EmbeddingSequenceLayer>
- Parameters:
  training - training or test mode
  workspaceMgr - Workspace manager
- Returns:
  the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
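A forward-pass sketch, continuing the construction example under Constructor Detail (the variable net is assumed from there; token values and shapes are illustrative). The input is a rank-2 array of integer token indices, and MultiLayerNetwork.feedForward(...) drives activate(...) on each layer:

    import java.util.List;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    // Two example sequences of four token indices each: shape [2, 4].
    INDArray tokenIndices = Nd4j.createFromArray(new int[][] {
        {12, 7, 993, 4},
        {8, 8, 31, 0}
    });

    // feedForward returns the activations of every layer; index 0 is the
    // input itself, index 1 the embedding layer's output.
    List<INDArray> activations = net.feedForward(tokenIndices);
    INDArray embeddings = activations.get(1);
    // Shape is [minibatch, embeddingDim, seqLength] = [2, 128, 4] here.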
-
hasBias
public boolean hasBias()
Description copied from class: BaseLayer
Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration
- Overrides:
  hasBias in class BaseLayer<EmbeddingSequenceLayer>
- Returns:
  True if a bias term is present, false otherwise
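The no-bias variant mentioned above is chosen on the configuration side. A sketch, assuming the configuration builder exposes a hasBias(boolean) option (as the embedding layer builders do):

    import org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer;

    // Build a bias-free embedding configuration; the resulting
    // implementation layer's hasBias() then returns false.
    EmbeddingSequenceLayer noBiasEmbedding = new EmbeddingSequenceLayer.Builder()
        .nIn(10_000)
        .nOut(128)
        .hasBias(false)
        .build();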
-
isPretrainLayer
public boolean isPretrainLayer()
Description copied from interface: Layer
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
- Returns:
  true if the layer can be pretrained (using fit(INDArray)), false otherwise
-
applyDropOutIfNecessary
protected void applyDropOutIfNecessary(boolean training, LayerWorkspaceMgr workspaceMgr)
- Overrides:
  applyDropOutIfNecessary in class AbstractLayer<EmbeddingSequenceLayer>
-
type
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
- Specified by:
  type in interface Layer
- Overrides:
  type in class AbstractLayer<EmbeddingSequenceLayer>
- Returns:
  the layer type
-