Class Upsampling1D
- java.lang.Object
  - org.deeplearning4j.nn.layers.AbstractLayer&lt;Upsampling2D&gt;
    - org.deeplearning4j.nn.layers.convolution.upsampling.Upsampling2D
      - org.deeplearning4j.nn.layers.convolution.upsampling.Upsampling1D
-
- All Implemented Interfaces:
Serializable, Cloneable, Layer, Model, Trainable
public class Upsampling1D extends Upsampling2D
- See Also:
  - Serialized Form
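Upsampling1D repeats each timestep of its input a fixed number of times along the time (length) axis. The following is a minimal plain-Java sketch of that operation, for illustration only: the actual layer operates on ND4J INDArrays of shape [minibatch, channels, length], and the class name and array layout ([timesteps][channels]) below are assumptions made for the sketch.

```java
import java.util.Arrays;

// Illustrative sketch of 1D upsampling: each timestep is copied
// `size` times along the time axis. Shapes here are
// [timesteps][channels]; the real DL4J layer works on INDArrays.
public class Upsampling1DSketch {
    static double[][] upsample1d(double[][] input, int size) {
        double[][] out = new double[input.length * size][];
        for (int t = 0; t < input.length; t++) {
            for (int k = 0; k < size; k++) {
                // Copy timestep t into the k-th repeated slot
                out[t * size + k] = input[t].clone();
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1.0, 2.0}, {3.0, 4.0}};
        // With size = 2, 2 timesteps become 4: each row appears twice
        System.out.println(Arrays.deepToString(upsample1d(x, 2)));
    }
}
```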
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.deeplearning4j.nn.api.Layer
Layer.TrainingMode, Layer.Type
-
-
Field Summary
-
Fields inherited from class org.deeplearning4j.nn.layers.AbstractLayer
cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
-
-
Constructor Summary
Constructors:
- Upsampling1D(NeuralNetConfiguration conf, DataType dataType)
-
Method Summary
All Methods · Instance Methods · Concrete Methods

- INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr): Perform forward pass and return the activations array with the last set input
- Pair&lt;Gradient,INDArray&gt; backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr): Calculate the gradient relative to the error in the next layer
- protected CNN2DFormat getFormat()
- protected int[] getSize()
- protected INDArray preOutput(boolean training, boolean forBackprop, LayerWorkspaceMgr workspaceMgr)
Methods inherited from class org.deeplearning4j.nn.layers.convolution.upsampling.Upsampling2D
clearNoiseWeightParams, fit, fit, getParam, gradient, isPretrainLayer, numParams, params, score, setParams, type, update
-
Methods inherited from class org.deeplearning4j.nn.layers.AbstractLayer
activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, calcRegularizationScore, clear, close, computeGradientAndScore, conf, feedForwardMaskArray, getConfig, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, gradientAndScore, init, input, layerConf, layerId, numParams, paramTable, paramTable, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, update, updaterDivideByMinibatch
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface org.deeplearning4j.nn.api.Layer
getIterationCount, setIterationCount
-
-
-
-
Constructor Detail
-
Upsampling1D
public Upsampling1D(NeuralNetConfiguration conf, DataType dataType)
-
-
Method Detail
-
getFormat
protected CNN2DFormat getFormat()
- Overrides:
getFormat in class Upsampling2D
-
backpropGradient
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
- Specified by:
  backpropGradient in interface Layer
- Overrides:
  backpropGradient in class Upsampling2D
- Parameters:
  epsilon - w^(L+1)*delta^(L+1). Or, equivalently: dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
  workspaceMgr - Workspace manager
- Returns:
  Pair&lt;Gradient,INDArray&gt;, where Gradient is the gradient for this layer and INDArray is the epsilon (activation gradient) needed by the next layer, but before the element-wise multiply by sigmaPrime(z). So for a standard feed-forward layer, if this layer is L, then return.getSecond() == dL/dIn = (w^(L)*(delta^(L))^T)^T. Note that the returned array should be placed in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager.
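Because the forward pass copies each input timestep `size` times, the backward pass reduces the incoming epsilon by summing over each group of `size` repeated positions. A plain-Java sketch of that reduction follows; the class name and [timesteps][channels] layout are assumptions for illustration, and the actual implementation operates on INDArrays in the workspace manager's ACTIVATION_GRAD workspace.

```java
import java.util.Arrays;

// Illustrative backward pass for 1D upsampling: the gradient for an
// input timestep is the sum of the output gradients over the `size`
// positions that timestep was copied to in the forward pass.
public class Upsampling1DBackpropSketch {
    static double[][] backprop1d(double[][] epsilon, int size) {
        int inLength = epsilon.length / size;
        int channels = epsilon[0].length;
        double[][] grad = new double[inLength][channels];
        for (int t = 0; t < inLength; t++) {
            for (int k = 0; k < size; k++) {
                for (int c = 0; c < channels; c++) {
                    // Accumulate the k-th copy's gradient into timestep t
                    grad[t][c] += epsilon[t * size + k][c];
                }
            }
        }
        return grad;
    }

    public static void main(String[] args) {
        double[][] eps = {{1.0}, {2.0}, {3.0}, {4.0}};
        // size = 2: positions {0,1} sum into input step 0, {2,3} into step 1
        System.out.println(Arrays.deepToString(backprop1d(eps, 2)));
    }
}
```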
-
getSize
protected int[] getSize()
- Overrides:
getSize in class Upsampling2D
-
activate
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
- Specified by:
  activate in interface Layer
- Overrides:
  activate in class Upsampling2D
- Parameters:
  training - training or test mode
  workspaceMgr - Workspace manager
- Returns:
  the activation (layer output) of the last specified input. Note that the returned array should be placed in the ArrayType.ACTIVATIONS workspace via the workspace manager.
-
preOutput
protected INDArray preOutput(boolean training, boolean forBackprop, LayerWorkspaceMgr workspaceMgr)
- Overrides:
preOutput in class Upsampling2D
-
-