public class GlobalPoolingLayer extends AbstractLayer<GlobalPoolingLayer>
Supported PoolingTypes: SUM, AVG, MAX, PNORM
Behaviour with default settings:
- 3d (time series) input with shape [miniBatchSize, vectorSize, timeSeriesLength] -> 2d output [miniBatchSize, vectorSize]
- 4d (CNN) input with shape [miniBatchSize, channels, height, width] -> 2d output [miniBatchSize, channels]
- 5d (CNN3D) input with shape [miniBatchSize, channels, depth, height, width] -> 2d output [miniBatchSize, channels]
Alternatively, by setting collapseDimensions = false in the configuration, it is possible to retain the reduced dimensions as 1s (see the configuration sketch below); this gives
- [miniBatchSize, vectorSize, 1] for RNN output,
- [miniBatchSize, channels, 1, 1] for CNN output, and
- [miniBatchSize, channels, 1, 1, 1] for CNN3D output.
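In practice this layer is usually added through the configuration API rather than constructed directly. The following is a minimal sketch assuming the configuration-side builder (org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer.Builder) with PoolingType.MAX and the collapseDimensions option described above; the layer sizes, class count, and sequence length are illustrative only:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class GlobalPoolingConfigSketch {
    public static void main(String[] args) {
        int nIn = 16;       // vectorSize of the time series input (illustrative)
        int nHidden = 32;   // LSTM layer size (illustrative)
        int nClasses = 4;   // output classes (illustrative)

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new LSTM.Builder().nIn(nIn).nOut(nHidden).build())
                // Global max pooling over the time dimension; with the default
                // collapseDimensions = true, 3d activations [mb, nHidden, T]
                // are reduced to 2d activations [mb, nHidden]
                .layer(new GlobalPoolingLayer.Builder(PoolingType.MAX)
                        .collapseDimensions(true)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(nHidden).nOut(nClasses).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // 3d input [miniBatchSize, vectorSize, timeSeriesLength] -> 2d output [miniBatchSize, nClasses]
        INDArray input = Nd4j.rand(new int[]{3, nIn, 20});
        INDArray out = net.output(input);
        System.out.println(java.util.Arrays.toString(out.shape()));
    }
}
```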
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type
Fields inherited from class AbstractLayer: cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners
| Constructor and Description |
| --- |
| GlobalPoolingLayer(NeuralNetConfiguration conf, DataType dataType) |
| Modifier and Type | Method and Description |
| --- | --- |
| INDArray | activate(boolean training, LayerWorkspaceMgr workspaceMgr) - Perform forward pass and return the activations array with the last set input |
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr) - Calculate the gradient relative to the error in the next layer |
| void | clearNoiseWeightParams() |
| Layer | clone() |
| Pair<INDArray,MaskState> | feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize) - Feed forward the input mask array, setting in the layer as appropriate. |
| boolean | isPretrainLayer() - Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc) |
| Layer.Type | type() - Returns the layer type |
Methods inherited from class AbstractLayer: activate, addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, applyMask, assertInputSet, backpropDropOutIfPresent, batchSize, calcRegularizationScore, clear, close, computeGradientAndScore, conf, fit, fit, getConfig, getEpochCount, getGradientsViewArray, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, input, layerConf, layerId, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, update, update, updaterDivideByMinibatch
Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface Layer: getIterationCount, setIterationCount
public GlobalPoolingLayer(NeuralNetConfiguration conf, DataType dataType)
public boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc)
Specified by: isPretrainLayer in interface Layer
public void clearNoiseWeightParams()
public Layer.Type type()
Description copied from interface: Layer
Returns the layer type
Specified by: type in interface Layer
Overrides: type in class AbstractLayer<GlobalPoolingLayer>
public INDArray activate(boolean training, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Perform forward pass and return the activations array with the last set input
Parameters:
training - training or test mode
workspaceMgr - Workspace manager
Returns: the output activations, in the ArrayType.ACTIVATIONS workspace via the workspace manager
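For intuition only, the shape reduction that activate performs on 3d (time series) input can be mimicked with a plain ND4J reduction. This sketch assumes MAX pooling with no mask array and the default collapseDimensions = true; it is not the layer's actual implementation:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class PoolingShapeSketch {
    public static void main(String[] args) {
        // 3d input: [miniBatchSize, vectorSize, timeSeriesLength]
        INDArray input = Nd4j.rand(new int[]{4, 8, 10});

        // Global MAX pooling over time: reduce along dimension 2 (the time axis),
        // yielding 2d output [miniBatchSize, vectorSize]
        INDArray pooled = input.max(2);

        System.out.println(java.util.Arrays.toString(pooled.shape())); // [4, 8]
    }
}
```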
public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or equivalently dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns: Pair of (Gradient, epsilon array) for this layer; the epsilon array is returned in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager
public Pair<INDArray,MaskState> feedForwardMaskArray(INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Description copied from interface: Layer
Feed forward the input mask array, setting in the layer as appropriate.
Specified by: feedForwardMaskArray in interface Layer
Overrides: feedForwardMaskArray in class AbstractLayer<GlobalPoolingLayer>
Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)