public abstract class BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> extends BaseLayer<LayerConfT>
Nested classes/interfaces inherited from interface `Layer`: `Layer.TrainingMode`, `Layer.Type`

Fields inherited from class `BaseLayer`: `gradient`, `gradientsFlattened`, `gradientViews`, `optimizer`, `params`, `paramsFlattened`, `score`, `solver`, `weightNoiseParams`

Fields inherited from class `AbstractLayer`: `cacheMode`, `conf`, `dataType`, `dropoutApplied`, `epochCount`, `index`, `input`, `inputModificationAllowed`, `iterationCount`, `maskArray`, `maskState`, `preOutput`, `trainingListeners`

| Constructor and Description |
|---|
| `BasePretrainNetwork(NeuralNetConfiguration conf, DataType dataType)` |
| Modifier and Type | Method and Description |
|---|---|
| `Pair<Gradient,INDArray>` | `backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)` Calculate the gradient relative to the error in the next layer. |
| `double` | `calcRegularizationScore(boolean backpropParamsOnly)` Calculate the regularization component of the score for the parameters in this layer, for example the L1, L2 and/or weight decay components of the loss function. |
| `protected Gradient` | `createGradient(INDArray wGradient, INDArray vBiasGradient, INDArray hBiasGradient)` |
| `INDArray` | `getCorruptedInput(INDArray x, double corruptionLevel)` Corrupts the given input by doing a binomial sampling given the corruption level. |
| `long` | `numParams()` The number of parameters for the model, for backprop (i.e., excluding the visible bias). |
| `long` | `numParams(boolean backwards)` The number of parameters for the model. |
| `INDArray` | `params()` Returns the parameters of the neural network as a flattened row vector. |
| `Map<String,INDArray>` | `paramTable(boolean backpropParamsOnly)` Table of parameters by key, for backprop. For many models (dense layers, etc.), all parameters are backprop parameters. |
| `abstract Pair<INDArray,INDArray>` | `sampleHiddenGivenVisible(INDArray v)` Sample the hidden distribution given the visible. |
| `abstract Pair<INDArray,INDArray>` | `sampleVisibleGivenHidden(INDArray h)` Sample the visible distribution given the hidden. |
| `void` | `setParams(INDArray params)` Set the parameters for this model. |
| `protected void` | `setScoreWithZ(INDArray z)` |
Methods inherited from class `BaseLayer`: `activate`, `clear`, `clearNoiseWeightParams`, `clone`, `computeGradientAndScore`, `fit`, `fit`, `getGradientsViewArray`, `getOptimizer`, `getParam`, `getParamWithNoise`, `gradient`, `hasBias`, `hasLayerNorm`, `layerConf`, `paramTable`, `preOutput`, `preOutputWithPreNorm`, `score`, `setBackpropGradientsViewArray`, `setParam`, `setParams`, `setParamsViewArray`, `setParamTable`, `toString`, `update`, `update`

Methods inherited from class `AbstractLayer`: `activate`, `addListeners`, `allowInputModification`, `applyConstraints`, `applyDropOutIfNecessary`, `applyMask`, `assertInputSet`, `backpropDropOutIfPresent`, `batchSize`, `close`, `conf`, `feedForwardMaskArray`, `getConfig`, `getEpochCount`, `getHelper`, `getIndex`, `getInput`, `getInputMiniBatchSize`, `getListeners`, `getMaskArray`, `gradientAndScore`, `init`, `input`, `layerId`, `setCacheMode`, `setConf`, `setEpochCount`, `setIndex`, `setInput`, `setInputMiniBatchSize`, `setListeners`, `setListeners`, `setMaskArray`, `type`, `updaterDivideByMinibatch`

Methods inherited from class `java.lang.Object`: `equals`, `finalize`, `getClass`, `hashCode`, `notify`, `notifyAll`, `wait`, `wait`, `wait`

Methods inherited from interface `Layer`: `getIterationCount`, `isPretrainLayer`, `setIterationCount`

Constructor Detail

`public BasePretrainNetwork(NeuralNetConfiguration conf, DataType dataType)`
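Since `BasePretrainNetwork` is abstract, it is normally used through a concrete subclass such as DL4J's autoencoder layer. A minimal configuration sketch; the `nIn`/`nOut`/`corruptionLevel` values are illustrative assumptions, not defaults:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class PretrainLayerSketch {
    public static void main(String[] args) {
        // AutoEncoder is one concrete layer implemented on top of BasePretrainNetwork.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(0, new AutoEncoder.Builder()
                        .nIn(784)               // illustrative input size
                        .nOut(256)              // illustrative hidden size
                        .corruptionLevel(0.3)   // denoising corruption; see getCorruptedInput below
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // Unsupervised, layer-wise pretraining would then be driven by, e.g.:
        // net.pretrainLayer(0, dataSetIterator);
    }
}
```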
Method Detail

`public INDArray getCorruptedInput(INDArray x, double corruptionLevel)`

Corrupts the given input by doing a binomial sampling given the corruption level.

Parameters:
- `x` - the input to corrupt
- `corruptionLevel` - the corruption value
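The corruption is a per-element binomial (Bernoulli) keep-or-zero decision at the given level, as in denoising autoencoders. A minimal ND4J sketch of equivalent logic, assuming each entry survives with probability `1 - corruptionLevel`; this is an illustration, not the exact implementation:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class CorruptionSketch {
    // Zero each element independently with probability corruptionLevel.
    static INDArray corrupt(INDArray x, double corruptionLevel) {
        INDArray keepMask = Nd4j.rand(x.shape())   // uniform samples in [0, 1)
                .gt(corruptionLevel)               // true with probability 1 - corruptionLevel
                .castTo(x.dataType());             // 0/1 mask in x's data type
        return x.mul(keepMask);                    // zeroed where the mask is 0
    }

    public static void main(String[] args) {
        INDArray x = Nd4j.rand(4, 4);
        System.out.println(corrupt(x, 0.3));       // roughly 30% of entries zeroed
    }
}
```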
`protected Gradient createGradient(INDArray wGradient, INDArray vBiasGradient, INDArray hBiasGradient)`

`public long numParams(boolean backwards)`

Description copied from interface: `Model`
The number of parameters for the model.

Specified by: `numParams` in interface `Model`
Overrides: `numParams` in class `AbstractLayer<LayerConfT extends BasePretrainNetwork>`
`public abstract Pair<INDArray,INDArray> sampleHiddenGivenVisible(INDArray v)`

Sample the hidden distribution given the visible.

Parameters:
- `v` - the visible to sample from

`public abstract Pair<INDArray,INDArray> sampleVisibleGivenHidden(INDArray h)`

Sample the visible distribution given the hidden.

Parameters:
- `h` - the hidden to sample from
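Together the two sampling methods give one Gibbs step (visible to hidden to reconstructed visible), the building block of contrastive-divergence-style pretraining. A hedged sketch, assuming each returned `Pair` carries the distribution first and the sample second; the `Pair` import path varies by ND4J version:

```java
import org.deeplearning4j.nn.layers.BasePretrainNetwork;
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.api.ndarray.INDArray;

public class GibbsStepSketch {
    // One Gibbs step v -> h -> v'. Assumes Pair = (distribution, sample).
    static INDArray gibbsStep(BasePretrainNetwork<?> layer, INDArray v) {
        Pair<INDArray, INDArray> hidden = layer.sampleHiddenGivenVisible(v);
        Pair<INDArray, INDArray> visible = layer.sampleVisibleGivenHidden(hidden.getSecond());
        return visible.getSecond(); // reconstructed visible sample
    }
}
```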
`protected void setScoreWithZ(INDArray z)`

Overrides: `setScoreWithZ` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`
`public Map<String,INDArray> paramTable(boolean backpropParamsOnly)`

Description copied from interface: `Model`
Table of parameters by key, for backprop. For many models (dense layers, etc.), all parameters are backprop parameters.

Specified by: `paramTable` in interface `Model`
Specified by: `paramTable` in interface `Trainable`
Overrides: `paramTable` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`

Parameters:
- `backpropParamsOnly` - If true, return backprop params only. If false: return all params (equivalent to `paramsTable()`)
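For example, the two views of the table can be compared directly. A fragment; `layer` is assumed to be an initialized instance, and key names depend on the concrete layer:

```java
Map<String, INDArray> backpropParams = layer.paramTable(true);  // excludes pretrain-only params
Map<String, INDArray> allParams = layer.paramTable(false);      // includes, e.g., the visible bias

for (Map.Entry<String, INDArray> e : allParams.entrySet()) {
    System.out.println(e.getKey() + " -> " + java.util.Arrays.toString(e.getValue().shape()));
}
```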
`public INDArray params()`

Description copied from class: `BaseLayer`
Returns the parameters of the neural network as a flattened row vector.

Specified by: `params` in interface `Model`
Specified by: `params` in interface `Trainable`
Overrides: `params` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`

`public long numParams()`

The number of parameters for the model, for backprop (i.e., excluding the visible bias).

Specified by: `numParams` in interface `Model`
Specified by: `numParams` in interface `Trainable`
Overrides: `numParams` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`
`public void setParams(INDArray params)`

Description copied from interface: `Model`
Set the parameters for this model.

Specified by: `setParams` in interface `Model`
Overrides: `setParams` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`

Parameters:
- `params` - the parameters for the model
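A round trip through the flattened view illustrates the contract between `params()` and `setParams(...)`. A fragment; `layer` is assumed:

```java
INDArray flat = layer.params();   // flattened row vector of all parameters
layer.setParams(flat.dup());      // the array length must match what params() returns

// Note: numParams() counts backprop params only (visible bias excluded),
// so it can be smaller than flat.length() for a pretrain layer.
```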
`public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)`

Description copied from interface: `Layer`
Calculate the gradient relative to the error in the next layer.

Specified by: `backpropGradient` in interface `Layer`
Overrides: `backpropGradient` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`

Parameters:
- `epsilon` - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
- `workspaceMgr` - Workspace manager

Returns: the returned activation-gradient array should be placed in the `ArrayType.ACTIVATION_GRAD` workspace via the workspace manager.
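A sketch of invoking backprop manually; normally the enclosing network drives this. `layer` and `epsilon` are assumed to exist, and `LayerWorkspaceMgr.noWorkspaces()` simply disables workspace scoping for the call:

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;

Pair<Gradient, INDArray> out = layer.backpropGradient(epsilon, LayerWorkspaceMgr.noWorkspaces());
Gradient thisLayerGradient = out.getFirst();  // gradients keyed by parameter name
INDArray epsilonOut = out.getSecond();        // activation gradient passed to the layer below
```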
`public double calcRegularizationScore(boolean backpropParamsOnly)`

Description copied from interface: `Layer`
Calculate the regularization component of the score, for the parameters in this layer. For example, the L1, L2 and/or weight decay components of the loss function.

Specified by: `calcRegularizationScore` in interface `Layer`
Overrides: `calcRegularizationScore` in class `BaseLayer<LayerConfT extends BasePretrainNetwork>`

Parameters:
- `backpropParamsOnly` - If true: calculate regularization score based on backprop params only. If false: calculate based on all params (including pretrain params, if any)
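For example (a fragment; `layer` is assumed, and the values depend on the configured L1/L2/weight-decay settings):

```java
double regBackprop = layer.calcRegularizationScore(true);   // backprop params only
double regAll = layer.calcRegularizationScore(false);       // also includes pretrain params
```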