public abstract class BaseOutputLayer<LayerConfT extends BaseOutputLayer> extends BaseLayer<LayerConfT> implements Serializable, IOutputLayer
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type

| Modifier and Type | Field and Description |
|---|---|
| protected INDArray | inputMaskArray |
| protected MaskState | inputMaskArrayState |
| protected INDArray | labels |
Fields inherited from class BaseLayer: gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score, weightNoiseParams

Fields inherited from class AbstractLayer: cacheMode, conf, dataType, dropoutApplied, epochCount, index, input, inputModificationAllowed, iterationCount, maskArray, maskState, preOutput, trainingListeners

| Constructor and Description |
|---|
| BaseOutputLayer(NeuralNetConfiguration conf, DataType dataType) |
| Modifier and Type | Method and Description |
|---|---|
| INDArray | activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)<br>Perform forward pass and return the activations array with the specified input. |
| protected void | applyMask(INDArray to) |
| Pair<Gradient,INDArray> | backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)<br>Calculate the gradient relative to the error in the next layer. |
| void | clear()<br>Clear the input. |
| void | computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)<br>Update the score. |
| double | computeScore(double fullNetRegTerm, boolean training, LayerWorkspaceMgr workspaceMgr)<br>Compute the score after labels and input have been set. |
| INDArray | computeScoreForExamples(double fullNetRegTerm, LayerWorkspaceMgr workspaceMgr)<br>Compute the score for each example individually, after labels and input have been set. |
| double | f1Score(DataSet data)<br>Sets the input and labels and returns the F1 score for the prediction with respect to the true labels. |
| double | f1Score(INDArray examples, INDArray labels)<br>Returns the F1 score for the given examples. |
| void | fit(DataSet data)<br>Fit the model. |
| void | fit(DataSetIterator iter)<br>Train the model based on the DataSetIterator. |
| void | fit(INDArray input, INDArray labels)<br>Fit the model. |
| void | fit(INDArray examples, int[] labels)<br>Fit the model. |
| void | fit(INDArray data, LayerWorkspaceMgr workspaceMgr)<br>Fit the model to the given data. |
| INDArray | getLabels()<br>Get the labels array previously set with IOutputLayer.setLabels(INDArray). |
| protected abstract INDArray | getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType) |
| Gradient | gradient()<br>Gets the gradient from one training iteration. |
| Pair<Gradient,Double> | gradientAndScore()<br>Get the gradient and score. |
| boolean | hasBias()<br>Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration. |
| boolean | isPretrainLayer()<br>Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.). |
| boolean | needsLabels()<br>Returns true if labels are required for this output layer. |
| int | numLabels()<br>Returns the number of possible labels. |
| List<String> | predict(DataSet dataSet)<br>Return predicted label names. |
| int[] | predict(INDArray input)<br>Returns the predictions for each example in the dataset. |
| protected INDArray | preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr) |
| void | setLabels(INDArray labels)<br>Set the labels array for this output layer. |
| protected void | setScoreWithZ(INDArray z) |
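The predict(INDArray) method in the summary above returns, for each example (row) of the input, the index of the highest-activation class. A minimal plain-Java sketch of that argmax-per-row step, using a hypothetical double[][] in place of the INDArray (an illustration only, not the DL4J implementation):

```java
// Illustration of what predict(INDArray) computes: for each example (row)
// of a probability matrix, return the index of the highest-scoring class.
// Plain double[][] stands in for the INDArray; no DL4J dependency.
public class ArgmaxPredict {
    static int[] predict(double[][] probabilities) {
        int[] out = new int[probabilities.length];
        for (int i = 0; i < probabilities.length; i++) {
            int best = 0;
            for (int j = 1; j < probabilities[i].length; j++) {
                if (probabilities[i][j] > probabilities[i][best]) best = j;
            }
            out[i] = best;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] probs = {
            {0.1, 0.7, 0.2},   // highest score at index 1
            {0.8, 0.1, 0.1}    // highest score at index 0
        };
        System.out.println(java.util.Arrays.toString(predict(probs))); // [1, 0]
    }
}
```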
Methods inherited from class BaseLayer: activate, calcRegularizationScore, clearNoiseWeightParams, clone, fit, getGradientsViewArray, getOptimizer, getParam, getParamWithNoise, hasLayerNorm, layerConf, numParams, params, paramTable, preOutput, preOutputWithPreNorm, score, setBackpropGradientsViewArray, setParam, setParams, setParamsViewArray, setParamTable, toString, update

Methods inherited from class AbstractLayer: addListeners, allowInputModification, applyConstraints, applyDropOutIfNecessary, assertInputSet, backpropDropOutIfPresent, batchSize, close, conf, feedForwardMaskArray, getConfig, getEpochCount, getHelper, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, init, input, layerId, numParams, setCacheMode, setConf, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setListeners, setMaskArray, type, updaterDivideByMinibatch

Methods inherited from class java.lang.Object: equals, finalize, getClass, hashCode, notify, notifyAll, wait

Methods inherited from interface Layer: activate, allowInputModification, calcRegularizationScore, clearNoiseWeightParams, feedForwardMaskArray, getEpochCount, getHelper, getIndex, getInputMiniBatchSize, getIterationCount, getListeners, getMaskArray, setCacheMode, setEpochCount, setIndex, setInput, setInputMiniBatchSize, setIterationCount, setListeners, setMaskArray, type

Methods inherited from interface Trainable: getConfig, getGradientsViewArray, numParams, params, paramTable, updaterDivideByMinibatch

Methods inherited from interface Model: addListeners, applyConstraints, batchSize, close, conf, fit, getGradientsViewArray, getOptimizer, getParam, init, input, numParams, params, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update

Field Detail

protected INDArray labels
protected INDArray inputMaskArray
protected MaskState inputMaskArrayState

Constructor Detail

public BaseOutputLayer(NeuralNetConfiguration conf, DataType dataType)
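Several of the methods detailed below (computeGradientAndScore, backpropGradient) center on the output-layer loss gradient. For the common pairing of a softmax activation with negative log-likelihood (cross-entropy) loss, the gradient of the loss with respect to the pre-activations z reduces to activations minus one-hot labels, a standard result. A plain-Java sketch under that assumption, using hypothetical arrays rather than INDArrays (illustration only, not the DL4J implementation):

```java
// Softmax + cross-entropy output gradient sketch: dC/dz = softmax(z) - y,
// where y is the one-hot label vector. Plain arrays, no DL4J dependency.
public class OutputGradientSketch {
    static double[] softmax(double[] z) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : z) max = Math.max(max, v);   // subtract max for numeric stability
        double sum = 0.0;
        double[] a = new double[z.length];
        for (int i = 0; i < z.length; i++) { a[i] = Math.exp(z[i] - max); sum += a[i]; }
        for (int i = 0; i < z.length; i++) a[i] /= sum;
        return a;
    }

    // Gradient of cross-entropy loss w.r.t. pre-activations: activations minus labels
    static double[] dLdz(double[] z, double[] oneHotLabels) {
        double[] a = softmax(z);
        double[] g = new double[z.length];
        for (int i = 0; i < z.length; i++) g[i] = a[i] - oneHotLabels[i];
        return g;
    }

    public static void main(String[] args) {
        double[] z = {2.0, 1.0, 0.1};
        double[] y = {1.0, 0.0, 0.0};
        // Components sum to zero since both softmax(z) and y sum to 1
        System.out.println(java.util.Arrays.toString(dLdz(z, y)));
    }
}
```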
Method Detail

public double computeScore(double fullNetRegTerm, boolean training, LayerWorkspaceMgr workspaceMgr)
Compute the score after labels and input have been set.
Specified by: computeScore in interface IOutputLayer
Parameters:
fullNetRegTerm - Regularization score term for the entire network
training - whether the score should be calculated at train or test time (this affects things like application of dropout, etc.)

public boolean needsLabels()
Returns true if labels are required for this output layer.
Specified by: needsLabels in interface IOutputLayer

public INDArray computeScoreForExamples(double fullNetRegTerm, LayerWorkspaceMgr workspaceMgr)
Compute the score for each example individually, after labels and input have been set.
Specified by: computeScoreForExamples in interface IOutputLayer
Parameters:
fullNetRegTerm - Regularization score term for the entire network (or 0.0 to not include regularization)

public void computeGradientAndScore(LayerWorkspaceMgr workspaceMgr)
Update the score.
Specified by: computeGradientAndScore in interface Model
Overrides: computeGradientAndScore in class BaseLayer<LayerConfT extends BaseOutputLayer>

protected void setScoreWithZ(INDArray z)
Overrides: setScoreWithZ in class BaseLayer<LayerConfT extends BaseOutputLayer>

public Pair<Gradient,Double> gradientAndScore()
Get the gradient and score.
Specified by: gradientAndScore in interface Model
Overrides: gradientAndScore in class AbstractLayer<LayerConfT extends BaseOutputLayer>

public Pair<Gradient,INDArray> backpropGradient(INDArray epsilon, LayerWorkspaceMgr workspaceMgr)
Calculate the gradient relative to the error in the next layer.
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseLayer<LayerConfT extends BaseOutputLayer>
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
workspaceMgr - Workspace manager
Returns: the gradient and epsilon pair; the epsilon array is defined in the ArrayType.ACTIVATION_GRAD workspace via the workspace manager

public Gradient gradient()
Gets the gradient from one training iteration.
Specified by: gradient in interface Model
Overrides: gradient in class BaseLayer<LayerConfT extends BaseOutputLayer>

public INDArray activate(INDArray input, boolean training, LayerWorkspaceMgr workspaceMgr)
Perform forward pass and return the activations array with the specified input.
Specified by: activate in interface Layer
Overrides: activate in class AbstractLayer<LayerConfT extends BaseOutputLayer>
Parameters:
input - the input to use
training - train or test mode
workspaceMgr - Workspace manager
Returns: the activations array, defined in the ArrayType.ACTIVATIONS workspace via the workspace manager

public double f1Score(DataSet data)
Sets the input and labels and returns the F1 score for the prediction with respect to the true labels.
Specified by: f1Score in interface Classifier
Parameters:
data - the data to score

public double f1Score(INDArray examples, INDArray labels)
Returns the F1 score for the given examples.
Specified by: f1Score in interface Classifier
Parameters:
examples - the examples to classify (one example in each row)
labels - the true labels

public int numLabels()
Returns the number of possible labels.
Specified by: numLabels in interface Classifier

public void fit(DataSetIterator iter)
Train the model based on the DataSetIterator.
Specified by: fit in interface Classifier
Parameters:
iter - the iterator to train on

public int[] predict(INDArray input)
Returns the predictions for each example in the dataset.
Specified by: predict in interface Classifier
Parameters:
input - the matrix to predict

public List<String> predict(DataSet dataSet)
Return predicted label names.
Specified by: predict in interface Classifier
Parameters:
dataSet - the dataset to predict

public void fit(INDArray input, INDArray labels)
Fit the model.
Specified by: fit in interface Classifier
Parameters:
input - the examples to classify (one example in each row)
labels - the example labels (a binary outcome matrix)

public void fit(DataSet data)
Fit the model.
Specified by: fit in interface Classifier
Parameters:
data - the data to train on

public void fit(INDArray examples, int[] labels)
Fit the model.
Specified by: fit in interface Classifier
Parameters:
examples - the examples to classify (one example in each row)
labels - the labels for each example (the number of labels must match the number of examples)

public void clear()
Clear the input.
Specified by: clear in interface Model
Overrides: clear in class BaseLayer<LayerConfT extends BaseOutputLayer>

public void fit(INDArray data, LayerWorkspaceMgr workspaceMgr)
Fit the model to the given data.
Specified by: fit in interface Model
Overrides: fit in class BaseLayer<LayerConfT extends BaseOutputLayer>
Parameters:
data - the data to fit the model to

public INDArray getLabels()
Get the labels array previously set with IOutputLayer.setLabels(INDArray).
Specified by: getLabels in interface IOutputLayer

public void setLabels(INDArray labels)
Set the labels array for this output layer.
Specified by: setLabels in interface IOutputLayer
Parameters:
labels - Labels array to set

protected INDArray preOutput2d(boolean training, LayerWorkspaceMgr workspaceMgr)
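The f1Score methods above reduce predictions and true labels to a single F1 value, the harmonic mean of precision and recall. A plain-Java sketch of binary F1 using hypothetical int[] label vectors (illustration only; the real methods operate on DataSet/INDArray inputs):

```java
// Binary F1 sketch: F1 = 2 * precision * recall / (precision + recall),
// computed from true-positive, false-positive, and false-negative counts.
public class F1Sketch {
    static double f1(int[] predicted, int[] actual) {
        int tp = 0, fp = 0, fn = 0;
        for (int i = 0; i < predicted.length; i++) {
            if (predicted[i] == 1 && actual[i] == 1) tp++;
            else if (predicted[i] == 1 && actual[i] == 0) fp++;
            else if (predicted[i] == 0 && actual[i] == 1) fn++;
        }
        if (tp == 0) return 0.0;   // no true positives: precision or recall is zero
        double precision = (double) tp / (tp + fp);
        double recall = (double) tp / (tp + fn);
        return 2.0 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        int[] predicted = {1, 0, 1, 1};
        int[] actual    = {1, 0, 0, 1};
        // tp=2, fp=1, fn=0 -> precision 2/3, recall 1, F1 ≈ 0.8
        System.out.println(f1(predicted, actual));
    }
}
```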
protected void applyMask(INDArray to)
Overrides: applyMask in class AbstractLayer<LayerConfT extends BaseOutputLayer>

protected abstract INDArray getLabels2d(LayerWorkspaceMgr workspaceMgr, ArrayType arrayType)

public boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (AE, VAE, etc.).
Specified by: isPretrainLayer in interface Layer

public boolean hasBias()
Does this layer have a bias term? Many layers (dense, convolutional, output, embedding) have biases by default, but no-bias versions are possible via configuration.
Overrides: hasBias in class BaseLayer<LayerConfT extends BaseOutputLayer>

Copyright © 2020. All rights reserved.