public class CenterLossOutputLayer extends BaseOutputLayer<CenterLossOutputLayer>
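For context, a minimal configuration sketch follows, assuming the DL4J 0.9.x builder API. The layer documented on this page is normally instantiated by the framework from the corresponding configuration class `org.deeplearning4j.nn.conf.layers.CenterLossOutputLayer`; the `alpha` and `lambda` values below are illustrative, not recommendations.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.CenterLossOutputLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class CenterLossConfigSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .seed(12345)
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(784).nOut(256)
                        .activation(Activation.RELU)
                        .build())
                // Center loss output layer: alpha controls how quickly the
                // per-class feature centers are updated; lambda weights the
                // center loss term relative to the primary loss.
                .layer(1, new CenterLossOutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .alpha(0.1)
                        .lambda(2e-4)
                        .build())
                .build();
    }
}
```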
**Nested classes/interfaces inherited from interface Layer:**
Layer.TrainingMode, Layer.Type

**Fields inherited from class BaseOutputLayer:**
inputMaskArray, inputMaskArrayState, labels

**Fields inherited from class BaseLayer:**
gradient, gradientsFlattened, gradientViews, optimizer, params, paramsFlattened, score

**Fields inherited from class AbstractLayer:**
cacheMode, conf, dropoutApplied, dropoutMask, index, input, iterationListeners, maskArray, maskState, preOutput

| Constructor and Description |
|---|
| CenterLossOutputLayer(NeuralNetConfiguration conf) |
| CenterLossOutputLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input) |
| Modifier and Type | Method and Description |
|---|---|
| Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon) Calculate the gradient relative to the error in the next layer |
| void | computeGradientAndScore() Update the score |
| double | computeScore(double fullNetworkL1, double fullNetworkL2, boolean training) Compute the score after labels and input have been set. |
| org.nd4j.linalg.api.ndarray.INDArray | computeScoreForExamples(double fullNetworkL1, double fullNetworkL2) Compute the score for each example individually, after labels and input have been set. |
| Gradient | gradient() Gets the gradient from one training iteration |
| Pair<Gradient,Double> | gradientAndScore() Get the gradient and score |
| protected void | setScoreWithZ(org.nd4j.linalg.api.ndarray.INDArray z) |
**Methods inherited from class BaseOutputLayer:**
activate, activate, activate, applyMask, clear, f1Score, f1Score, fit, fit, fit, fit, fit, getLabels, getLabels2d, isPretrainLayer, iterate, labelProbabilities, numLabels, output, output, output, predict, predict, preOutput2d, setLabels

**Methods inherited from class BaseLayer:**
accumulateScore, activate, activate, activate, activationMean, applyLearningRateScoreDecay, calcGradient, calcL1, calcL2, clone, error, fit, getGradientsViewArray, getOptimizer, getParam, initParams, layerConf, merge, numParams, params, paramTable, paramTable, preOutput, preOutput, score, setBackpropGradientsViewArray, setParam, setParams, setParams, setParamsViewArray, setParamTable, toString, transpose, update, update

**Methods inherited from class AbstractLayer:**
addListeners, applyDropOutIfNecessary, batchSize, conf, derivativeActivation, feedForwardMaskArray, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, init, input, layerId, numParams, preOutput, preOutput, setCacheMode, setConf, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, type, validateInput

**Methods inherited from class java.lang.Object:**
equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

**Methods inherited from interface Layer:**
activate, activate, activate, activationMean, calcGradient, calcL1, calcL2, clone, derivativeActivation, error, feedForwardMaskArray, getIndex, getInputMiniBatchSize, getListeners, getMaskArray, merge, preOutput, preOutput, preOutput, setCacheMode, setIndex, setInput, setInputMiniBatchSize, setListeners, setListeners, setMaskArray, transpose, type

**Methods inherited from interface Model:**
accumulateScore, addListeners, applyLearningRateScoreDecay, batchSize, conf, fit, getGradientsViewArray, getOptimizer, getParam, init, initParams, input, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update, validateInput

**Constructor Detail**

public CenterLossOutputLayer(NeuralNetConfiguration conf)

public CenterLossOutputLayer(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input)
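These constructors are normally invoked by the framework when a network is initialized from its configuration rather than called directly. A hedged sketch of the usual path, reusing the hypothetical `CenterLossConfigSketch.build()` from the configuration example above:

```java
import org.deeplearning4j.nn.layers.training.CenterLossOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class CenterLossInitSketch {
    public static CenterLossOutputLayer outputLayerOf() {
        MultiLayerNetwork net = new MultiLayerNetwork(CenterLossConfigSketch.build());
        net.init(); // instantiates each layer, including this output layer
        // getOutputLayer() returns the last layer; cast to the concrete type
        return (CenterLossOutputLayer) net.getOutputLayer();
    }
}
```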
**Method Detail**

public double computeScore(double fullNetworkL1, double fullNetworkL2, boolean training)

Compute the score after labels and input have been set.

Specified by: computeScore in interface IOutputLayer
Overrides: computeScore in class BaseOutputLayer<CenterLossOutputLayer>
Parameters:
fullNetworkL1 - L1 regularization term for the entire network
fullNetworkL2 - L2 regularization term for the entire network
training - whether the score should be calculated at train or test time (this affects things such as the application of dropout)
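As a hedged usage sketch (the layer, features, and labels are assumed to be already initialized and of matching shape), scoring one batch at test time with no network-level regularization might look like:

```java
import org.deeplearning4j.nn.layers.training.CenterLossOutputLayer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class CenterLossScoreSketch {
    /** Score one batch at test time; 0.0/0.0 excludes network-level regularization. */
    static double scoreBatch(CenterLossOutputLayer layer, INDArray features, INDArray labels) {
        layer.setInput(features);
        layer.setLabels(labels);
        return layer.computeScore(0.0, 0.0, false); // training = false: e.g. no dropout
    }
}
```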
public org.nd4j.linalg.api.ndarray.INDArray computeScoreForExamples(double fullNetworkL1, double fullNetworkL2)

Compute the score for each example individually, after labels and input have been set.

Specified by: computeScoreForExamples in interface IOutputLayer
Overrides: computeScoreForExamples in class BaseOutputLayer<CenterLossOutputLayer>
Parameters:
fullNetworkL1 - L1 regularization term for the entire network (or 0.0 to not include regularization)
fullNetworkL2 - L2 regularization term for the entire network (or 0.0 to not include regularization)
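A companion sketch for per-example scores, under the same assumptions as above; the returned array is expected to hold one score per input row:

```java
import org.deeplearning4j.nn.layers.training.CenterLossOutputLayer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class CenterLossPerExampleSketch {
    /** One score per example; pass 0.0 for both terms to exclude regularization. */
    static INDArray scorePerExample(CenterLossOutputLayer layer, INDArray features, INDArray labels) {
        layer.setInput(features);
        layer.setLabels(labels);
        return layer.computeScoreForExamples(0.0, 0.0);
    }
}
```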
public void computeGradientAndScore()

Update the score.

Specified by: computeGradientAndScore in interface Model
Overrides: computeGradientAndScore in class BaseOutputLayer<CenterLossOutputLayer>
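A sketch of the typical fit-time sequence, assuming an initialized layer: computeGradientAndScore() recomputes both quantities from the current input and labels, after which they can be read with gradient() and score().

```java
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.layers.training.CenterLossOutputLayer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class CenterLossGradientSketch {
    static void step(CenterLossOutputLayer layer, INDArray features, INDArray labels) {
        layer.setInput(features);
        layer.setLabels(labels);
        layer.computeGradientAndScore();  // updates the internal gradient and score
        Gradient g = layer.gradient();    // gradient from this iteration
        double score = layer.score();     // score set by the call above
    }
}
```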
protected void setScoreWithZ(org.nd4j.linalg.api.ndarray.INDArray z)

Overrides: setScoreWithZ in class BaseOutputLayer<CenterLossOutputLayer>
public Pair<Gradient,Double> gradientAndScore()

Get the gradient and score.

Specified by: gradientAndScore in interface Model
Overrides: gradientAndScore in class BaseOutputLayer<CenterLossOutputLayer>
public Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon)

Calculate the gradient relative to the error in the next layer.

Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BaseOutputLayer<CenterLossOutputLayer>
Parameters:
epsilon - w^(L+1) * delta^(L+1), or equivalently dC/da (i.e., (dC/dz)*(dz/da) = dC/da), where C is the cost function and a = sigma(z) is the activation.
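A hedged sketch of calling backpropGradient directly: for an output layer the error signal is derived from the labels that have already been set, and the epsilon argument mirrors what the framework would pass. The Pair import shown is the one used by this API version and is an assumption; it has moved between packages across releases.

```java
import org.deeplearning4j.berkeley.Pair;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.layers.training.CenterLossOutputLayer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class CenterLossBackpropSketch {
    /** Backprop one batch; returns this layer's gradient and the epsilon for the layer below. */
    static Pair<Gradient, INDArray> backprop(CenterLossOutputLayer layer, INDArray epsilon) {
        Pair<Gradient, INDArray> result = layer.backpropGradient(epsilon);
        Gradient layerGradient = result.getFirst(); // gradients w.r.t. this layer's parameters
        INDArray epsilonOut = result.getSecond();   // dC/da to propagate to the previous layer
        return result;
    }
}
```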
public Gradient gradient()

Gets the gradient from one training iteration.

Specified by: gradient in interface Model
Overrides: gradient in class BaseOutputLayer<CenterLossOutputLayer>