public class SDLoss extends SDOps
Modifier and Type | Method and Description |
---|---|
SDVariable |
absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
absoluteDifference(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
absoluteDifference(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Absolute difference loss:
sum_i abs( label[i] - predictions[i] ) |
SDVariable |
cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i] , which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
cosineDistance(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i] , which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i] , which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
cosineDistance(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
int dimension)
Cosine distance loss:
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i] , which is equivalent to cosine distance when both the predictions and labels are normalized. Note: This loss function assumes that both the predictions and labels are normalized to have unit l2 norm. If this is not the case, you should normalize them first by dividing by norm2(String, SDVariable, boolean, int...) along the cosine distance dimension (with keepDims=true). |
SDVariable |
hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
hingeLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
hingeLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Hinge loss: a loss function used for training classifiers.
Implements L = max(0, 1 - t * predictions) where t is the label values after internally converting to {-1,1} from the user-specified {0,1}. |
SDVariable |
huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
huberLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
huberLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double delta)
Huber loss function, used for robust regression.
|
SDVariable |
l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
l2Loss(String name,
SDVariable var)
L2 loss: 1/2 * sum(x^2)
|
SDVariable |
logLoss(SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
logLoss(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
logLoss(String name,
SDVariable label,
SDVariable predictions)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
logLoss(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
double epsilon)
Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification.
|
SDVariable |
logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log Poisson loss: a loss function used for regression on count data (Poisson regression).
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
logPoisson(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log Poisson loss: a loss function used for regression on count data (Poisson regression).
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
boolean full)
Log Poisson loss: a loss function used for regression on count data (Poisson regression).
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
logPoisson(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce,
boolean full)
Log Poisson loss: a loss function used for regression on count data (Poisson regression).
Implements L = exp(c) - z * c where c is log(predictions) and z is labels. |
SDVariable |
meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
meanPairwiseSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
meanPairwiseSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean pairwise squared error.
MPWSE loss calculates the difference between pairs of elements in the predictions and labels arrays. For example, if predictions = [p0, p1, p2] and labels are [l0, l1, l2] then MPWSE is: [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3 |
SDVariable |
meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
meanSquaredError(SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights)
Mean squared error loss function.
|
SDVariable |
meanSquaredError(String name,
SDVariable label,
SDVariable predictions,
SDVariable weights,
LossReduce lossReduce)
Mean squared error loss function.
|
SDVariable |
sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
SDVariable |
sigmoidCrossEntropy(SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
SDVariable |
sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
SDVariable |
sigmoidCrossEntropy(String name,
SDVariable label,
SDVariable predictionLogits,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions") and implements the binary cross entropy loss function. |
SDVariable |
softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce.NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
softmaxCrossEntropy(SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce.NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce.NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
softmaxCrossEntropy(String name,
SDVariable oneHotLabels,
SDVariable logitPredictions,
SDVariable weights,
LossReduce lossReduce,
double labelSmoothing)
Applies the softmax activation function to the input, then implements multi-class cross entropy:
-sum_classes label[c] * log(p[c]) where p = softmax(logits). If LossReduce.NONE is used, the output shape is [numExamples] for [numExamples, numClasses] predictions/labels; otherwise, the output is a scalar. |
SDVariable |
sparseSoftmaxCrossEntropy(SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce), but the labels variable is represented as an integer array instead of the equivalent one-hot array; i.e., if logits are rank N, then labels have rank N-1. |
SDVariable |
sparseSoftmaxCrossEntropy(String name,
SDVariable logits,
SDVariable labels)
As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce), but the labels variable is represented as an integer array instead of the equivalent one-hot array; i.e., if logits are rank N, then labels have rank N-1. |
SDVariable |
weightedCrossEntropyWithLogits(SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
SDVariable |
weightedCrossEntropyWithLogits(String name,
SDVariable targets,
SDVariable inputs,
SDVariable weights)
Weighted cross entropy loss with logits
|
public SDLoss(SameDiff sameDiff)
public SDVariable absoluteDifference(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
sum_i abs( label[i] - predictions[i] )
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
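The default MEAN_BY_NONZERO_WEIGHT_COUNT reduction can be sketched in plain Java as follows. This mirrors only the formula above; it is not the ND4J implementation, and the class name and example values are made up:

```java
// Plain-Java sketch of absoluteDifference with the default
// MEAN_BY_NONZERO_WEIGHT_COUNT reduction: weighted sum of absolute
// errors, divided by the number of entries with a nonzero weight.
public class AbsoluteDifferenceDemo {
    public static double absoluteDifference(double[] label, double[] predictions, double[] weights) {
        double sum = 0.0;
        int nonzero = 0;
        for (int i = 0; i < label.length; i++) {
            sum += weights[i] * Math.abs(label[i] - predictions[i]);
            if (weights[i] != 0.0) nonzero++;
        }
        return nonzero == 0 ? 0.0 : sum / nonzero;
    }

    public static void main(String[] args) {
        // |1-1.5|*1 + |2-2|*1 + |3-2|*0 = 0.5, over 2 nonzero weights -> 0.25
        double loss = absoluteDifference(new double[]{1, 2, 3},
                                         new double[]{1.5, 2, 2},
                                         new double[]{1, 1, 0});
        System.out.println(loss); // 0.25
    }
}
```

Note how the zero-weight entry is excluded from both the numerator and the divisor, which is what distinguishes MEAN_BY_NONZERO_WEIGHT_COUNT from a plain mean.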
public SDVariable absoluteDifference(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
sum_i abs( label[i] - predictions[i] )
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
public SDVariable absoluteDifference(SDVariable label, SDVariable predictions, SDVariable weights)
sum_i abs( label[i] - predictions[i] )
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable absoluteDifference(String name, SDVariable label, SDVariable predictions, SDVariable weights)
sum_i abs( label[i] - predictions[i] )
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable cosineDistance(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
dimension - Dimension to perform the cosine distance over
public SDVariable cosineDistance(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, int dimension)
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
dimension - Dimension to perform the cosine distance over
public SDVariable cosineDistance(SDVariable label, SDVariable predictions, SDVariable weights, int dimension)
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
dimension - Dimension to perform the cosine distance over
public SDVariable cosineDistance(String name, SDVariable label, SDVariable predictions, SDVariable weights, int dimension)
1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is equivalent to cosine distance when both the predictions and labels are normalized.
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
dimension - Dimension to perform the cosine distance over
public SDVariable hingeLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
L = max(0, 1 - t * predictions)
where t is the label values after internally converting to {-1,1} from the user-specified {0,1}
label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
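The label conversion and the max(0, ...) clamp can be sketched in plain Java. This is an illustration of the formula only, not the ND4J implementation; the class name and values are made up, and weights of 1.0 are assumed for simplicity:

```java
// Plain-Java sketch of hingeLoss: labels in {0,1} are mapped to t in {-1,1},
// then L = max(0, 1 - t * prediction), averaged over all entries
// (equivalent to the default reduction when every weight is 1.0).
public class HingeLossDemo {
    public static double hingeLoss(double[] label, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < label.length; i++) {
            double t = 2.0 * label[i] - 1.0;               // {0,1} -> {-1,1}
            sum += Math.max(0.0, 1.0 - t * predictions[i]);
        }
        return sum / label.length;
    }

    public static void main(String[] args) {
        // t = {1,-1}; per-element losses = {max(0, 1-0.8), max(0, 1+(-1)*(-0.5))}
        //           = {0.2, 0.5} -> mean ~ 0.35
        System.out.println(hingeLoss(new double[]{1, 0}, new double[]{0.8, -0.5}));
    }
}
```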
public SDVariable hingeLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
L = max(0, 1 - t * predictions)
where t is the label values after internally converting to {-1,1} from the user-specified {0,1}
name - Name for the output variable. May be null.
label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
public SDVariable hingeLoss(SDVariable label, SDVariable predictions, SDVariable weights)
L = max(0, 1 - t * predictions)
where t is the label values after internally converting to {-1,1} from the user-specified {0,1}
label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable hingeLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights)
L = max(0, 1 - t * predictions)
where t is the label values after internally converting to {-1,1} from the user-specified {0,1}
name - Name for the output variable. May be null.
label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable huberLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)
L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
delta - Loss function delta value
public SDVariable huberLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double delta)
L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
delta - Loss function delta value
public SDVariable huberLoss(SDVariable label, SDVariable predictions, SDVariable weights, double delta)
L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
delta - Loss function delta value
public SDVariable huberLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, double delta)
L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
delta - Loss function delta value
public SDVariable l2Loss(SDVariable var)
L2 loss: 1/2 * sum(x^2)
var - Variable to calculate L2 loss of (NUMERIC type)
public SDVariable l2Loss(String name, SDVariable var)
L2 loss: 1/2 * sum(x^2)
name - Name for the output variable. May be null.
var - Variable to calculate L2 loss of (NUMERIC type)
public SDVariable logLoss(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
epsilon - epsilon
public SDVariable logLoss(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, double epsilon)
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
epsilon - epsilon
public SDVariable logLoss(SDVariable label, SDVariable predictions)
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
public SDVariable logLoss(String name, SDVariable label, SDVariable predictions)
-1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
public SDVariable logPoisson(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)
L = exp(c) - z * c
where c is log(predictions) and z is labels.
label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
full - Boolean flag. true for logPoissonFull, false for logPoisson
public SDVariable logPoisson(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce, boolean full)
L = exp(c) - z * c
where c is log(predictions) and z is labels.
name - Name for the output variable. May be null.
label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
full - Boolean flag. true for logPoissonFull, false for logPoisson
public SDVariable logPoisson(SDVariable label, SDVariable predictions, SDVariable weights, boolean full)
L = exp(c) - z * c
where c is log(predictions) and z is labels.
label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
full - Boolean flag. true for logPoissonFull, false for logPoisson
public SDVariable logPoisson(String name, SDVariable label, SDVariable predictions, SDVariable weights, boolean full)
L = exp(c) - z * c
where c is log(predictions) and z is labels.
name - Name for the output variable. May be null.
label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
full - Boolean flag. true for logPoissonFull, false for logPoisson
public SDVariable meanPairwiseSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
[((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
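For a single example, the pairwise comparison above can be sketched in plain Java. This mirrors only the formula (a per-example weight of 1.0 is assumed); the class name and values are made up, and it is not the ND4J implementation:

```java
// Plain-Java sketch of mean pairwise squared error for one example:
// for every pair (i, j), compare the prediction difference with the
// label difference, square it, and average over the number of pairs.
public class MpwseDemo {
    public static double mpwse(double[] label, double[] predictions) {
        double sum = 0.0;
        int pairs = 0;
        for (int i = 0; i < label.length; i++) {
            for (int j = i + 1; j < label.length; j++) {
                double d = (predictions[i] - predictions[j]) - (label[i] - label[j]);
                sum += d * d;
                pairs++;
            }
        }
        return pairs == 0 ? 0.0 : sum / pairs;
    }

    public static void main(String[] args) {
        // ((1-2)-(0-2))^2 + ((1-4)-(0-3))^2 + ((2-4)-(2-3))^2 = 1 + 0 + 1 -> 2/3
        System.out.println(mpwse(new double[]{0, 2, 3}, new double[]{1, 2, 4}));
    }
}
```

The loss is invariant to a constant shift of the predictions, since only differences between elements enter the formula.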
public SDVariable meanPairwiseSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
[((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
public SDVariable meanPairwiseSquaredError(SDVariable label, SDVariable predictions, SDVariable weights)
[((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
public SDVariable meanPairwiseSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights)
[((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
public SDVariable meanSquaredError(SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
(label[i] - prediction[i])^2 - i.e., squared error on a per-element basis. When averaged (using LossReduce.MEAN_BY_WEIGHT or LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT (the default)) this is the mean squared error loss function.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
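The per-element squared error and the default reduction can be sketched in plain Java. This illustrates the math only (class name and values are made up); it is not the ND4J implementation:

```java
// Plain-Java sketch of meanSquaredError with the default
// MEAN_BY_NONZERO_WEIGHT_COUNT reduction: weighted squared error per
// element, averaged over the entries with a nonzero weight.
public class MseDemo {
    public static double meanSquaredError(double[] label, double[] predictions, double[] weights) {
        double sum = 0.0;
        int nonzero = 0;
        for (int i = 0; i < label.length; i++) {
            double d = label[i] - predictions[i];
            sum += weights[i] * d * d;
            if (weights[i] != 0.0) nonzero++;
        }
        return nonzero == 0 ? 0.0 : sum / nonzero;
    }

    public static void main(String[] args) {
        // (1-2)^2 + (2-4)^2 = 1 + 4 = 5, over 2 nonzero weights -> 2.5
        System.out.println(meanSquaredError(new double[]{1, 2},
                                            new double[]{2, 4},
                                            new double[]{1, 1})); // 2.5
    }
}
```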
public SDVariable meanSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights, LossReduce lossReduce)
(label[i] - prediction[i])^2 - i.e., squared error on a per-element basis. When averaged (using LossReduce.MEAN_BY_WEIGHT or LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT (the default)) this is the mean squared error loss function.
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
public SDVariable meanSquaredError(SDVariable label, SDVariable predictions, SDVariable weights)
(label[i] - prediction[i])^2 - i.e., squared error on a per-element basis. When averaged (using LossReduce.MEAN_BY_WEIGHT or LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT (the default)) this is the mean squared error loss function.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable meanSquaredError(String name, SDVariable label, SDVariable predictions, SDVariable weights)
(label[i] - prediction[i])^2 - i.e., squared error on a per-element basis. When averaged (using LossReduce.MEAN_BY_WEIGHT or LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT (the default)) this is the mean squared error loss function.
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictions - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable sigmoidCrossEntropy(SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
-1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1); label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing
label - Label array (NUMERIC type)
predictionLogits - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
labelSmoothing - Label smoothing value. Default value: 0
public SDVariable sigmoidCrossEntropy(String name, SDVariable label, SDVariable predictionLogits, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
-1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1); label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing
name - Name for the output variable. May be null.
label - Label array (NUMERIC type)
predictionLogits - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
labelSmoothing - Label smoothing value. Default value: 0
public SDVariable sigmoidCrossEntropy(SDVariable label, SDVariable predictionLogits, SDVariable weights)
-1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1); label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing
label - Label array (NUMERIC type)
predictionLogits - Predictions array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
public SDVariable sigmoidCrossEntropy(String name, SDVariable label, SDVariable predictionLogits, SDVariable weights)
-1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
numClasses = labels.size(1);<br> label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing
name
- name May be null. Name for the output variablelabel
- Label array (NUMERIC type)predictionLogits
- Predictions array (NUMERIC type)weights
- Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)public SDVariable softmaxCrossEntropy(SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)
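As a quick numeric check of the sigmoidCrossEntropy formula documented above, here is a minimal plain-Java sketch using only the standard library. The array inputs and values are hypothetical; the actual methods operate on SDVariables inside a SameDiff graph, and apply weights and the LossReduce reduction, which this sketch omits.

```java
// Sketch of per-element sigmoid cross entropy over a flat array (hypothetical
// helper; not the SameDiff implementation).
public class SigmoidXentSketch {

    // -1/numExamples * sum_i (labels[i]*log(sigmoid(logits[i]))
    //                         + (1-labels[i])*log(1-sigmoid(logits[i])))
    public static double sigmoidCrossEntropy(double[] labels, double[] logits) {
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            double p = 1.0 / (1.0 + Math.exp(-logits[i]));   // sigmoid(logits[i])
            sum += labels[i] * Math.log(p) + (1.0 - labels[i]) * Math.log(1.0 - p);
        }
        return -sum / logits.length;
    }

    // Label smoothing as documented: label -> (1 - labelSmoothing)*label + 0.5*labelSmoothing
    public static double[] smooth(double[] labels, double labelSmoothing) {
        double[] out = new double[labels.length];
        for (int i = 0; i < labels.length; i++) {
            out[i] = (1.0 - labelSmoothing) * labels[i] + 0.5 * labelSmoothing;
        }
        return out;
    }
}
```

For example, a label of 1.0 against a logit of 0.0 (sigmoid = 0.5) gives a loss of log(2) ≈ 0.693 for that element.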
-sum_classes label[c] * log(p[c])
where p = softmax(logits)

If LossReduce.NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels.

When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1);
oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

Parameters:
oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
labelSmoothing - Label smoothing value. Default value: 0

public SDVariable softmaxCrossEntropy(String name, SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights, LossReduce lossReduce, double labelSmoothing)

-sum_classes label[c] * log(p[c])
where p = softmax(logits)

If LossReduce.NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels.

When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1);
oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

Parameters:
name - May be null. Name for the output variable
oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT
labelSmoothing - Label smoothing value. Default value: 0

public SDVariable softmaxCrossEntropy(SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights)

-sum_classes label[c] * log(p[c])
where p = softmax(logits)

If LossReduce.NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels.

When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1);
oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

Parameters:
oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)

public SDVariable softmaxCrossEntropy(String name, SDVariable oneHotLabels, SDVariable logitPredictions, SDVariable weights)

-sum_classes label[c] * log(p[c])
where p = softmax(logits)

If LossReduce.NONE is used, the returned shape is [numExamples] for [numExamples, numClasses] predictions/labels.

When label smoothing is > 0, the following label smoothing is used:
numClasses = labels.size(1);
oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

Parameters:
name - May be null. Name for the output variable
oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)

public SDVariable sparseSoftmaxCrossEntropy(SDVariable logits, SDVariable labels)
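A minimal plain-Java sketch of the softmaxCrossEntropy formula documented above, for a single example. The values are hypothetical and weights/reduction are omitted; the actual method operates on SDVariables inside a SameDiff graph.

```java
// Sketch of single-example softmax cross entropy (hypothetical helper; not
// the SameDiff implementation).
public class SoftmaxXentSketch {

    // p = softmax(logits), with the usual max-subtraction for numerical stability
    public static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) max = Math.max(max, x);
        double sum = 0.0;
        double[] p = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] - max);
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    // -sum_classes label[c] * log(p[c])
    public static double crossEntropy(double[] oneHotLabels, double[] logits) {
        double[] p = softmax(logits);
        double loss = 0.0;
        for (int c = 0; c < p.length; c++) {
            loss -= oneHotLabels[c] * Math.log(p[c]);
        }
        return loss;
    }

    // Label smoothing as documented: oneHotLabel -> (1 - ls)*oneHotLabel + ls/numClasses
    public static double[] smooth(double[] oneHot, double ls) {
        int numClasses = oneHot.length;
        double[] out = new double[numClasses];
        for (int c = 0; c < numClasses; c++) {
            out[c] = (1.0 - ls) * oneHot[c] + ls / numClasses;
        }
        return out;
    }
}
```

With uniform logits over two classes, p = {0.5, 0.5}, so the loss against a one-hot label is log(2); smoothing {1, 0} with labelSmoothing = 0.2 yields {0.9, 0.1}.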
Parameters:
logits - Logits array ("pre-softmax activations") (NUMERIC type)
labels - Labels array. Must be an integer type. (INT type)

public SDVariable sparseSoftmaxCrossEntropy(String name, SDVariable logits, SDVariable labels)

Parameters:
name - May be null. Name for the output variable
logits - Logits array ("pre-softmax activations") (NUMERIC type)
labels - Labels array. Must be an integer type. (INT type)

public SDVariable weightedCrossEntropyWithLogits(SDVariable targets, SDVariable inputs, SDVariable weights)
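sparseSoftmaxCrossEntropy takes an integer class index per example rather than a one-hot row, and the per-example loss value equals softmax cross entropy with the equivalent one-hot label, since only the true class contributes to the sum. A hypothetical plain-Java sketch of that single-example computation:

```java
// Sketch of single-example sparse softmax cross entropy (hypothetical helper;
// not the SameDiff implementation).
public class SparseXentSketch {

    // -log(softmax(logits)[label]); with a one-hot label only the true-class
    // term of -sum_c label[c]*log(p[c]) survives.
    public static double sparseCrossEntropy(double[] logits, int label) {
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) max = Math.max(max, x);
        double sumExp = 0.0;
        for (double x : logits) sumExp += Math.exp(x - max);
        // log(softmax(logits)[label]) = logits[label] - max - log(sum_j exp(logits[j] - max))
        return -(logits[label] - max - Math.log(sumExp));
    }
}
```

With three uniform logits, every class has probability 1/3, so the loss is log(3) regardless of the label index.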
Parameters:
targets - Targets array (NUMERIC type)
inputs - Input array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)

public SDVariable weightedCrossEntropyWithLogits(String name, SDVariable targets, SDVariable inputs, SDVariable weights)

Parameters:
name - May be null. Name for the output variable
targets - Targets array (NUMERIC type)
inputs - Input array (NUMERIC type)
weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)

Copyright © 2020. All rights reserved.