This loss function measures the Binary Cross Entropy between the target and the output:

loss(o, t) = -1/n sum_i (t[i] * log(o[i]) + (1 - t[i]) * log(1 - o[i]))

or, when the weights argument is specified:

loss(o, t) = -1/n sum_i weights[i] * (t[i] * log(o[i]) + (1 - t[i]) * log(1 - o[i]))
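The formula above can be sketched in NumPy as follows; the function name and the clipping epsilon are illustrative assumptions, not part of the library API:

```python
import numpy as np

def binary_cross_entropy(output, target, weights=None):
    # Clip probabilities away from 0 and 1 to avoid log(0)
    o = np.clip(output, 1e-12, 1 - 1e-12)
    # Element-wise BCE: -(t*log(o) + (1-t)*log(1-o))
    per_elem = -(target * np.log(o) + (1 - target) * np.log(1 - o))
    if weights is not None:
        per_elem = weights * per_elem  # optional per-element weights
    return per_elem.mean()
```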
This is the same as the cross entropy criterion, except that the target tensor is a one-hot tensor.
The negative of the mean cosine proximity between predictions and targets. The cosine proximity is defined as below:

x'(i) = x(i) / sqrt(max(sum(x(i)^2), 1e-12))
y'(i) = y(i) / sqrt(max(sum(y(i)^2), 1e-12))
cosine_proximity(x, y) = mean(-1 * x'(i) * y'(i))
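A minimal NumPy sketch of the definition above, normalizing along the last axis; the function name is an assumption for illustration:

```python
import numpy as np

def cosine_proximity_loss(x, y):
    # L2-normalize each sample, guarding against division by zero with 1e-12
    xn = x / np.sqrt(np.maximum(np.sum(x**2, axis=-1, keepdims=True), 1e-12))
    yn = y / np.sqrt(np.maximum(np.sum(y**2, axis=-1, keepdims=True), 1e-12))
    # Negative mean of the element-wise product of the normalized tensors
    return np.mean(-xn * yn)
```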
Creates a criterion that optimizes a two-class classification hinge loss (margin-based loss) between input x (a Tensor of dimension 1) and output y.
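The docstring does not spell out the formula, but the standard two-class hinge loss with margin 1 can be sketched as below (names and the margin default are assumptions); the squared-hinge variant simply squares the max(0, ·) term:

```python
import numpy as np

def hinge_loss(x, y, margin=1.0):
    # x: scores, y: labels in {-1, +1}; penalize scores on the wrong
    # side of the margin, zero loss once y*x >= margin
    return np.mean(np.maximum(0.0, margin - y * x))
```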
Loss calculated as:

y_true = K.clip(y_true, K.epsilon(), 1)
y_pred = K.clip(y_pred, K.epsilon(), 1)

and output K.sum(y_true * K.log(y_true / y_pred), axis=-1)
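The same Kullback-Leibler divergence can be sketched in NumPy; the epsilon value stands in for K.epsilon() and is an assumption:

```python
import numpy as np

def kl_divergence(y_true, y_pred, eps=1e-7):
    # Clip both distributions into [eps, 1] before taking logs
    y_true = np.clip(y_true, eps, 1.0)
    y_pred = np.clip(y_pred, eps, 1.0)
    # Per-sample sum of y_true * log(y_true / y_pred) along the last axis
    return np.sum(y_true * np.log(y_true / y_pred), axis=-1)
```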
The base class for Keras-style API objectives in Analytics Zoo.
Input data type.
Target data type.
A loss that measures the mean absolute value of the element-wise difference between the input and the target.
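The mean absolute error described above reduces to a one-liner in NumPy (the function name is illustrative):

```python
import numpy as np

def mean_absolute_error(x, target):
    # Mean of |x_i - target_i| over all elements
    return np.mean(np.abs(x - target))
```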
It calculates diff = K.abs((y - x) / K.clip(K.abs(y), K.epsilon(), Double.MaxValue)) and returns 100 * K.mean(diff) as output.
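A NumPy sketch of this mean absolute percentage error, with an assumed epsilon in place of K.epsilon():

```python
import numpy as np

def mape(y_true, y_pred, eps=1e-7):
    # Relative error, with |y_true| clipped away from zero to avoid division by zero
    diff = np.abs((y_true - y_pred) / np.clip(np.abs(y_true), eps, None))
    # Expressed as a percentage
    return 100.0 * np.mean(diff)
```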
The mean squared error criterion. E.g. with input a, target b, and n total elements:

loss(a, b) = 1/n \sum_i |a_i - b_i|^2
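The formula maps directly to NumPy (function name assumed for illustration):

```python
import numpy as np

def mean_squared_error(a, b):
    # Mean of the squared element-wise differences
    return np.mean((a - b) ** 2)
```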
It calculates:

first_log = K.log(K.clip(y, K.epsilon(), Double.MaxValue) + 1.)
second_log = K.log(K.clip(x, K.epsilon(), Double.MaxValue) + 1.)

and outputs K.mean(K.square(first_log - second_log))
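A NumPy sketch of this mean squared logarithmic error, with an assumed epsilon standing in for K.epsilon():

```python
import numpy as np

def msle(y_true, y_pred, eps=1e-7):
    # log(clip(v) + 1) for both target and prediction
    first_log = np.log(np.clip(y_true, eps, None) + 1.0)
    second_log = np.log(np.clip(y_pred, eps, None) + 1.0)
    # Mean squared difference of the logs
    return np.mean((first_log - second_log) ** 2)
```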
Loss calculated as: K.mean(y_pred - y_true * K.log(y_pred + K.epsilon()), axis=-1)
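The same Poisson loss in NumPy, with an assumed epsilon in place of K.epsilon():

```python
import numpy as np

def poisson_loss(y_true, y_pred, eps=1e-7):
    # Per-sample mean of y_pred - y_true * log(y_pred + eps) along the last axis
    return np.mean(y_pred - y_true * np.log(y_pred + eps), axis=-1)
```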
Hinge loss for pairwise ranking problems.
A loss often used in multi-class classification problems with SoftMax as the last layer of the neural network.
A loss often used in multi-class classification problems with SoftMax as the last layer of the neural network.
By default, the input (y_pred) is expected to contain the probabilities of each class, and the target (y_true) the class label, starting from 0.
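Under the default convention above, the loss picks out the predicted probability of the true class label for each sample; a minimal NumPy sketch (name and epsilon assumed):

```python
import numpy as np

def sparse_categorical_cross_entropy(y_pred, y_true, eps=1e-12):
    # y_pred: (batch, num_classes) probabilities; y_true: (batch,) integer labels from 0
    probs = y_pred[np.arange(len(y_true)), y_true]
    # Negative log-likelihood of the true class, clipped to avoid log(0)
    return np.mean(-np.log(np.clip(probs, eps, 1.0)))
```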
Creates a criterion that optimizes a two-class classification squared hinge loss (margin-based loss) between input x (a Tensor of dimension 1) and output y.
A subclass of LossFunction where input and target are both Tensors.