Adds regularization to the loss value.
The updated loss is oldLoss + lambda * 1/2 * ||w||_2^2, where w is the weight vector and lambda is the regularization parameter.
The loss to be updated
The weights used to update the loss
The regularization parameter to be applied
Updated loss
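As a sketch of the formula above (assuming NumPy arrays; the names `loss`, `weights`, and `lam` are illustrative, not from the original API):

```python
import numpy as np

def l2_regularized_loss(loss, weights, lam):
    # newLoss = oldLoss + lambda * 1/2 * ||w||_2^2
    return loss + lam * 0.5 * float(np.dot(weights, weights))

# Example: loss 1.0, w = [3, 4] so ||w||_2^2 = 25, lambda = 0.1
# -> 1.0 + 0.1 * 0.5 * 25 = 2.25
print(l2_regularized_loss(1.0, np.array([3.0, 4.0]), 0.1))
```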
Calculates the new weights based on the gradient and the L2 regularization penalty.
The updated weight is w - learningRate * (gradient + lambda * w), where w is the weight vector and lambda is the regularization parameter.
The weights to be updated
The gradient according to which we will update the weights
The regularization parameter to be applied
The effective step size for this iteration
Updated weights
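The update rule above can be sketched as follows (a minimal NumPy version; the names `weights`, `gradient`, `lam`, and `learning_rate` are illustrative, not from the original API):

```python
import numpy as np

def l2_regularized_step(weights, gradient, lam, learning_rate):
    # newW = w - learningRate * (gradient + lambda * w)
    return weights - learning_rate * (gradient + lam * weights)

# Example: w = [1, 2], gradient = [0.5, 0.5], lambda = 0.1, learning rate = 0.1
# -> [1 - 0.1 * 0.6, 2 - 0.1 * 0.7] = [0.94, 1.93]
print(l2_regularized_step(np.array([1.0, 2.0]), np.array([0.5, 0.5]), 0.1, 0.1))
```

Note that the lambda * w term is simply the gradient of the penalty 1/2 * ||w||_2^2, so the step is ordinary gradient descent on the regularized loss.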
L2 regularization penalty. The regularization function is the square of the L2 norm, 1/2 * ||w||_2^2, with w being the weight vector. The function penalizes large weights, favoring solutions with many small weights rather than a few large ones.
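To illustrate the preference for many small weights (a hypothetical comparison, not part of the original documentation): two weight vectors with the same total L1 mass receive very different L2 penalties when one concentrates the mass in a single component:

```python
import numpy as np

def l2_penalty(w):
    # 1/2 * ||w||_2^2
    return 0.5 * float(np.dot(w, w))

spread = np.array([1.0, 1.0, 1.0, 1.0])  # many small weights, L1 norm = 4
peaked = np.array([4.0, 0.0, 0.0, 0.0])  # one large weight, same L1 norm
print(l2_penalty(spread))  # 0.5 * 4  = 2.0
print(l2_penalty(peaked))  # 0.5 * 16 = 8.0
```

Because the penalty is quadratic in each component, the peaked vector is penalized four times as heavily even though both carry the same absolute weight mass.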