Tracks information about the optimizer's state, including the current point, its value and gradient, and any history the optimizer maintains. Also includes the information used to check convergence.
the current point being considered.
f(x), the value of the objective at the current point.
f.gradientAt(x), the gradient of the objective at the current point.
f(x) + r(x), where r is any regularization added to the objective. For LBFGS, this is just f(x).
f'(x) + r'(x), where r is any regularization added to the objective. For LBFGS, this is just f'(x).
the current iteration number.
f(x_0) + r(x_0), the adjusted objective at the initial point, used for checking convergence.
any information the optimizer needs to carry between updates.
whether the line search failed.
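The fields above can be gathered into a single state record. The following is a minimal sketch in Python (the names `OptimizerState`, `converged`, and the field names are illustrative assumptions, not the library's actual API; the convergence test shown, a gradient-norm check scaled by the initial adjusted value, is one common scheme, not necessarily the one the library uses):

```python
import math
from dataclasses import dataclass
from typing import Any, List


@dataclass
class OptimizerState:
    """Hypothetical container mirroring the fields described above."""
    x: List[float]                   # the current point being considered
    value: float                     # f(x)
    grad: List[float]                # f.gradientAt(x)
    adjusted_value: float            # f(x) + r(x); just f(x) for LBFGS
    adjusted_gradient: List[float]   # f'(x) + r'(x); just f'(x) for LBFGS
    iteration: int                   # the current iteration number
    initial_adj_val: float           # f(x_0) + r(x_0), for convergence checks
    history: Any = None              # optimizer-specific update information
    search_failed: bool = False      # whether the line search failed


def converged(state: OptimizerState, grad_tol: float = 1e-6) -> bool:
    # Stop when the adjusted gradient norm is small relative to the
    # scale of the initial adjusted objective, or on line-search failure.
    gnorm = math.sqrt(sum(g * g for g in state.adjusted_gradient))
    return state.search_failed or gnorm <= grad_tol * max(1.0, abs(state.initial_adj_val))
```

An optimizer's main loop would build a fresh state each iteration, carrying `history` forward, and exit when `converged(state)` returns true.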