package optimize
Type Members
- class AdaDeltaGradientDescent[T] extends StochasticGradientDescent[T]
Created by jda on 3/17/15.
- class ApproximateGradientFunction[K, T] extends DiffFunction[T]
Approximates a gradient by finite differences.
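For illustration, a minimal sketch of wrapping a plain objective so its gradient is estimated numerically (this assumes the single-argument constructor with a default epsilon; the implicit evidence the class requires resolves for DenseVector[Double]):
```scala
import breeze.linalg.DenseVector
import breeze.optimize.ApproximateGradientFunction

// Objective f(x) = x . x, whose true gradient is 2x, so the
// finite-difference estimate is easy to sanity-check.
val f = (x: DenseVector[Double]) => x dot x

// Wrap f so gradientAt is computed by finite differences rather than analytically.
val approxF = new ApproximateGradientFunction[Int, DenseVector[Double]](f)

val g = approxF.gradientAt(DenseVector(1.0, 2.0, 3.0)) // ≈ DenseVector(2.0, 4.0, 6.0)
```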
- trait ApproximateLineSearch extends MinimizingLineSearch
A line search optimizes a function of one variable without analytic gradient information. It's often used approximately (e.g. in backtracking line search), where there is no intrinsic termination criterion, only extrinsic ones.
- class BacktrackingLineSearch extends ApproximateLineSearch
Implements the backtracking line search like that in LBFGS-C (which is (c) 2007-2010 Naoaki Okazaki, under the BSD license).
The basic idea is to find a step length alpha at which f is sufficiently smaller than f(0), possibly also requiring that the slope of f decrease by the right amount (the Wolfe conditions).
- trait BatchDiffFunction[T] extends DiffFunction[T] with (T, IndexedSeq[Int]) => Double
A diff function that supports subsets of the data. By default it evaluates on all the data.
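A sketch of implementing the trait for a toy least-squares objective (the data here is hypothetical; the two abstract members are calculate over a batch of indices, and fullRange):
```scala
import breeze.linalg.DenseVector
import breeze.optimize.BatchDiffFunction

// Hypothetical dataset of (feature vector, target) pairs.
val data = IndexedSeq(
  (DenseVector(1.0, 0.0), 1.0),
  (DenseVector(0.0, 1.0), -1.0)
)

val batchObjective = new BatchDiffFunction[DenseVector[Double]] {
  // All example indices; used when no batch is specified.
  def fullRange: IndexedSeq[Int] = data.indices

  // Squared-error loss and gradient accumulated over the requested batch.
  def calculate(w: DenseVector[Double], batch: IndexedSeq[Int]): (Double, DenseVector[Double]) = {
    var loss = 0.0
    val grad = DenseVector.zeros[Double](w.length)
    for (i <- batch) {
      val (x, y) = data(i)
      val err = (w dot x) - y
      loss += 0.5 * err * err
      grad += x * err
    }
    (loss, grad)
  }
}
```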
- case class BatchSize(size: Int) extends OptimizationOption with Product with Serializable
- class CachedBatchDiffFunction[T] extends BatchDiffFunction[T]
- class CachedDiffFunction[T] extends DiffFunction[T]
- class CompactHessian extends NumericOps[CompactHessian]
- abstract class CubicLineSearch extends SerializableLogging with MinimizingLineSearch
- trait DiffFunction[T] extends StochasticDiffFunction[T] with NumericOps[DiffFunction[T]]
Represents a differentiable function whose output is guaranteed to be consistent
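The standard way to define one is to implement calculate, returning the value and gradient together:
```scala
import breeze.linalg.DenseVector
import breeze.optimize.DiffFunction

// f(x) = ||x - 3||^2 with its analytic gradient 2(x - 3).
val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

f.valueAt(DenseVector(3.0, 3.0))    // 0.0
f.gradientAt(DenseVector(0.0, 0.0)) // DenseVector(-6.0, -6.0)
```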
- sealed trait DiffFunctionOpImplicits extends AnyRef
- class EmpiricalHessian[T] extends AnyRef
The empirical Hessian evaluates the derivative for multiplication:
H d = \lim_{\epsilon \to 0} (f'(x + \epsilon d) - f'(x)) / \epsilon
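The limit above suggests the standard finite-difference Hessian-vector product. A sketch of the idea (not the EmpiricalHessian API itself), with grad standing in for f':
```scala
import breeze.linalg.DenseVector

// H * d ≈ (f'(x + eps * d) - f'(x)) / eps for a small fixed eps.
def hessianVector(
    grad: DenseVector[Double] => DenseVector[Double],
    x: DenseVector[Double],
    d: DenseVector[Double],
    eps: Double = 1e-6): DenseVector[Double] =
  (grad(x + (d * eps)) - grad(x)) / eps
```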
- sealed class FirstOrderException extends RuntimeException
- abstract class FirstOrderMinimizer[T, DF <: StochasticDiffFunction[T]] extends Minimizer[T, DF] with SerializableLogging
- class FisherDiffFunction[T] extends SecondOrderFunction[T, FisherMatrix[T]]
- class FisherMatrix[T] extends AnyRef
The Fisher matrix approximates the Hessian by E[grad grad']. We further approximate this with a Monte Carlo estimate of the expectation.
- trait IterableOptimizationPackage[Function, Vector, State] extends OptimizationPackage[Function, Vector]
- case class L1Regularization(value: Double = 1.0) extends OptimizationOption with Product with Serializable
- case class L2Regularization(value: Double = 1.0) extends OptimizationOption with Product with Serializable
- class LBFGS[T] extends FirstOrderMinimizer[T, DiffFunction[T]] with SerializableLogging
Port of LBFGS to Scala.
Special note for LBFGS: if you use it in published work, you must cite one of:
- J. Nocedal. Updating Quasi-Newton Matrices with Limited Storage (1980), Mathematics of Computation 35, pp. 773-782.
- D.C. Liu and J. Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization (1989), Mathematical Programming B, 45, 3, pp. 503-528.
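Typical usage (the maxIter and m = history-size parameters shown here match the commonly documented constructor; defaults may vary by version):
```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, LBFGS}

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0) // value and gradient of ||x - 3||^2
  }
}

val lbfgs = new LBFGS[DenseVector[Double]](maxIter = 100, m = 7)
val xOpt = lbfgs.minimize(f, DenseVector.zeros[Double](2)) // ≈ DenseVector(3.0, 3.0)
```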
- class LBFGSB extends FirstOrderMinimizer[DenseVector[Double], DiffFunction[DenseVector[Double]]] with SerializableLogging
This algorithm follows the paper "A Limited Memory Algorithm for Bound Constrained Optimization" by Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. Created by fanming.chen on 2015/3/7. If the StrongWolfeLineSearch(maxZoomIter, maxLineSearchIter) parameters are too small, wolfeRuleSearch.minimize may throw a FirstOrderException; increase those two values to appropriate levels.
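A hypothetical bound-constrained call; the constructor is assumed here to take lower and upper bound vectors ahead of the usual LBFGS knobs, so check the exact signature in your Breeze version:
```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, LBFGSB}

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

// Constrain each coordinate to [0, 2]; the unconstrained optimum (3, 3) is
// infeasible, so the solution should land on the boundary at (2, 2).
val solver = new LBFGSB(DenseVector(0.0, 0.0), DenseVector(2.0, 2.0))
val xOpt = solver.minimize(f, DenseVector(1.0, 1.0))
```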
- trait LineSearch extends ApproximateLineSearch
A line search optimizes a function of one variable without analytic gradient information. It differs from ApproximateLineSearch only in that it tries to find an exact minimizer.
- class LineSearchFailed extends FirstOrderException
- case class MaxIterations(num: Int) extends OptimizationOption with Product with Serializable
- trait Minimizer[T, -F] extends AnyRef
Anything that can minimize a function
- trait MinimizingLineSearch extends AnyRef
- class NaNHistory extends FirstOrderException
- class OWLQN[K, T] extends LBFGS[T] with SerializableLogging
Implements the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method, a variant of LBFGS that handles L1 regularization.
The paper is Andrew and Gao (2007), Scalable Training of L1-Regularized Log-Linear Models.
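A sketch of L1-regularized use; OWLQN applies the L1 penalty itself, so the DiffFunction supplies only the smooth part. The (maxIter, m, l1reg) constructor shown is the commonly seen overload and may differ across versions:
```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, OWLQN}

// Smooth part only: ||x - 3||^2. OWLQN adds l1reg * ||x||_1 on top.
val smooth = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

val owlqn = new OWLQN[Int, DenseVector[Double]](100, 7, 1.0) // maxIter, m, l1reg
val xOpt = owlqn.minimize(smooth, DenseVector.zeros[Double](2))
```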
- sealed trait OptimizationOption extends (OptParams) => OptParams
- trait OptimizationPackage[Function, Vector] extends AnyRef
- sealed trait OptimizationPackageLowPriority extends OptimizationPackageLowPriority2
- sealed trait OptimizationPackageLowPriority2 extends AnyRef
- class ProjectedQuasiNewton extends FirstOrderMinimizer[DenseVector[Double], DiffFunction[DenseVector[Double]]] with Projecting[DenseVector[Double]] with SerializableLogging
- trait Projecting[T] extends AnyRef
- trait SecondOrderFunction[T, H] extends DiffFunction[T]
Represents a function for which we can easily compute the Hessian.
For conjugate gradient methods, you can play tricks with the Hessian, returning an object that only supports multiplication.
- class SpectralProjectedGradient[T] extends FirstOrderMinimizer[T, DiffFunction[T]] with Projecting[T] with SerializableLogging
SPG is a Spectral Projected Gradient minimizer; it minimizes a differentiable function subject to the optimum lying in some set, given by the projection operator (the projection parameter).
- T: the vector type
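A sketch of box-constrained use, assuming the projection function is the constructor's first argument:
```scala
import breeze.linalg.{clip, DenseVector}
import breeze.optimize.{DiffFunction, SpectralProjectedGradient}

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

// Project every iterate back into the box [0, 2]^n by clamping coordinates.
val spg = new SpectralProjectedGradient[DenseVector[Double]](
  (x: DenseVector[Double]) => clip(x, 0.0, 2.0)
)
val xOpt = spg.minimize(f, DenseVector(1.0, 1.0)) // expect ≈ (2.0, 2.0)
```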
- class StepSizeOverflow extends FirstOrderException
- case class StepSizeScale(alpha: Double = 1.0) extends OptimizationOption with Product with Serializable
- class StepSizeUnderflow extends FirstOrderException
- class StochasticAveragedGradient[T] extends FirstOrderMinimizer[T, BatchDiffFunction[T]]
- trait StochasticDiffFunction[T] extends (T) => Double with NumericOps[StochasticDiffFunction[T]]
A differentiable function whose output is not guaranteed to be the same across consecutive invocations.
- abstract class StochasticGradientDescent[T] extends FirstOrderMinimizer[T, StochasticDiffFunction[T]] with SerializableLogging
Minimizes a function using stochastic gradient descent
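A quick sketch; the companion factory's argument order is assumed to be initial step size then max iterations. Any StochasticDiffFunction works, and a plain DiffFunction qualifies:
```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, StochasticGradientDescent}

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

// Assumed factory shape: StochasticGradientDescent[T](initialStepSize, maxIter).
val sgd = StochasticGradientDescent[DenseVector[Double]](0.5, 200)
val xOpt = sgd.minimize(f, DenseVector.zeros[Double](2))
```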
- class StrongWolfeLineSearch extends CubicLineSearch
- case class Tolerance(fvalTolerance: Double = 1E-5, gvalTolerance: Double = 1e-6) extends OptimizationOption with Product with Serializable
- class TruncatedNewtonMinimizer[T, H] extends Minimizer[T, SecondOrderFunction[T, H]] with SerializableLogging
Implements a truncated Newton trust-region method (like TRON). Also implements "Hessian-free learning". We have a few extra tricks though... :)
Value Members
- def iterations[Objective, Vector, State](fn: Objective, init: Vector, options: OptimizationOption*)(implicit optimization: IterableOptimizationPackage[Objective, Vector, State]): Iterator[State]
Returns a sequence of states representing the iterates of a solver, given a breeze.optimize.IterableOptimizationPackage that knows how to minimize the function. The actual state class varies with the kind of function passed in. Typically, a state has a .x value of type Vector, the current point being evaluated, and a .value, the current objective value.
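For example, to inspect convergence state by state (a sketch; as noted above, states typically expose .x and .value):
```scala
import breeze.linalg.DenseVector
import breeze.optimize._

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

// Lazily walk the solver's iterates, logging the objective at each step.
val states = iterations(f, DenseVector.zeros[Double](2), MaxIterations(50))
states.foreach(s => println(s"value = ${s.value}"))
```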
- def minimize[Objective, Vector](fn: Objective, init: Vector, options: OptimizationOption*)(implicit optimization: OptimizationPackage[Objective, Vector]): Vector
Minimizes a function, given a breeze.optimize.OptimizationPackage that knows how to minimize it.
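Typical one-call usage, combining the OptimizationOptions listed above:
```scala
import breeze.linalg.DenseVector
import breeze.optimize._

val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d * 2.0)
  }
}

val xOpt = minimize(f, DenseVector.zeros[Double](2), MaxIterations(100), L2Regularization(1.0))
```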
- object AdaptiveGradientDescent
Implements the L2^2 and L1 updates from Duchi et al. (2010), Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.
Basically, we use "forward regularization" and an adaptive step size based on the previous gradients.
- object BatchDiffFunction
- object DiffFunction extends DiffFunctionOpImplicits
- object EmpiricalHessian
- object FirstOrderMinimizer extends Serializable
- object FisherMatrix
- object GradientTester extends SerializableLogging
Class that compares the computed gradient with an empirical gradient based on finite differences. Essential for debugging dynamic programs.
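A sketch of catching a buggy gradient; the test(f, x) entry point with [K, T] type parameters is assumed here, and it reports coordinates where the analytic and empirical gradients disagree:
```scala
import breeze.linalg.DenseVector
import breeze.optimize.{DiffFunction, GradientTester}

// Deliberately wrong gradient: missing the factor of 2, so the tester
// should flag every coordinate.
val buggy = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]) = {
    val d = x - 3.0
    (d dot d, d) // should be d * 2.0
  }
}

GradientTester.test[Int, DenseVector[Double]](buggy, DenseVector(1.0, 2.0))
```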
- object LBFGS extends Serializable
- object LBFGSB extends Serializable
- object LineSearch
- object OptimizationOption
- object OptimizationPackage extends OptimizationPackageLowPriority
- case object PreferBatch extends OptimizationOption with Product with Serializable
- case object PreferOnline extends OptimizationOption with Product with Serializable
- object ProjectedQuasiNewton extends SerializableLogging
- object RootFinding
Root finding algorithms
- object SecondOrderFunction
- object StochasticGradientDescent extends Serializable