breeze.optimize.FirstOrderMinimizer

OptParams

case class OptParams(batchSize: Int = 512, regularization: Double = 0.0, alpha: Double = 0.5, maxIterations: Int = 1000, useL1: Boolean = false, tolerance: Double = 1.0E-5, useStochastic: Boolean = false) extends Product with Serializable

OptParams is a Configuration-compatible case class that can be used to select optimization routines at runtime.

Configurations:

  1. useStochastic=false, useL1=false: LBFGS with L2 regularization
  2. useStochastic=false, useL1=true: OWLQN with L1 regularization
  3. useStochastic=true, useL1=false: AdaptiveGradientDescent with L2 regularization
  4. useStochastic=true, useL1=true: AdaptiveGradientDescent with L1 regularization
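
A minimal sketch of constructing each configuration (the import path follows the page title; all numeric values are illustrative):

    import breeze.optimize.FirstOrderMinimizer.OptParams

    // 1) Defaults: LBFGS with an L2 penalty.
    val lbfgs = OptParams(regularization = 1.0)

    // 2) OWLQN with an L1 penalty.
    val owlqn = OptParams(useL1 = true, regularization = 1.0)

    // 3) AdaptiveGradientDescent with an L2 penalty, on minibatches of 128.
    val adagradL2 = OptParams(useStochastic = true, batchSize = 128)

    // 4) AdaptiveGradientDescent with an L1 penalty.
    val adagradL1 = OptParams(useStochastic = true, useL1 = true)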

batchSize

size of the minibatches to use when useStochastic is true and the objective is a BatchDiffFunction.

regularization

regularization constant to use (the weight of the L1 or L2 penalty).

alpha

learning rate; applies only to stochastic gradient descent (SGD).

maxIterations

maximum number of iterations to run.

useL1

if true, use L1 regularization. Otherwise, use L2.

tolerance

convergence tolerance, based on both the average improvement and the norm of the gradient.

useStochastic

if false, use LBFGS or OWLQN; if true, use a variant of stochastic gradient descent.

Linear Supertypes
Serializable (both scala.Serializable and java.io.Serializable), Product, Equals, AnyRef, Any

Instance Constructors

  1. new OptParams(batchSize: Int = 512, regularization: Double = 0.0, alpha: Double = 0.5, maxIterations: Int = 1000, useL1: Boolean = false, tolerance: Double = 1.0E-5, useStochastic: Boolean = false)

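Since OptParams is a case class, the constructor is usually invoked with named arguments so that only the overridden defaults are spelled out, and copy derives variants cheaply. A minimal sketch (all values are illustrative):

    import breeze.optimize.FirstOrderMinimizer.OptParams

    val params = new OptParams(
      regularization = 0.1, // weight of the penalty term
      maxIterations = 200,
      tolerance = 1e-6
    )

    // Derive an L1-regularized variant without restating the rest:
    val l1Params = params.copy(useL1 = true)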

Value Members

  1. val alpha: Double

    learning rate; applies only to stochastic gradient descent (SGD).

  2. val batchSize: Int

    size of the minibatches to use when useStochastic is true and the objective is a BatchDiffFunction.

  3. def iterations[T](f: DiffFunction[T], init: T)(implicit vspace: MutableCoordinateSpace[T, Double]): Iterator[LBFGS.State]

    returns a lazy iterator over optimizer states, one per iteration.

  4. def iterations[T](f: StochasticDiffFunction[T], init: T)(implicit arith: MutableCoordinateSpace[T, Double]): Iterator[State]

  5. def iterations[T](f: BatchDiffFunction[T], init: T)(implicit arith: MutableCoordinateSpace[T, Double]): Iterator[State]

  6. val maxIterations: Int

    maximum number of iterations to run.

  7. def minimize[T](f: DiffFunction[T], init: T)(implicit arith: MutableCoordinateSpace[T, Double]): T

    minimizes f starting from init, using the routine selected by the flags above; see the usage sketch after this member list.

  8. def minimize[T](f: BatchDiffFunction[T], init: T)(implicit arith: MutableCoordinateSpace[T, Double]): T

  9. val regularization: Double

    regularization constant to use (the weight of the L1 or L2 penalty).

  10. val tolerance: Double

    convergence tolerance, based on both the average improvement and the norm of the gradient.

  11. val useL1: Boolean

    if true, use L1 regularization. Otherwise, use L2.

  12. val useStochastic: Boolean

    if false, use LBFGS or OWLQN; if true, use a variant of stochastic gradient descent.

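A minimal end-to-end sketch of minimize and iterations (the quadratic objective and all values are illustrative; assumes a Breeze version whose implicits match the signatures above):

    import breeze.linalg.DenseVector
    import breeze.optimize.DiffFunction
    import breeze.optimize.FirstOrderMinimizer.OptParams

    // f(x) = ||x - 3||^2, with gradient 2(x - 3).
    val f = new DiffFunction[DenseVector[Double]] {
      def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
        val residual = x - 3.0
        (residual dot residual, residual * 2.0)
      }
    }

    val params = OptParams(maxIterations = 100, tolerance = 1e-6)

    // One-shot minimization; LBFGS here, since useStochastic = useL1 = false.
    val xmin = params.minimize(f, DenseVector.zeros[Double](5))

    // Or step through the optimizer one State at a time:
    val lastState = params.iterations(f, DenseVector.zeros[Double](5)).toSeq.last
    println(s"converged at ${lastState.x} with value ${lastState.value}")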
