breeze.optimize

StochasticGradientDescent

abstract class StochasticGradientDescent[T] extends FirstOrderMinimizer[T, StochasticDiffFunction[T]] with SerializableLogging

Minimizes a function using stochastic gradient descent.

Linear Supertypes
FirstOrderMinimizer[T, StochasticDiffFunction[T]], SerializableLogging, Serializable, Serializable, Minimizer[T, StochasticDiffFunction[T]], AnyRef, Any

Instance Constructors

  1. new StochasticGradientDescent(defaultStepSize: Double, maxIter: Int, tolerance: Double = 1E-5, improvementTol: Double = 1E-4, minImprovementWindow: Int = 50)(implicit vspace: NormedModule[T, Double])
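For concreteness, a small usage sketch follows. It assumes the companion object's `StochasticGradientDescent.apply` (which returns the history-free `SimpleSGD` subclass; neither is shown on this page) and a hand-rolled quadratic standing in for a stochastic objective, so treat the objective and step size as illustrative.

```scala
import breeze.linalg.DenseVector
import breeze.optimize.{StochasticDiffFunction, StochasticGradientDescent}

// A deterministic quadratic f(x) = ||x - c||^2 standing in for a
// stochastic objective; a real one would compute the value and
// gradient from a random minibatch on each call to calculate.
val c = DenseVector(3.0, -2.0)
val f = new StochasticDiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val r = x - c
    (r dot r, r * 2.0) // (value, gradient)
  }
}

// defaultStepSize = 0.5, maxIter = 500; the remaining parameters
// keep the defaults shown in the constructor above.
val sgd = StochasticGradientDescent[DenseVector[Double]](0.5, 500)
val xStar = sgd.minimize(f, DenseVector.zeros[Double](2))
// xStar ≈ DenseVector(3.0, -2.0)
```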

Type Members

  1. abstract type History

Any history the derived minimization function needs to do its updates, typically an approximation to the second derivative/Hessian matrix.

    Definition Classes
    FirstOrderMinimizer
  2. case class State(x: T, value: Double, grad: T, adjustedValue: Double, adjustedGradient: T, iter: Int, initialAdjVal: Double, history: History, fVals: IndexedSeq[Double] = Vector(Double.PositiveInfinity), numImprovementFailures: Int = 0, searchFailed: Boolean = false) extends Product with Serializable

Tracks information about the optimizer, including the current point, its value, gradient, and any history. Also includes information for checking convergence.

    x

    the current point being considered

    value

    f(x)

    grad

    f.gradientAt(x)

    adjustedValue

    f(x) + r(x), where r is any regularization added to the objective. For LBFGS, this is f(x).

    adjustedGradient

    f'(x) + r'(x), where r is any regularization added to the objective. For LBFGS, this is f'(x).

    iter

    what iteration number we are on.

    initialAdjVal

    f(x_0) + r(x_0), used for checking convergence

    history

    any information needed by the optimizer to do updates.

    fVals

the sequence of the last minImprovementWindow values, used to check whether the value has stopped improving

    numImprovementFailures

    the number of times in a row the objective hasn't improved, mostly for SGD

    searchFailed

    did the line search fail?

    Definition Classes
    FirstOrderMinimizer
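Because State exposes this bookkeeping directly, you can watch convergence yourself rather than calling minimize. A minimal sketch, reusing the `sgd` and `f` values from the constructor example above (`iterations` and `minimizeAndReturnState` are documented under Concrete Value Members below):

```scala
// Drive the optimizer state-by-state and log the fields above.
for (st <- sgd.iterations(f, DenseVector.zeros[Double](2))) {
  println(f"iter=${st.iter}%3d value=${st.value}%.6f " +
          f"adjusted=${st.adjustedValue}%.6f failures=${st.numImprovementFailures}")
}

// Or grab just the final State in one call:
val last = sgd.minimizeAndReturnState(f, DenseVector.zeros[Double](2))
println(s"x = ${last.x} after ${last.iter} iterations")
```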

Abstract Value Members

  1. abstract def initialHistory(f: StochasticDiffFunction[T], init: T): History

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  2. abstract def updateHistory(newX: T, newGrad: T, newVal: Double, f: StochasticDiffFunction[T], oldState: State): History

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
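These two members (plus the History type) are all a concrete subclass has to supply. A minimal sketch: a hypothetical PlainSGD that carries no history between iterations (History = Unit), which is essentially what Breeze's own SimpleSGD does.

```scala
import breeze.linalg.DenseVector
import breeze.optimize.{StochasticDiffFunction, StochasticGradientDescent}

// Plain SGD needs no per-iteration history (e.g. no second-order
// information), so both hooks are trivial.
class PlainSGD(eta: Double = 0.5, maxIter: Int = 500)
    extends StochasticGradientDescent[DenseVector[Double]](eta, maxIter) {

  type History = Unit

  protected def initialHistory(
      f: StochasticDiffFunction[DenseVector[Double]],
      init: DenseVector[Double]): History = ()

  protected def updateHistory(
      newX: DenseVector[Double],
      newGrad: DenseVector[Double],
      newVal: Double,
      f: StochasticDiffFunction[DenseVector[Double]],
      oldState: State): History = ()
}
```

An adaptive variant would instead make History carry, say, a running sum of squared gradients and read it back when determining the step size.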

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def adjust(newX: T, newGrad: T, newVal: Double): (Double, T)

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  5. def adjustFunction(f: StochasticDiffFunction[T]): StochasticDiffFunction[T]

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def calculateObjective(f: StochasticDiffFunction[T], x: T, history: History): (Double, T)

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  8. def chooseDescentDirection(state: State, fn: StochasticDiffFunction[T]): T

    Attributes
    protected
    Definition Classes
StochasticGradientDescent → FirstOrderMinimizer
  9. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  10. val defaultStepSize: Double

  11. def determineStepSize(state: State, f: StochasticDiffFunction[T], dir: T): Double

Choose a step size scale for this iteration.

The default is eta / math.pow(state.iter + 1, 2.0 / 3.0), where eta is the defaultStepSize; see the sketch after this member list.

    Definition Classes
StochasticGradientDescent → FirstOrderMinimizer
  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. def infiniteIterations(f: StochasticDiffFunction[T], state: State): Iterator[State]

    Definition Classes
    FirstOrderMinimizer
  18. def initialState(f: StochasticDiffFunction[T], init: T): State

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  19. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  20. def iterations(f: StochasticDiffFunction[T], init: T): Iterator[State]

    Definition Classes
    FirstOrderMinimizer
  21. def logger: LazyLogger

    Attributes
    protected
    Definition Classes
    SerializableLogging
  22. val maxIter: Int

  23. val minImprovementWindow: Int

The number of iterations over which the function value must improve by at least improvementTol.

    Definition Classes
    FirstOrderMinimizer
  24. def minimize(f: StochasticDiffFunction[T], init: T): T

    Definition Classes
FirstOrderMinimizer → Minimizer
  25. def minimizeAndReturnState(f: StochasticDiffFunction[T], init: T): State

    Definition Classes
    FirstOrderMinimizer
  26. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  27. final def notify(): Unit

    Definition Classes
    AnyRef
  28. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  29. val numberOfImprovementFailures: Int

    Definition Classes
    FirstOrderMinimizer
  30. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  31. def takeStep(state: State, dir: T, stepSize: Double): T

Projects the vector x onto whatever ball is needed; can also incorporate regularization.

The default just takes a plain step in the direction dir scaled by stepSize; see the sketch after this member list.

    Attributes
    protected
    Definition Classes
StochasticGradientDescent → FirstOrderMinimizer
  32. def toString(): String

    Definition Classes
    AnyRef → Any
  33. def updateFValWindow(oldState: State, newAdjVal: Double): IndexedSeq[Double]

    Attributes
    protected
    Definition Classes
StochasticGradientDescent → FirstOrderMinimizer
  34. implicit val vspace: NormedModule[T, Double]

    Attributes
    protected
  35. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  36. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
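To make the defaults of determineStepSize and takeStep concrete, the sketch below hand-rolls what one iteration amounts to under them. `sgdStep` is an illustrative helper, not Breeze API; the step-size formula and the additive update come from the member documentation above, and the negated gradient matches chooseDescentDirection's behavior here.

```scala
import breeze.linalg.DenseVector

// One hand-rolled SGD iteration under the documented defaults:
//   step size: eta / (iter + 1)^(2/3)   (determineStepSize)
//   update:    x + dir * stepSize       (takeStep, unconstrained)
def sgdStep(x: DenseVector[Double],
            grad: DenseVector[Double],
            iter: Int,
            eta: Double): DenseVector[Double] = {
  val stepSize = eta / math.pow(iter + 1, 2.0 / 3.0)
  val dir = -grad // steepest-descent direction
  x + dir * stepSize
}

// The schedule decays polynomially: fast enough to settle, slow
// enough to make progress early on.
//   iter = 0  -> eta
//   iter = 7  -> eta / 4     (8^(2/3)  = 4)
//   iter = 63 -> eta / 16    (64^(2/3) = 16)
```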

Inherited from SerializableLogging

Inherited from Serializable

Inherited from Serializable

Inherited from Minimizer[T, StochasticDiffFunction[T]]

Inherited from AnyRef

Inherited from Any
