breeze.optimize.AdaptiveGradientDescent

L2Regularization

trait L2Regularization[T] extends StochasticGradientDescent[T]

Implements the L2 regularization update.

Each step is:

x_{t+1,i} = (s_{t,i} * x_{t,i} - \eta * g_{t,i}) / (\eta * \lambda + \delta + s_{t,i})

where g_{t,i} is the i-th coordinate of the gradient at step t, \lambda and \delta correspond to the lambda and delta members below, and s_{t,i} = \sqrt(\sum_{t'=1}^{t} g_{t',i}^2).
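
For concreteness, here is a minimal standalone sketch of one such step, using plain arrays in place of the vector type T. This illustrates the formula only and is not the Breeze API; the eta, lambda, and delta parameters mirror the step size, regularization constant, and smoothing term above.

object AdaGradL2Step {
  /** One update step: returns (x_{t+1}, updated sum of squared gradients). */
  def step(
      x: Array[Double],         // current iterate x_t
      grad: Array[Double],      // gradient g_t at x_t
      sumSqGrad: Array[Double], // sum of g_{t',i}^2 for t' < t
      eta: Double,              // step size \eta
      lambda: Double,           // L2 regularization constant \lambda
      delta: Double             // smoothing term \delta
  ): (Array[Double], Array[Double]) = {
    // Accumulate the squared gradient first, so s_{t,i} includes g_{t,i}.
    val newSumSq = Array.tabulate(x.length)(i => sumSqGrad(i) + grad(i) * grad(i))
    val newX = Array.tabulate(x.length) { i =>
      val s = math.sqrt(newSumSq(i)) // s_{t,i}
      (s * x(i) - eta * grad(i)) / (eta * lambda + delta + s)
    }
    (newX, newSumSq)
  }
}

Note how the per-coordinate denominator grows with the accumulated squared gradients, so frequently-updated coordinates take smaller steps, while the \eta * \lambda term shrinks x toward zero.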

Linear Supertypes
StochasticGradientDescent[T], FirstOrderMinimizer[T, StochasticDiffFunction[T]], Logging, Minimizer[T, StochasticDiffFunction[T]], AnyRef, Any

Type Members

  1. case class History(sumOfSquaredGradients: T) extends Product with Serializable

    Definition Classes
    L2Regularization → FirstOrderMinimizer
  2. case class State(x: T, value: Double, grad: T, adjustedValue: Double, adjustedGradient: T, iter: Int, initialAdjVal: Double, history: History, fVals: IndexedSeq[Double] = ..., numImprovementFailures: Int = 0, searchFailed: Boolean = false) extends Product with Serializable

    Definition Classes
    FirstOrderMinimizer

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def adjust(newX: T, newGrad: T, newVal: Double): (Double, T)

    Attributes
    protected
    Definition Classes
    L2Regularization → FirstOrderMinimizer
  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def chooseDescentDirection(state: State): T

    Attributes
    protected
    Definition Classes
    StochasticGradientDescent → FirstOrderMinimizer
  9. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  10. val defaultStepSize: Double

    Definition Classes
    StochasticGradientDescent
  11. val delta: Double

  12. def determineStepSize(state: State, f: StochasticDiffFunction[T], dir: T): Double

    Choose a step size scale for this iteration.

    Default is eta / math.pow(state.iter + 1, 2.0 / 3.0). (See the schedule sketch after this member list.)

    Definition Classes
    L2Regularization → StochasticGradientDescent → FirstOrderMinimizer
  13. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  15. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  16. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  17. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  18. def initialHistory(f: StochasticDiffFunction[T], init: T): History

    Definition Classes
    L2Regularization → FirstOrderMinimizer
  19. def initialState(f: StochasticDiffFunction[T], init: T): State

    Attributes
    protected
    Definition Classes
    FirstOrderMinimizer
  20. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  21. def iterations(f: StochasticDiffFunction[T], init: T): Iterator[State]

    Definition Classes
    FirstOrderMinimizer
  22. val lambda: Double

  23. lazy val logger: Logger

    Attributes
    protected
    Definition Classes
    Logging
  24. val maxIter: Int

    Definition Classes
    StochasticGradientDescent
  25. val minImprovementWindow: Int

    Definition Classes
    FirstOrderMinimizer
  26. def minimize(f: StochasticDiffFunction[T], init: T): T

    Definition Classes
    FirstOrderMinimizer → Minimizer
  27. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. final def notify(): Unit

    Definition Classes
    AnyRef
  29. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  30. def numDepthChargeSteps: Int

    Take this many steps and then reset x to the original x. This is mostly for AdaGrad, which can behave oddly until it has a good estimate of the rate of change.

    Attributes
    protected
    Definition Classes
    L2Regularization → FirstOrderMinimizer
  31. val numberOfImprovementFailures: Int

    Definition Classes
    FirstOrderMinimizer
  32. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  33. def takeStep(state: State, dir: T, stepSize: Double): T

    Projects the vector x onto whatever ball is needed, and can also incorporate regularization.

    The default just takes a step; this trait overrides it to apply the L2-regularized update described above.

    Attributes
    protected
    Definition Classes
    L2Regularization → StochasticGradientDescent → FirstOrderMinimizer
  34. def toString(): String

    Definition Classes
    AnyRef → Any
  35. def updateFValWindow(oldState: State, newAdjVal: Double): IndexedSeq[Double]

    Attributes
    protected
    Definition Classes
    StochasticGradientDescent → FirstOrderMinimizer
  36. def updateHistory(newX: T, newGrad: T, newValue: Double, oldState: State): History

    Definition Classes
    L2Regularization → FirstOrderMinimizer
  37. implicit val vspace: MutableCoordinateSpace[T, Double]

    Attributes
    protected
    Definition Classes
    StochasticGradientDescent
  38. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  39. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  40. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
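
As promised above, a quick sketch of the default step-size schedule quoted for determineStepSize, eta / math.pow(state.iter + 1, 2.0 / 3.0). The value eta = 1.0 below is just an assumed example for illustration, not a Breeze default.

object StepSizeSchedule {
  // The polynomially decaying schedule quoted in determineStepSize.
  def stepSize(eta: Double, iter: Int): Double =
    eta / math.pow(iter + 1, 2.0 / 3.0)

  def main(args: Array[String]): Unit = {
    val eta = 1.0 // assumed value for illustration
    for (iter <- Seq(0, 1, 9, 99))
      println(f"iter=$iter%3d  stepSize=${stepSize(eta, iter)}%.4f")
    // iter=  0  stepSize=1.0000
    // iter=  1  stepSize=0.6300
    // iter=  9  stepSize=0.2154
    // iter= 99  stepSize=0.0464
  }
}

An exponent of 2/3 lies in the classic Robbins-Monro range (1/2, 1], so the step sizes sum to infinity while their squares do not, which lets the iterate keep moving while stochastic noise averages out.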

Inherited from StochasticGradientDescent[T]

Inherited from FirstOrderMinimizer[T, StochasticDiffFunction[T]]

Inherited from Logging

Inherited from Minimizer[T, StochasticDiffFunction[T]]

Inherited from AnyRef

Inherited from Any
