class L2Regularization[T] extends StochasticGradientDescent[T]
Implements the L2 regularization update.
Each step is

$$x_{t+1,i} = \frac{s_{t,i}\, x_{t,i} - \eta\, g_{t,i}}{\eta\,\lambda + \delta + s_{t,i}}$$

where $g_{t,i}$ is the gradient, $s_{t,i} = \sqrt{\sum_{t' \le t} g_{t',i}^2}$ is the accumulated root sum of squared gradients, $\eta$ is the step size, and $\lambda$ and $\delta$ are the regularizationConstant and delta members listed below.
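For orientation, here is a minimal usage sketch. The constructor parameter names (regularizationConstant, stepSize, maxIter) are assumptions inferred from the value members listed below, not a signature confirmed by this page:

```scala
import breeze.linalg.DenseVector
import breeze.optimize.DiffFunction

// Hypothetical example: minimize f(x) = ||x - 3||^2 over R^5.
// DiffFunction extends StochasticDiffFunction, so it can be passed
// to this stochastic minimizer directly.
val f = new DiffFunction[DenseVector[Double]] {
  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    val diff = x - 3.0
    val value = diff dot diff
    (value, diff * 2.0)
  }
}

// The parameter names below are assumptions about the constructor,
// not confirmed by this page.
val optimizer = new L2Regularization[DenseVector[Double]](
  regularizationConstant = 0.1,
  stepSize = 0.5,
  maxIter = 200
)

val xMin = optimizer.minimize(f, DenseVector.zeros[Double](5))
```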
Linear Supertypes
- StochasticGradientDescent
- FirstOrderMinimizer
- SerializableLogging
- Serializable
- Minimizer
- AnyRef
- Any
Instance Constructors
Type Members
- case class History(sumOfSquaredGradients: T) extends Product with Serializable
- type State = FirstOrderMinimizer.State[T, Info, History]
- Definition Classes
- FirstOrderMinimizer
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def adjust(newX: T, newGrad: T, newVal: Double): (Double, T)
- Attributes
- protected
- Definition Classes
- L2Regularization → FirstOrderMinimizer
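The entry above carries no description; as a hedged illustration only, `adjust` for L2 regularization typically folds the quadratic penalty into the reported value and gradient. This is an assumption about the override, written elementwise over arrays rather than the abstract vector type T:

```scala
// Hedged guess at the L2 override: report the objective with the
// quadratic penalty added and the gradient with the matching linear
// term. Not this class's actual code.
def adjustSketch(
    newX: Array[Double],
    newGrad: Array[Double],
    newVal: Double,
    lambda: Double // regularizationConstant
): (Double, Array[Double]) = {
  val penalty = lambda * newX.map(v => v * v).sum / 2.0
  val adjGrad = newX.zip(newGrad).map { case (x, g) => g + lambda * x }
  (newVal + penalty, adjGrad)
}
```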
- def adjustFunction(f: StochasticDiffFunction[T]): StochasticDiffFunction[T]
- Attributes
- protected
- Definition Classes
- FirstOrderMinimizer
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def calculateObjective(f: StochasticDiffFunction[T], x: T, history: History): (Double, T)
- Attributes
- protected
- Definition Classes
- FirstOrderMinimizer
- def chooseDescentDirection(state: State, fn: StochasticDiffFunction[T]): T
- Attributes
- protected
- Definition Classes
- StochasticGradientDescent → FirstOrderMinimizer
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native() @IntrinsicCandidate()
- val convergenceCheck: ConvergenceCheck[T]
- Definition Classes
- FirstOrderMinimizer
- val defaultStepSize: Double
- Definition Classes
- StochasticGradientDescent
- val delta: Double
- def determineStepSize(state: State, f: StochasticDiffFunction[T], dir: T): Double
Choose a step size scale for this iteration. Default is eta / math.pow(state.iter + 1, 2.0 / 3.0); see the sketch below.
- Definition Classes
- L2Regularization → StochasticGradientDescent → FirstOrderMinimizer
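A tiny sketch of the documented schedule, to make the decay concrete:

```scala
// The documented default decay schedule: the step size shrinks
// polynomially in the iteration count, so early iterations move far
// and later iterations settle down. `eta` stands for the base step size.
def stepSizeAt(iter: Int, eta: Double): Double =
  eta / math.pow(iter + 1, 2.0 / 3.0)

// With eta = 1.0: iter 0 -> 1.0, iter 7 -> 0.25, iter 63 -> 0.0625
```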
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def infiniteIterations(f: StochasticDiffFunction[T], state: State): Iterator[State]
- Definition Classes
- FirstOrderMinimizer
- def initialHistory(f: StochasticDiffFunction[T], init: T): History
- Definition Classes
- L2Regularization → FirstOrderMinimizer
- def initialState(f: StochasticDiffFunction[T], init: T): State
- Attributes
- protected
- Definition Classes
- FirstOrderMinimizer
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def iterations(f: StochasticDiffFunction[T], init: T): Iterator[State]
- Definition Classes
- FirstOrderMinimizer
- def logger: LazyLogger
- Attributes
- protected
- Definition Classes
- SerializableLogging
- val maxIter: Int
- Definition Classes
- StochasticGradientDescent
- def minimize(f: StochasticDiffFunction[T], init: T): T
- Definition Classes
- FirstOrderMinimizer → Minimizer
- def minimizeAndReturnState(f: StochasticDiffFunction[T], init: T): State
- Definition Classes
- FirstOrderMinimizer
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- val regularizationConstant: Double
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def takeStep(state: State, dir: T, stepSize: Double): T
Projects the vector x onto whatever ball is needed; it can also incorporate regularization. The default just takes a step in the given direction; a sketch of the documented L2 update follows below.
- Attributes
- protected
- Definition Classes
- L2Regularization → StochasticGradientDescent → FirstOrderMinimizer
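A hedged elementwise sketch of the update from the class description, showing what the L2 override of takeStep computes. This is an illustration over plain arrays, not the actual implementation on the abstract vector type T:

```scala
// Elementwise form of x_{t+1,i} = (s * x - eta * g) / (eta * lambda + delta + s).
// `sumSqGrad` is the running sum of squared gradients kept in History.
def l2TakeStep(
    x: Array[Double],         // current point x_t
    grad: Array[Double],      // gradient g_t
    sumSqGrad: Array[Double], // sum of squared gradients up to step t
    eta: Double,              // step size for this iteration
    lambda: Double,           // regularizationConstant
    delta: Double
): Array[Double] =
  Array.tabulate(x.length) { i =>
    val s = math.sqrt(sumSqGrad(i)) // s_{t,i}
    (s * x(i) - eta * grad(i)) / (eta * lambda + delta + s)
  }
```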
- def toString(): String
- Definition Classes
- AnyRef → Any
- def updateHistory(newX: T, newGrad: T, newValue: Double, f: StochasticDiffFunction[T], oldState: State): History
- Definition Classes
- L2Regularization → FirstOrderMinimizer
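A hedged sketch of the History bookkeeping implied by sumOfSquaredGradients, with arrays standing in for the abstract vector type T:

```scala
// Sketch of how History can accumulate squared gradients step by step.
// The real implementation works on T through the vector-space
// operations; arrays are used here for concreteness.
final case class HistorySketch(sumOfSquaredGradients: Array[Double])

def initialHistorySketch(dim: Int): HistorySketch =
  HistorySketch(Array.fill(dim)(0.0))

def updateHistorySketch(newGrad: Array[Double], old: HistorySketch): HistorySketch =
  HistorySketch(
    old.sumOfSquaredGradients.zip(newGrad).map { case (s, g) => s + g * g }
  )
```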
- implicit val vspace: NormedModule[T, Double]
- Attributes
- protected
- Definition Classes
- StochasticGradientDescent
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])