com.intel.analytics.bigdl.optim

SGD

object SGD extends Serializable

Related Docs: class SGD | package optim

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Type Members

  1. case class Default() extends LearningRateSchedule with Product with Serializable

    The default learning rate schedule. At each iteration, the learning rate is updated with the following formula:

    l_{n + 1} = l / (1 + n * learning_rate_decay)

    where l is the initial learning rate. A numeric sketch appears under Usage Examples below.

  2. case class EpochDecay(decayType: (Int) ⇒ Double) extends LearningRateSchedule with Product with Serializable

    An epoch decay learning rate schedule. The learning rate decays through a function of the number of epochs run:

    l_{n + 1} = l_{n} * 0.1 ^ decayType(epoch)

    decayType

    a function taking the number of epochs run as its argument

  3. case class EpochDecayWithWarmUp(warmUpIteration: Int, warmUpDelta: Double, decayType: (Int) ⇒ Double) extends LearningRateSchedule with Product with Serializable

    A learning rate schedule based on warm-up iterations: the learning rate grows during the warm-up phase, then decays per epoch. See the warm-up example under Usage Examples below.

    warmUpIteration

    number of warm-up iterations

    warmUpDelta

    warm-up delta value applied at each warm-up iteration

    decayType

    a function computing the decay from the number of epochs run

  4. case class EpochSchedule(regimes: Array[Regime]) extends LearningRateSchedule with Product with Serializable

    EpochSchedule is a learning rate schedule which configures the learning rate according to pre-defined Regimes. If the running epoch is within the interval [r.startEpoch, r.endEpoch] of a regime r, the learning rate takes the "learningRate" value in r.config. See the example under Usage Examples below.

    regimes

    an array of pre-defined Regimes

  5. case class EpochStep(stepSize: Int, gamma: Double) extends LearningRateSchedule with Product with Serializable

    EpochStep is a learning rate schedule which rescales the learning rate by gamma every stepSize epochs. A usage sketch appears under Usage Examples below.

    stepSize

    the number of epochs between learning rate updates

    gamma

    the rescale factor

  6. case class Exponential(decayStep: Int, decayRate: Double, stairCase: Boolean = false) extends LearningRateSchedule with Product with Serializable

    Exponential is a learning rate schedule which rescales the learning rate as lr_{n + 1} = lr * decayRate ^ (iter / decayStep). See the sketch under Usage Examples below.

    decayStep

    the interval for learning rate decay

    decayRate

    the decay rate

    stairCase

    if true, iter / decayStep is an integer division, so the decayed learning rate follows a staircase function

  7. trait LearningRateSchedule extends AnyRef

    Hyper-parameter schedule for SGD. Concrete schedules are passed to the SGD optim method; see the Usage Examples below.

  8. case class MultiStep(stepSizes: Array[Int], gamma: Double) extends LearningRateSchedule with Product with Serializable

    Similar to Step, but allows non-uniform steps defined by stepSizes.

    stepSizes

    the series of step sizes used for learning rate decay

    gamma

    coefficient of decay

  9. case class NaturalExp(decay_step: Int, gamma: Double) extends LearningRateSchedule with Product with Serializable

    NaturalExp is a learning rate schedule which rescales the learning rate by exp(-decay_rate * iter / decay_step), following TensorFlow's natural_exp_decay.

    decay_step

    how often to apply the decay

    gamma

    the decay rate, e.g. 0.96

  10. case class Plateau(monitor: String, factor: Float = 0.1f, patience: Int = 10, mode: String = "min", epsilon: Float = 1e-4f, cooldown: Int = 0, minLr: Float = 0) extends LearningRateSchedule with Product with Serializable

    Plateau is a learning rate schedule that reduces the learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. The schedule monitors a quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced. A usage sketch appears under Usage Examples below.

    monitor

    quantity to be monitored; can be Loss or score

    factor

    factor by which the learning rate will be reduced: new_lr = lr * factor

    patience

    number of epochs with no improvement after which the learning rate will be reduced

    mode

    one of {min, max}. In min mode, the learning rate is reduced when the monitored quantity has stopped decreasing; in max mode, when it has stopped increasing

    epsilon

    threshold for measuring a new optimum, to focus only on significant changes

    cooldown

    number of epochs to wait before resuming normal operation after the learning rate has been reduced

    minLr

    lower bound on the learning rate

  11. case class Poly(power: Double, maxIteration: Int) extends LearningRateSchedule with Product with Serializable

    A learning rate decay policy where the effective learning rate follows a polynomial decay, reaching zero at maxIteration. Calculation: base_lr * (1 - iter / maxIteration) ^ power. See the sketch under Usage Examples below.

    power

    coefficient of decay; refer to the calculation formula

    maxIteration

    the iteration at which the learning rate becomes zero

  12. case class Regime(startEpoch: Int, endEpoch: Int, config: Table) extends Product with Serializable

    A structure that specifies hyper parameters by start epoch and end epoch. Usually used with EpochSchedule.

    startEpoch

    start epoch

    endEpoch

    end epoch

    config

    a config table containing hyper parameters

  13. case class SequentialSchedule(iterationPerEpoch: Int) extends LearningRateSchedule with Product with Serializable

    Stack several learning rate schedules; see the warm-up example under Usage Examples below.

    iterationPerEpoch

    the number of iterations per epoch

  14. case class Step(stepSize: Int, gamma: Double) extends LearningRateSchedule with Product with Serializable

    A learning rate decay policy, where the effective learning rate is calculated as base_lr * gamma ^ floor(iter / stepSize).

    stepSize

    the interval for learning rate decay

    gamma

    coefficient of decay; refer to the calculation formula

  15. case class Warmup(delta: Double) extends LearningRateSchedule with Product with Serializable

    A gradual learning rate increase policy, where the effective learning rate increases by delta after each iteration. Calculation: base_lr + delta * iteration. See the warm-up example under Usage Examples below.

    delta

    the amount added to the learning rate after each iteration
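
Usage Examples

The sketches below are illustrative usage, not part of the generated documentation. This first one constructs an SGD optim method with an EpochStep schedule; Step and MultiStep are built the same way. It assumes BigDL's NumericFloat import and the learningRate/learningRateSchedule constructor parameters, and the step size and gamma values are hypothetical.

  import com.intel.analytics.bigdl.numeric.NumericFloat
  import com.intel.analytics.bigdl.optim.SGD

  // Rescale the learning rate by 0.1 every 30 epochs (hypothetical values).
  val optimMethod = new SGD[Float](
    learningRate = 0.1,
    learningRateSchedule = SGD.EpochStep(stepSize = 30, gamma = 0.1)
  )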
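
A minimal plain-Scala sketch of the Default schedule's formula; it only reproduces the arithmetic, not the scheduler itself, and the learning_rate_decay value is hypothetical.

  val l = 0.1                  // initial learning rate
  val decay = 0.01             // hypothetical learning_rate_decay
  // l_{n + 1} = l / (1 + n * learning_rate_decay)
  val lrAt = (n: Int) => l / (1 + n * decay)
  println(lrAt(0))             // 0.1
  println(lrAt(100))           // 0.05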
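
A plain-Scala sketch of the Exponential formula, showing how stairCase turns iter / decayStep from a real-valued ratio into an integer division; all numbers are hypothetical.

  val lr = 0.1
  val decayStep = 100
  val decayRate = 0.96
  def exponential(iter: Int, stairCase: Boolean): Double = {
    val p = if (stairCase) (iter / decayStep).toDouble  // integer division: 0, 1, 2, ...
            else iter.toDouble / decayStep              // smooth ratio: 0.0, 0.01, ...
    lr * math.pow(decayRate, p)
  }
  println(exponential(150, stairCase = true))   // lr * 0.96^1
  println(exponential(150, stairCase = false))  // lr * 0.96^1.5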
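
A plain-Scala sketch of the Poly formula with a hypothetical power of 0.5 and maxIteration of 10000; it checks that the learning rate reaches zero at maxIteration.

  val baseLr = 0.1
  val power = 0.5
  val maxIteration = 10000
  // base_lr * (1 - iter / maxIteration) ^ power
  def poly(iter: Int): Double =
    baseLr * math.pow(1.0 - iter.toDouble / maxIteration, power)
  println(poly(0))             // 0.1
  println(poly(maxIteration))  // 0.0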
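
A sketch of EpochSchedule built from Regimes. It assumes BigDL's Table constructor T from com.intel.analytics.bigdl.utils; the epoch ranges and learning rates are hypothetical.

  import com.intel.analytics.bigdl.numeric.NumericFloat
  import com.intel.analytics.bigdl.optim.SGD
  import com.intel.analytics.bigdl.utils.T

  // Epochs 1-3 train at 1e-2, epochs 4-7 at 5e-3, epochs 8-10 at 1e-3.
  val schedule = SGD.EpochSchedule(Array(
    SGD.Regime(1, 3, T("learningRate" -> 1e-2)),
    SGD.Regime(4, 7, T("learningRate" -> 5e-3)),
    SGD.Regime(8, 10, T("learningRate" -> 1e-3))
  ))
  val optimMethod = new SGD[Float](learningRateSchedule = schedule)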
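
A warm-up sketch stacking Warmup and Poly via SequentialSchedule. It assumes SequentialSchedule exposes an add(schedule, maxIteration) builder in which each schedule stays active for the given number of iterations; all numbers are hypothetical.

  import com.intel.analytics.bigdl.numeric.NumericFloat
  import com.intel.analytics.bigdl.optim.SGD

  val iterationPerEpoch = 100
  // Increase the learning rate by 1e-4 per iteration for 200 iterations,
  // then decay polynomially to zero over the remaining 9800 iterations.
  val schedule = SGD.SequentialSchedule(iterationPerEpoch)
    .add(SGD.Warmup(delta = 1e-4), 200)
    .add(SGD.Poly(power = 0.5, maxIteration = 9800), 9800)
  val optimMethod = new SGD[Float](learningRate = 0.01, learningRateSchedule = schedule)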
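
A Plateau sketch that halves the learning rate when a validation score stops improving for three epochs. The monitor string and the assumption that the surrounding training setup supplies validation metrics are hypothetical.

  import com.intel.analytics.bigdl.numeric.NumericFloat
  import com.intel.analytics.bigdl.optim.SGD

  // Halve the learning rate when "score" has not improved for 3 epochs,
  // waiting 1 epoch after each reduction and never dropping below 1e-5.
  val optimMethod = new SGD[Float](
    learningRate = 0.01,
    learningRateSchedule = SGD.Plateau(
      monitor = "score", factor = 0.5f, patience = 3,
      mode = "max", cooldown = 1, minLr = 1e-5f)
  )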

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  10. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  11. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. final def notify(): Unit

    Definition Classes
    AnyRef
  14. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  15. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  16. def toString(): String

    Definition Classes
    AnyRef → Any
  17. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  19. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
