org.platanios.tensorflow.api.ops.training.optimizers

AMSGrad

Related Docs: object AMSGrad | package optimizers

Permalink

class AMSGrad extends Optimizer

Optimizer that implements the AMSGrad optimization algorithm, presented in [On the Convergence of Adam and Beyond](https://openreview.net/pdf?id=ryQu7f-RZ).

Initialization:

m_0 = 0     // Initialize the 1st moment vector
v_0 = 0     // Initialize the 2nd moment vector
v_hat_0 = 0 // Initialize the 2nd moment max vector
t = 0       // Initialize the time step

The AMSGrad update for step t is as follows:

learningRate_t = initialLearningRate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t = beta1 * m_{t-1} + (1 - beta1) * gradient
v_t = beta2 * v_{t-1} + (1 - beta2) * gradient * gradient
v_hat_t = max(v_t, v_hat_{t-1})
variable -= learningRate_t * m_t / (sqrt(v_hat_t) + epsilon)
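
For concreteness, the following is a minimal plain-Scala sketch of a single AMSGrad step, implementing exactly the formulas above. It is illustrative only: the function and its Double-valued arguments are not part of the library, which operates on tensors.

// Illustrative scalar version of one AMSGrad step; not library code.
// `t` is the 1-based time step (t >= 1).
def amsGradStep(
    variable: Double, gradient: Double,
    m: Double, v: Double, vHat: Double, t: Int,
    initialLearningRate: Double = 0.001,
    beta1: Double = 0.9, beta2: Double = 0.999, epsilon: Double = 1e-8
): (Double, Double, Double, Double) = {
  val learningRate = initialLearningRate * math.sqrt(1 - math.pow(beta2, t)) / (1 - math.pow(beta1, t))
  val mT    = beta1 * m + (1 - beta1) * gradient            // 1st moment estimate
  val vT    = beta2 * v + (1 - beta2) * gradient * gradient // 2nd moment estimate
  val vHatT = math.max(vT, vHat)                            // running maximum of the 2nd moment
  val updatedVariable = variable - learningRate * mT / (math.sqrt(vHatT) + epsilon)
  (updatedVariable, mT, vT, vHatT)
}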

The default value of 1e-8 for epsilon may not be a good choice in general. For example, when training an Inception network on ImageNet, a current good choice is 1.0 or 0.1.

The sparse implementation of this algorithm (used when the gradient is an OutputIndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) does apply momentum to variable slices even if they were not used in the forward pass (meaning they have a gradient equal to zero). Momentum decay (beta1) is also applied to the entire momentum accumulator. This means that the sparse behavior is equivalent to the dense behavior (in contrast to some momentum implementations which ignore momentum unless a variable slice was actually used).

For more information on this algorithm, please refer to this [paper](https://openreview.net/pdf?id=ryQu7f-RZ).
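
A minimal usage sketch is given below. Only the AMSGrad constructor and the minimize method documented further down are taken from this page; the wildcard import and the already-built scalar loss of type Output are assumptions made for illustration.

import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api.ops.training.optimizers.AMSGrad

// `loss` is assumed to be a scalar `Output` produced elsewhere by some model definition.
def buildTrainOp(loss: Output): Op = {
  // The argument values below match the documented defaults; epsilon can be raised
  // (e.g., to 0.1f or 1.0f) in the situations described in the note above.
  val optimizer = new AMSGrad(learningRate = 0.001f, beta1 = 0.9f, beta2 = 0.999f, epsilon = 1e-8f)
  optimizer.minimize(loss, name = "TrainOp")
}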

Linear Supertypes

Optimizer, AnyRef, Any

Instance Constructors

  1. new AMSGrad(learningRate: Float = 0.001f, decay: Schedule = FixedSchedule, beta1: Float = 0.9f, beta2: Float = 0.999f, useNesterov: Boolean = false, epsilon: Float = 1e-8f, useLocking: Boolean = false, learningRateSummaryTag: String = null, name: String = "AMSGrad")

    Permalink

    learningRate

    Learning rate. Must be > 0. If used with decay, then this argument specifies the initial value of the learning rate.

    decay

    Learning rate decay method to use for each update.

    beta1

    Exponential decay rate for the first moment estimates.

    beta2

    Exponential decay rate for the second moment estimates.

    useNesterov

    If true, Nesterov momentum is used for the updates.

    epsilon

    Small constant used for numerical stability. This epsilon corresponds to "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), and not to the epsilon in Algorithm 1 of the paper.

    useLocking

    If true, the gradient descent updates will be protected by a lock. Otherwise, the behavior is undefined, but may exhibit less contention.

    learningRateSummaryTag

    Optional summary tag name to use for the learning rate value. If null, no summary is created for the learning rate. Otherwise, a scalar summary is created which can be monitored using TensorBoard.

    name

    Name for this optimizer.

    Attributes
    protected

Value Members

  1. final def !=(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  4. def applyDense(gradient: Output, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Permalink

    Applies the updates corresponding to the provided gradient, to the provided variable.

    gradient

    Gradient tensor.

    variable

    Variable.

    iteration

    Option containing current iteration in the optimization loop, if one has been provided.

    returns

    Created op that applies the provided gradient to the provided variable.

    Definition Classes
    AMSGrad → Optimizer
  5. def applyGradients(gradientsAndVariables: Seq[(OutputLike, variables.Variable)], iteration: Option[variables.Variable] = None, name: String = this.name): Op

    Permalink

    Creates an op that applies the provided gradients to the provided variables.

    gradientsAndVariables

    Sequence with gradient-variable pairs.

    iteration

    Optional Variable to increment by one after the variables have been updated.

    name

    Name for the created op.

    returns

    Created op.

    Definition Classes
    Optimizer
  6. def applySparse(gradient: OutputIndexedSlices, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Permalink

    Applies the updates corresponding to the provided gradient, to the provided variable.

    The OutputIndexedSlices object specified by gradient in this function is by default pre-processed in applySparseDuplicateIndices to remove duplicate indices (refer to that function's documentation for details). Optimizers which can tolerate or have correct special cases for duplicate sparse indices may override applySparseDuplicateIndices instead of this function, avoiding that overhead.

    gradient

    Gradient tensor.

    variable

    Variable.

    iteration

    Option containing current iteration in the optimization loop, if one has been provided.

    returns

    Created op that applies the provided gradient to the provided variable.

    Definition Classes
    AMSGrad → Optimizer
  7. def applySparseDuplicateIndices(gradient: OutputIndexedSlices, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Permalink

    Applies the updates corresponding to the provided gradient (with potentially duplicate indices), to the provided variable.

    Optimizers which override this method must deal with OutputIndexedSlices objects such as the following: OutputIndexedSlices(indices=[0, 0], values=[1, 1], denseShape=[1]), which contain duplicate indices. The correct interpretation in that case should be: OutputIndexedSlices(values=[2], indices=[0], denseShape=[1]).

    Many optimizers deal incorrectly with repeated indices when updating based on sparse gradients (e.g., summing squares rather than squaring the sum, or applying momentum terms multiple times). Adding first is always the correct behavior, so this is enforced here by reconstructing the OutputIndexedSlices to have only unique indices, and then calling applySparse (a plain-Scala sketch of this aggregation is provided after the member list below).

    Optimizers which deal correctly with repeated indices may instead override this method to avoid the induced overhead.

    gradient

    Gradient tensor.

    variable

    Variable.

    iteration

    Option containing current iteration in the optimization loop, if one has been provided.

    returns

    Created op that applies the provided gradient to the provided variable.

    Definition Classes
    Optimizer
  8. final def asInstanceOf[T0]: T0

    Permalink
    Definition Classes
    Any
  9. val beta1: Float

    Permalink

    Exponential decay rate for the first moment estimates.

  10. var beta1Tensor: Output

    Permalink
    Attributes
    protected
  11. val beta2: Float

    Permalink

    Exponential decay rate for the second moment estimates.

  12. var beta2Tensor: Output

    Permalink
    Attributes
    protected
  13. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. def computeGradients(loss: Output, lossGradients: Seq[OutputLike] = null, variables: Set[variables.Variable] = null, gradientsGatingMethod: GatingMethod = Gradients.OpGating, gradientsAggregationMethod: AggregationMethod = Gradients.AddAggregationMethod, colocateGradientsWithOps: Boolean = false): Seq[(OutputLike, variables.Variable)]

    Permalink

    Computes the gradients of loss with respect to the variables in variables, if provided, otherwise with respect to all the trainable variables in the graph where loss is defined.

    loss

    Loss value whose gradients will be computed.

    lossGradients

    Optional gradients to back-propagate for loss.

    variables

    Optional list of variables for which to compute the gradients. Defaults to the set of trainable variables in the graph where loss is defined.

    gradientsGatingMethod

    Gating method for the gradients computation.

    gradientsAggregationMethod

    Aggregation method used to combine gradient terms.

    colocateGradientsWithOps

    Boolean value indicating whether to colocate the gradient ops with the original ops.

    returns

    Sequence of gradient-variable pairs.

    Definition Classes
    Optimizer
  15. def createSlots(variables: Seq[variables.Variable]): Unit

    Permalink

    Creates all slots needed by this optimizer.

    Definition Classes
    AMSGrad → Optimizer
  16. val decay: Schedule

    Permalink

    Learning rate decay method to use for each update.

  17. val epsilon: Float

    Permalink

    Small constant used for numerical stability. This epsilon corresponds to "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), and not to the epsilon in Algorithm 1 of the paper.

  18. var epsilonTensor: Output

    Permalink
    Attributes
    protected
  19. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  21. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  22. def finish(updateOps: Set[Op], nameScope: String): Op

    Permalink

    Creates an op that finishes the gradients application. This function is called from within an op creation context that uses as its name scope the name that users have chosen for the application of gradients.

    updateOps

    Set of ops needed to apply the gradients and update the variable values.

    nameScope

    Name scope to use for all the ops created by this function.

    returns

    Created op output.

    Definition Classes
    AMSGrad → Optimizer
  23. def getBeta1(variable: variables.Variable): Output

    Permalink
    Attributes
    protected
  24. def getBeta2(variable: variables.Variable): Output

    Permalink
    Attributes
    protected
  25. def getBetaPowerAccumulators: (variables.Variable, variables.Variable)

    Permalink
    Attributes
    protected
  26. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  27. def getEpsilon(variable: variables.Variable): Output

    Permalink
    Attributes
    protected
  28. def getLearningRate(variable: variables.Variable, iteration: Option[variables.Variable]): Output

    Permalink
    Attributes
    protected
  29. final def getNonSlotVariable(name: String, graph: core.Graph = null): variables.Variable

    Permalink

    Gets a non-slot variable that has been added to this optimizer (or throws an error if no such non-slot variable could be found in this optimizer).

    name

    Variable name.

    graph

    Graph in which the variable is defined.

    returns

    Obtained non-slot variable.

    Attributes
    protected
    Definition Classes
    Optimizer
  30. final def getNonSlotVariables: Iterable[variables.Variable]

    Permalink

    Gets all the non-slot variables that have been added to this optimizer.

    Attributes
    protected
    Definition Classes
    Optimizer
  31. final def getOrCreateNonSlotVariable(name: String, initialValue: tensors.Tensor[_ <: types.DataType], colocationOps: Set[Op] = Set.empty, ignoreExisting: Boolean = false): variables.Variable

    Permalink

    Gets or creates (and adds to this optimizer) a non-slot variable.

    name

    Variable name.

    initialValue

    Variable initial value.

    colocationOps

    Set of colocation ops for the non-slot variable.

    returns

    Created non-slot variable.

    Attributes
    protected
    Definition Classes
    Optimizer
  32. final def getSlot(name: String, variable: variables.Variable): variables.Variable

    Permalink

    Gets an existing slot.

    name

    Slot name.

    variable

    Slot primary variable.

    returns

    Requested slot variable, or null if it cannot be found.

    Attributes
    protected
    Definition Classes
    Optimizer
  33. final def getSlot(name: String, variable: variables.Variable, initializer: Initializer, shape: core.Shape, dataType: types.DataType, variableScope: String): variables.Variable

    Permalink

    Gets an existing slot or creates a new one if none exists, for the provided arguments.

    name

    Slot name.

    variable

    Slot primary variable.

    initializer

    Slot variable initializer.

    shape

    Slot variable shape.

    dataType

    Slot variable data type.

    variableScope

    Name to use when scoping the variable that needs to be created for the slot.

    returns

    Requested slot variable.

    Attributes
    protected
    Definition Classes
    Optimizer
  34. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  35. val ignoreDuplicateSparseIndices: Boolean

    Permalink

    Boolean value indicating whether to ignore duplicate indices during sparse updates.

    Definition Classes
    AMSGrad → Optimizer
  36. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  37. val learningRate: Float

    Permalink

    Learning rate. Must be > 0. If used with decay, then this argument specifies the initial value of the learning rate.

  38. val learningRateSummaryTag: String

    Permalink

    Optional summary tag name to use for the learning rate value. If null, no summary is created for the learning rate. Otherwise, a scalar summary is created which can be monitored using TensorBoard.

  39. var learningRateTensor: Output

    Permalink
    Attributes
    protected
  40. final def minimize(loss: Output, lossGradients: Seq[OutputLike] = null, variables: Set[variables.Variable] = null, gradientsGatingMethod: GatingMethod = Gradients.OpGating, gradientsAggregationMethod: AggregationMethod = Gradients.AddAggregationMethod, colocateGradientsWithOps: Boolean = false, iteration: Option[variables.Variable] = None, name: String = "Minimize"): Op

    Permalink

    Creates an op that makes a step towards minimizing loss by updating the values of the variables in variables.

    This method simply combines calls to computeGradients and applyGradients. If you want to process the gradients before applying them, call computeGradients and applyGradients explicitly instead of using this method (see the sketch after this member list).

    loss

    Loss value whose gradients will be computed.

    lossGradients

    Optional gradients to back-propagate for loss.

    variables

    Optional list of variables for which to compute the gradients. Defaults to the set of trainable variables in the graph where loss is defined.

    gradientsGatingMethod

    Gating method for the gradients computation.

    gradientsAggregationMethod

    Aggregation method used to combine gradient terms.

    colocateGradientsWithOps

    Boolean value indicating whether to colocate the gradient ops with the original ops.

    iteration

    Optional Variable to increment by one after the variables have been updated.

    name

    Name for the created op.

    returns

    Created op.

    Definition Classes
    Optimizer
  41. val name: String

    Permalink

    Name for this optimizer.

    Definition Classes
    AMSGrad → Optimizer
  42. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  43. final val nonSlotVariables: Map[(String, Option[core.Graph]), variables.Variable]

    Permalink

    Contains variables used by some optimizers that require no slots to be stored.

    Attributes
    protected
    Definition Classes
    Optimizer
  44. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  45. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  46. def prepare(iteration: Option[variables.Variable]): Unit

    Permalink

    Creates all necessary tensors before applying the gradients. This function is called from within an op creation context that uses as its name scope the name that users have chosen for the application of gradients.

    Definition Classes
    AMSGrad → Optimizer
  47. final def slotNames: Set[String]

    Permalink

    Returns the names of all slots used by this optimizer.

    Attributes
    protected
    Definition Classes
    Optimizer
  48. final val slots: Map[String, Map[variables.Variable, variables.Variable]]

    Permalink

    Some Optimizer subclasses use additional variables. For example, MomentumOptimizer and AdaGradOptimizer use variables to accumulate updates. This map is where these variables are stored.

    Attributes
    protected
    Definition Classes
    Optimizer
  49. val supportedDataTypes: Set[types.DataType]

    Permalink

    Supported data types for the loss function, the variables, and the gradients. Subclasses should override this field to allow other float types.

    Definition Classes
    Optimizer
  50. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  51. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  52. val useLocking: Boolean

    Permalink

    If true, the gradient descent updates will be protected by a lock. Otherwise, the behavior is undefined, but may exhibit less contention.

    Definition Classes
    AMSGrad → Optimizer
  53. val useNesterov: Boolean

    Permalink

    If true, Nesterov momentum is used for the updates.

  54. final def variables: Seq[variables.Variable]

    Permalink

    Returns a sequence of variables which encode the current state of this optimizer. The returned variables include both slot variables and non-slot global variables created by this optimizer, in the current graph.

    Definition Classes
    Optimizer
  55. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  56. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  57. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def zerosSlot(name: String, variable: variables.Variable, variableScope: String): variables.Variable

    Permalink

    Gets an existing slot or creates a new one using an initial value of zeros, if none exists.

    name

    Slot name.

    variable

    Slot primary variable.

    variableScope

    Name to use when scoping the variable that needs to be created for the slot.

    returns

    Requested slot variable.

    Attributes
    protected
    Definition Classes
    Optimizer
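
As noted in the applySparseDuplicateIndices entry above, duplicate sparse indices are handled by summing the values that share an index before the update is applied. The following plain-Scala sketch shows that aggregation on ordinary collections; it is illustrative only, since the library performs the equivalent operation on tensors.

// Sum sparse-gradient values that share an index: indices [0, 0] with values
// [1.0, 1.0] must be treated as the single entry (index 0, value 2.0).
def sumDuplicateIndices(indices: Seq[Int], values: Seq[Double]): (Seq[Int], Seq[Double]) = {
  val summed = indices.zip(values)
    .groupBy { case (index, _) => index }
    .map { case (index, pairs) => index -> pairs.map(_._2).sum }
    .toSeq
    .sortBy { case (index, _) => index }
  (summed.map(_._1), summed.map(_._2))
}

// sumDuplicateIndices(Seq(0, 0), Seq(1.0, 1.0)) returns (Seq(0), Seq(2.0)).

As mentioned in the minimize entry, gradients can be inspected or transformed between computeGradients and applyGradients. Below is a minimal sketch of that two-step pattern; only the two method signatures come from this page, while the import paths, the loss, the iteration counter, and the pass-through transformation are illustrative assumptions.

import org.platanios.tensorflow.api._
import org.platanios.tensorflow.api.ops.training.optimizers.AMSGrad
import org.platanios.tensorflow.api.ops.variables.Variable

// `loss` and the optional iteration counter `step` are assumed to exist already.
def buildTrainOpWithGradientProcessing(loss: Output, step: Option[Variable]): Op = {
  val optimizer = new AMSGrad()
  // 1. Compute gradient-variable pairs for all trainable variables of `loss`.
  val gradientsAndVariables = optimizer.computeGradients(loss)
  // 2. Inspect or transform the gradients here (this placeholder passes them through unchanged).
  val processed = gradientsAndVariables.map { case (gradient, variable) => (gradient, variable) }
  // 3. Apply the (possibly transformed) gradients; `step`, if provided, is incremented afterwards.
  optimizer.applyGradients(processed, iteration = step)
}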

Inherited from Optimizer

Inherited from AnyRef

Inherited from Any
