Trait Optimizer

org.platanios.tensorflow.api.ops.training.optimizers

trait Optimizer extends AnyRef

Linear supertypes: AnyRef, Any

Abstract Value Members

  1. abstract def applyDense(gradient: Output, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Applies the updates corresponding to the provided gradient to the provided variable.

    gradient: Gradient tensor.
    variable: Variable to update.
    iteration: Option containing the current iteration in the optimization loop, if one has been provided.
    returns: Created op that applies the provided gradient to the provided variable.

  2. abstract def applySparse(gradient: OutputIndexedSlices, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Applies the updates corresponding to the provided gradient to the provided variable.

    The OutputIndexedSlices object specified by gradient is, by default, pre-processed by applySparseDuplicateIndices to remove duplicate indices (refer to that method's documentation for details). Optimizers that can tolerate duplicate sparse indices, or that have correct special cases for them, may override applySparseDuplicateIndices instead of this method, avoiding that overhead.

    gradient: Gradient tensor.
    variable: Variable to update.
    iteration: Option containing the current iteration in the optimization loop, if one has been provided.
    returns: Created op that applies the provided gradient to the provided variable.

  3. abstract val ignoreDuplicateSparseIndices: Boolean

    Boolean value indicating whether to ignore duplicate indices during sparse updates.

  4. abstract val name: String

    Name of this optimizer. This name is used for the accumulators created for this optimizer.

  5. abstract val useLocking: Boolean

    Boolean value indicating whether to use locks to prevent concurrent updates to variables.
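
    To make the contract concrete, here is a minimal sketch of a gradient-descent-style subclass. It is illustrative only: MySimpleSGD is not part of the library, the scalar arithmetic relies on the API's usual implicit conversions, and real optimizers use dedicated training ops rather than plain assignments.

      import org.platanios.tensorflow.api.ops.{Op, Output, OutputIndexedSlices}
      import org.platanios.tensorflow.api.ops.training.optimizers.Optimizer
      import org.platanios.tensorflow.api.ops.variables.Variable

      // Hypothetical subclass, shown only to illustrate the abstract members.
      case class MySimpleSGD(learningRate: Float) extends Optimizer {
        override val name: String = "MySimpleSGD"
        override val useLocking: Boolean = false
        override val ignoreDuplicateSparseIndices: Boolean = false

        override def applyDense(gradient: Output, variable: Variable, iteration: Option[Variable]): Op = {
          // variable <- variable - learningRate * gradient (assumes the API's
          // implicit conversion from Float to Output for the multiplication).
          variable.assignSub(gradient * learningRate).op
        }

        override def applySparse(gradient: OutputIndexedSlices, variable: Variable, iteration: Option[Variable]): Op = {
          // Naive fallback: densify the sparse gradient and reuse the dense
          // update. Real optimizers would use scatter-style ops here instead.
          applyDense(gradient.toOutput, variable, iteration)
        }
      }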

Concrete Value Members

  1. def applyGradients(gradientsAndVariables: Seq[(OutputLike, variables.Variable)], iteration: Option[variables.Variable] = None, name: String = this.name): Op

    Creates an op that applies the provided gradients to the provided variables.

    gradientsAndVariables: Sequence of gradient-variable pairs.
    iteration: Optional variable to increment by one after the variables have been updated.
    name: Name for the created op.
    returns: Created op.
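
    A hedged usage sketch: optimizer, loss, and step are assumed to be defined elsewhere; only the two method calls come from this trait.

      // Compute the gradients explicitly, then apply them, incrementing
      // `step` by one once the variables have been updated.
      val gradientsAndVariables = optimizer.computeGradients(loss)
      val trainOp = optimizer.applyGradients(gradientsAndVariables, iteration = Some(step))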

  2. def applySparseDuplicateIndices(gradient: OutputIndexedSlices, variable: variables.Variable, iteration: Option[variables.Variable]): Op

    Applies the updates corresponding to the provided gradient (which may contain duplicate indices) to the provided variable.

    Optimizers that override this method must deal with OutputIndexedSlices objects such as OutputIndexedSlices(indices = [0, 0], values = [1, 1], denseShape = [1]), which contain duplicate indices. The correct interpretation in that case is OutputIndexedSlices(indices = [0], values = [2], denseShape = [1]).

    Many optimizers deal incorrectly with repeated indices when updating based on sparse gradients (e.g., summing squares rather than squaring the sum, or applying momentum terms multiple times). Adding first is always the correct behavior, so this method enforces it by reconstructing the OutputIndexedSlices to have only unique indices and then calling applySparse.

    Optimizers that deal correctly with repeated indices may instead override this method to avoid the induced overhead.

    gradient: Gradient tensor.
    variable: Variable to update.
    iteration: Option containing the current iteration in the optimization loop, if one has been provided.
    returns: Created op that applies the provided gradient to the provided variable.
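
    The summing semantics can be illustrated with plain Scala collections; this is a standalone sketch of the arithmetic, not the actual op-based implementation.

      // Sum the values of duplicate indices, mirroring the example above:
      // indices = [0, 0], values = [1, 1] becomes indices = [0], values = [2].
      def sumDuplicateIndices(indices: Seq[Int], values: Seq[Int]): (Seq[Int], Seq[Int]) = {
        val summed = indices.zip(values)
          .groupBy(_._1)  // group entries that share an index
          .map { case (index, entries) => index -> entries.map(_._2).sum }
          .toSeq
          .sortBy(_._1)
        (summed.map(_._1), summed.map(_._2))
      }

      assert(sumDuplicateIndices(Seq(0, 0), Seq(1, 1)) == (Seq(0), Seq(2)))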

  3. def computeGradients(loss: Output, lossGradients: Seq[OutputLike] = null, variables: Set[variables.Variable] = null, gradientsGatingMethod: GatingMethod = Gradients.OpGating, gradientsAggregationMethod: AggregationMethod = Gradients.AddAggregationMethod, colocateGradientsWithOps: Boolean = false): Seq[(OutputLike, variables.Variable)]

    Computes the gradients of loss with respect to the variables in variables, if provided, or otherwise with respect to all the trainable variables in the graph where loss is defined.

    loss: Loss value whose gradients will be computed.
    lossGradients: Optional gradients to back-propagate for loss.
    variables: Optional set of variables for which to compute the gradients. Defaults to the set of trainable variables in the graph where loss is defined.
    gradientsGatingMethod: Gating method for the gradients computation.
    gradientsAggregationMethod: Aggregation method used to combine gradient terms.
    colocateGradientsWithOps: Boolean value indicating whether to colocate the gradient ops with the original ops.
    returns: Sequence of gradient-variable pairs.
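
    A sketch of processing the gradients between computeGradients and applyGradients. It assumes, for simplicity, that the gradients of interest are dense Outputs (sparse OutputIndexedSlices gradients are passed through untouched) and that the API's implicit conversions allow multiplying an Output by a Float.

      val processed = optimizer.computeGradients(loss).map {
        case (gradient: Output, variable) => (gradient * 0.5f, variable) // scale dense gradients
        case other => other                                              // pass anything else through
      }
      val trainOp = optimizer.applyGradients(processed)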

  4. def createSlots(variables: Seq[variables.Variable]): Unit

    Creates all the slots needed by this optimizer.
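
    For example, a momentum-style optimizer might create one zero-initialized accumulator per trained variable; this hypothetical override uses the zerosSlot helper documented further below.

      override def createSlots(variables: Seq[Variable]): Unit = {
        // One "Momentum" accumulator slot per variable, initialized to zeros.
        // `name` is this optimizer's name, which scopes the created variables.
        variables.foreach(v => zerosSlot("Momentum", v, name))
      }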

  5. def finish(updateOps: Set[Op], nameScope: String): Op

    Creates an op that finishes the application of the gradients. This function is called from within an op creation context that uses as its name scope the name that users have chosen for the application of gradients.

    updateOps: Set of ops needed to apply the gradients and update the variable values.
    nameScope: Name scope to use for all the ops created by this function.
    returns: Created op.
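
    A plausible override simply groups all the update ops into a single op. The grouping helper used here (ControlFlow.group) is an assumption about the surrounding API, not something this trait prescribes.

      override def finish(updateOps: Set[Op], nameScope: String): Op = {
        // A single no-op that completes only after every update op has run.
        ControlFlow.group(updateOps, nameScope)
      }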

  6. final def getNonSlotVariable(name: String, graph: core.Graph = null): variables.Variable

    Gets a non-slot variable that has been added to this optimizer (or throws an error if no such non-slot variable could be found in this optimizer).

    name: Variable name.
    graph: Graph in which the variable is defined.
    returns: Obtained non-slot variable.

    Attributes: protected

  7. final def getNonSlotVariables: Iterable[variables.Variable]

    Gets all the non-slot variables that have been added to this optimizer.

    Attributes: protected

  8. final def getOrCreateNonSlotVariable(name: String, initialValue: tensors.Tensor[_ <: types.DataType], colocationOps: Set[Op] = Set.empty, ignoreExisting: Boolean = false): variables.Variable

    Gets or creates (and adds to this optimizer) a non-slot variable.

    name: Variable name.
    initialValue: Variable initial value.
    colocationOps: Set of colocation ops for the non-slot variable.
    returns: Created non-slot variable.

    Attributes: protected
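
    As an illustration, an Adam-style subclass might track a global decay accumulator as a non-slot variable. The names and values here are hypothetical, and trainedVariables is assumed to be the sequence of variables being optimized.

      // Create (or fetch) a scalar non-slot variable holding the running
      // power of beta1, colocated with the first trained variable.
      val beta1Power = getOrCreateNonSlotVariable(
        name = "Beta1Power",
        initialValue = Tensor(0.9f),
        colocationOps = Set(trainedVariables.head.op))
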
  9. final def getSlot(name: String, variable: variables.Variable): variables.Variable

    Gets an existing slot.

    name: Slot name.
    variable: Slot primary variable.
    returns: Requested slot variable, or null if it cannot be found.

    Attributes: protected
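
    Inside applyDense or applySparse, a subclass would typically retrieve a previously created accumulator like this (a sketch; the "Momentum" slot name matches the createSlots example above).

      // Look up the accumulator slot associated with `variable`; the result
      // is null if no such slot was ever created.
      val accumulator = getSlot("Momentum", variable)
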
  10. final def getSlot(name: String, variable: variables.Variable, initializer: Initializer, shape: core.Shape, dataType: types.DataType, variableScope: String): variables.Variable

    Gets an existing slot for the provided arguments, or creates a new one if none exists.

    name: Slot name.
    variable: Slot primary variable.
    initializer: Slot variable initializer.
    shape: Slot variable shape.
    dataType: Slot variable data type.
    variableScope: Name to use when scoping the variable that needs to be created for the slot.
    returns: Requested slot variable.

    Attributes: protected

  11. final def minimize(loss: Output, lossGradients: Seq[OutputLike] = null, variables: Set[variables.Variable] = null, gradientsGatingMethod: GatingMethod = Gradients.OpGating, gradientsAggregationMethod: AggregationMethod = Gradients.AddAggregationMethod, colocateGradientsWithOps: Boolean = false, iteration: Option[variables.Variable] = None, name: String = "Minimize"): Op

    Creates an op that makes a step towards minimizing loss by updating the values of the variables in variables.

    This method simply combines calls to computeGradients and applyGradients. If you want to process the gradients before applying them, call computeGradients and applyGradients explicitly instead of using this method.

    loss: Loss value whose gradients will be computed.
    lossGradients: Optional gradients to back-propagate for loss.
    variables: Optional set of variables for which to compute the gradients. Defaults to the set of trainable variables in the graph where loss is defined.
    gradientsGatingMethod: Gating method for the gradients computation.
    gradientsAggregationMethod: Aggregation method used to combine gradient terms.
    colocateGradientsWithOps: Boolean value indicating whether to colocate the gradient ops with the original ops.
    iteration: Optional variable to increment by one after the variables have been updated.
    name: Name for the created op.
    returns: Created op.
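
    An end-to-end sketch: GradientDescent stands in for any concrete Optimizer subclass (its constructor here is an assumption), and loss and step are assumed to be defined elsewhere.

      val optimizer = GradientDescent(0.01)
      // A single op that computes the gradients of `loss`, applies them to
      // the trainable variables, and then increments `step` by one.
      val trainOp = optimizer.minimize(loss, iteration = Some(step), name = "TrainOp")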

  12. final val nonSlotVariables: Map[(String, Option[core.Graph]), variables.Variable]

    Contains variables used by some optimizers that require no slots to be stored.

    Attributes: protected

  13. def prepare(iteration: Option[variables.Variable]): Unit

    Creates all necessary tensors before applying the gradients. This function is called from within an op creation context that uses as its name scope the name that users have chosen for the application of gradients.

  14. final def slotNames: Set[String]

    Returns the names of all the slots used by this optimizer.

    Attributes: protected

  15. final val slots: Map[String, Map[variables.Variable, variables.Variable]]

    Some Optimizer subclasses use additional variables. For example, MomentumOptimizer and AdaGradOptimizer use variables to accumulate updates. This map is where these variables are stored.

    Attributes: protected

  16. val supportedDataTypes: Set[types.DataType]

    Supported data types for the loss function, the variables, and the gradients. Subclasses should override this field to allow other float types.
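
    For example, a subclass could declare the floating-point types it supports as follows; the exact contents of the set are illustrative, not the trait's actual default.

      override val supportedDataTypes: Set[DataType] = Set(FLOAT16, FLOAT32, FLOAT64)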

  17. final def variables: Seq[variables.Variable]

    Returns a sequence of variables which encode the current state of this optimizer. The returned variables include both slot variables and non-slot global variables created by this optimizer, in the current graph.

  18. final def zerosSlot(name: String, variable: variables.Variable, variableScope: String): variables.Variable

    Gets an existing slot, or creates a new one initialized to zeros if none exists.

    name: Slot name.
    variable: Slot primary variable.
    variableScope: Name to use when scoping the variable that needs to be created for the slot.
    returns: Requested slot variable.

    Attributes: protected
