Package com.thoughtworks.deeplearning

package plugins

Author:

杨博 (Yang Bo)

Source
package.scala
Linear Supertypes
AnyRef, Any

Type Members

  1. trait Builtins extends ImplicitsSingleton with Layers with Weights with Logging with Names with Operators with FloatTraining with FloatLiterals with FloatWeights with FloatLayers with CumulativeFloatLayers with DoubleTraining with DoubleLiterals with DoubleWeights with DoubleLayers with CumulativeDoubleLayers with INDArrayTraining with INDArrayLiterals with INDArrayWeights with INDArrayLayers with CumulativeINDArrayLayers with HLists with Products

    A plugin that enables all other DeepLearning.scala built-in plugins.

    Author:

    杨博 (Yang Bo)

    Example:
    1. When creating a Builtins from com.thoughtworks.feature.Factory,

      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[plugins.Builtins].newInstance()

      and import anything in implicits,

      import hyperparameters.implicits._

      then all DeepLearning.scala built-in features should be enabled.


      Creating weights:

      import org.nd4j.linalg.factory.Nd4j
      import org.nd4j.linalg.api.ndarray.INDArray
      val numberOfInputFeatures = 8
      val numberOfOutputFeatures = 1
      val initialValueOfWeight: INDArray = Nd4j.rand(numberOfInputFeatures, numberOfOutputFeatures)
      val weight: hyperparameters.INDArrayWeight = hyperparameters.INDArrayWeight(initialValueOfWeight)

      Creating neural network layers,

      def fullyConnectedLayer(input: INDArray): hyperparameters.INDArrayLayer = {
        input dot weight
      }

      or loss functions:

      def hingeLoss(scores: hyperparameters.INDArrayLayer, label: INDArray): hyperparameters.DoubleLayer = {
        hyperparameters.max(0.0, 1.0 - label * scores).sum
      }

      Training:

      import scalaz.std.stream._
      import com.thoughtworks.future._
      import com.thoughtworks.each.Monadic._
      val batchSize = 4
      val numberOfIterations = 10
      val input = Nd4j.rand(batchSize, numberOfInputFeatures)
      val label = Nd4j.rand(batchSize, numberOfOutputFeatures)
      @monadic[Future]
      def train: Future[Stream[Double]] = {
        for (iteration <- (0 until numberOfIterations).toStream) yield {
          hingeLoss(fullyConnectedLayer(input), label).train.each
        }
      }

      When the training is done, the loss of the last iteration should be no more than the loss of the first iteration:

      train.map { lossesByIteration =>
        lossesByIteration.last should be <= lossesByIteration.head
      }
  2. trait CumulativeDoubleLayers extends DoubleLayers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Double.

    Author:

    杨博 (Yang Bo)

    Examples:
    1. Given a DoubleWeight,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.DoubleWeight(10)

      then the training result should be applied to it

      weight1.train.map { result =>
        result should be(10.0)
        weight1.data should be < 10.0
      }
    2. Given two DoubleWeights,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.DoubleWeight(10)
      val weight2 = hyperparameters.DoubleWeight(300)

      when adding them together,

      val weight1PlusWeight2 = weight1 + weight2

      then the training result should be applied to both weights

      weight1PlusWeight2.train.map { result =>
        result should be(310.0)
        weight2.data should be < 300.0
        weight1.data should be < 10.0
      }
    Note

    Unlike DoubleLayers, DoubleLayer in this CumulativeDoubleLayers will share Tapes created in the forward pass for all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.

  3. trait CumulativeFloatLayers extends FloatLayers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Float.

    Author:

    杨博 (Yang Bo)

    Examples:
    1. Given a FloatWeight,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.FloatWeight(10)

      then the training result should be applied to it

      weight1.train.map { result =>
        result should be(10.0f)
        weight1.data should be < 10.0f
      }
    2. Given two FloatWeights,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.FloatWeight(10)
      val weight2 = hyperparameters.FloatWeight(300)

      when adding them together,

      val weight1PlusWeight2 = weight1 + weight2

      then the training result should be applied to both weights

      weight1PlusWeight2.train.map { result =>
        result should be(310.0f)
        weight2.data should be < 300.0f
        weight1.data should be < 10.0f
      }
    Note

    Unlike FloatLayers, FloatLayer in this CumulativeFloatLayers will share Tapes created in the forward pass for all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.

  4. trait CumulativeINDArrayLayers extends INDArrayLayers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are org.nd4j.linalg.api.ndarray.INDArray.

    Author:

    杨博 (Yang Bo)

    Note

    Unlike INDArrayLayers, INDArrayLayer in this CumulativeINDArrayLayers will share Tapes created in the forward pass for all dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
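
    Example:
    1. A minimal sketch (not taken from the library's own Scaladoc) of how the shared forward pass could be exercised, assuming the same Factory combination style as the Double and Float examples above; hyperparameters, weight, input, hidden and diamond are illustrative names,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      import org.nd4j.linalg.factory.Nd4j
      import org.nd4j.linalg.api.ndarray.INDArray
      val hyperparameters = Factory[INDArrayTraining with ImplicitsSingleton with Operators with CumulativeINDArrayLayers with INDArrayWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.INDArrayWeight(Nd4j.rand(4, 4))
      val input: INDArray = Nd4j.rand(4, 4)
      // `hidden` is used twice below, forming a diamond dependency; with
      // CumulativeINDArrayLayers its Tape is shared instead of being re-evaluated.
      val hidden: hyperparameters.INDArrayLayer = input dot weight
      val diamond = hidden + hidden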

  5. trait DoubleLayers extends Layers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Double.

    Author:

    杨博 (Yang Bo)

    Note

    By default, the computation in a DoubleLayer will be re-evaluated again and again if the DoubleLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeDoubleLayers instead of this DoubleLayers in such a neural network.
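
    Example:
    1. A minimal sketch (not taken from the library's own Scaladoc) that mirrors the CumulativeDoubleLayers examples above, but with the plain DoubleLayers plugin; the names are illustrative,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with DoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(10)
      // The same weight is used twice, so with plain DoubleLayers it is forwarded
      // twice on every training step.
      val doubled = weight + weight
      doubled.train.map { loss =>
        println(loss)
      }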

  6. trait DoubleLiterals extends AnyRef

    A plugin that enables scala.Double in neural networks.
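
    Example:
    1. A minimal sketch, assuming the literal conversions mirror the 1.0 - label * scores pattern shown in the Builtins example above; the names are illustrative,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[Builtins].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(2)
      // 1.0 is a plain scala.Double literal participating in a differentiable expression.
      val layer = 1.0 - weight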

  7. trait DoubleTraining extends Training

    A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Double.

    Author:

    杨博 (Yang Bo)
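
    Example:
    1. A minimal sketch based on the CumulativeDoubleLayers examples above: once DoubleTraining is mixed in, an expression whose loss is a scala.Double gets a train method,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(10)
      // Each call to `train` performs a forward/backward pass and updates the weight.
      weight.train.map { loss =>
        assert(weight.data < 10.0)
      }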

  8. trait DoubleWeights extends Weights

    A plugin to create scala.Double weights.

    Author:

    杨博 (Yang Bo)

    Note

    A custom optimization algorithm for updating a DoubleWeight can be implemented by creating a plugin that provides an overridden DoubleOptimizer with an overridden DoubleOptimizer.delta.
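
    Example:
    1. A rough sketch of what such a plugin might look like. The member names below (DoubleOptimizerApi, the Optimizer refinement and super.delta) are assumptions based on the plugin pattern used elsewhere in DeepLearning.scala, not verified against this API,

      import com.thoughtworks.deeplearning.plugins.DoubleWeights
      // Sketch only: DoubleOptimizerApi, Optimizer and super.delta are assumed names.
      trait FixedLearningRate extends DoubleWeights {
        def learningRate: Double
        trait DoubleOptimizerApi extends super.DoubleOptimizerApi { this: DoubleOptimizer =>
          // Scale the raw gradient by a fixed learning rate before the weight is updated.
          override def delta: Double = super.delta * learningRate
        }
        override type DoubleOptimizer <: DoubleOptimizerApi with Optimizer
      }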

  9. trait FloatLayers extends Layers

    A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Float.

    Author:

    杨博 (Yang Bo)

    Note

    By default, the computation in a FloatLayer will be re-evaluated again and again if the FloatLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeFloatLayers instead of this FloatLayers in such a neural network.
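
    Example:
    1. A minimal sketch (not taken from the library's own Scaladoc) in the style of the CumulativeFloatLayers examples above, but with the plain FloatLayers plugin,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with FloatLayers with FloatWeights].newInstance()
      import hyperparameters.implicits._
      val weight1 = hyperparameters.FloatWeight(10)
      val weight2 = hyperparameters.FloatWeight(300)
      // With plain FloatLayers every use of an intermediate layer is forwarded independently.
      val weight1PlusWeight2 = weight1 + weight2
      weight1PlusWeight2.train.map { loss =>
        println(loss)
      }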

  10. trait FloatLiterals extends AnyRef

    A plugin that enables scala.Float in neural networks.

  11. trait FloatTraining extends Training

    A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Float.

    Author:

    杨博 (Yang Bo)

  12. trait FloatWeights extends Weights

    A plugin to create scala.Float weights.

    Author:

    杨博 (Yang Bo)

    Note

    A custom optimization algorithm for updating a FloatWeight can be implemented by creating a plugin that provides an overridden FloatOptimizer with an overridden FloatOptimizer.delta.

  13. trait HLists extends AnyRef

    Author:

    杨博 (Yang Bo)

  14. trait INDArrayLayers extends DoubleLayers with DoubleLiterals with ImplicitsSingleton

    A plugin that provides differentiable operators on neural networks whose Data and Delta are org.nd4j.linalg.api.ndarray.INDArray.

    Author:

    杨博 (Yang Bo)

    Note

    By default, the computation in an INDArrayLayer will be re-evaluated again and again if the INDArrayLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeINDArrayLayers instead of this INDArrayLayers in such a neural network.
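
    Example:
    1. A minimal sketch of building a layer with this plugin, assuming the same Factory combination style as the other examples on this page; the names are illustrative,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      import org.nd4j.linalg.factory.Nd4j
      import org.nd4j.linalg.api.ndarray.INDArray
      val hyperparameters = Factory[INDArrayTraining with ImplicitsSingleton with Operators with INDArrayLayers with INDArrayWeights].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.INDArrayWeight(Nd4j.rand(8, 1))
      // A single fully connected layer, as in the Builtins example above.
      def fullyConnectedLayer(input: INDArray): hyperparameters.INDArrayLayer = {
        input dot weight
      }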

  15. trait INDArrayLiterals extends AnyRef

    A plugin that enables org.nd4j.linalg.api.ndarray.INDArray in neural networks.

  16. trait INDArrayTraining extends Training

    A DeepLearning.scala plugin that enables the train method for neural networks whose loss is an org.nd4j.linalg.api.ndarray.INDArray.

    Author:

    杨博 (Yang Bo)

  17. trait INDArrayWeights extends Weights with ImplicitsSingleton

    A plugin to create org.nd4j.linalg.api.ndarray.INDArray weights.

    Author:

    杨博 (Yang Bo)

    Note

    A custom optimization algorithm for updating an INDArrayWeight can be implemented by creating a plugin that provides an overridden INDArrayOptimizer with an overridden INDArrayOptimizer.delta.
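
    Example:
    1. A rough sketch of what such a plugin might look like. The member names below (INDArrayOptimizerApi, the Optimizer refinement and super.delta) are assumptions based on the plugin pattern used elsewhere in DeepLearning.scala, not verified against this API,

      import com.thoughtworks.deeplearning.plugins.INDArrayWeights
      import org.nd4j.linalg.api.ndarray.INDArray
      // Sketch only: INDArrayOptimizerApi, Optimizer and super.delta are assumed names.
      trait FixedLearningRate extends INDArrayWeights {
        def learningRate: Double
        trait INDArrayOptimizerApi extends super.INDArrayOptimizerApi { this: INDArrayOptimizer =>
          // Scale the raw gradient before it is applied to the weight's data.
          override def delta: INDArray = super.delta mul learningRate
        }
        override type INDArrayOptimizer <: INDArrayOptimizerApi with Optimizer
      }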

  18. trait ImplicitsSingleton extends AnyRef

    A plugin that creates the instance of implicits.

    Any fields and methods in Implicits added by other plugins will be mixed-in and present in implicits.
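
    Example:
    1. For instance, following the Builtins example above,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[Builtins].newInstance()
      // `implicits` is a single object that mixes in the Implicits contributed by every
      // plugin; importing its members enables the DSL operators and conversions.
      import hyperparameters.implicits._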

  19. trait Layers extends AnyRef

    A plugin that enables Layer in neural networks.

  20. trait Logging extends Layers with Weights

    A plugin that logs uncaught exceptions raised from Layer and Weight.

    Author:

    杨博 (Yang Bo)

  21. trait Names extends Layers with Weights

    A plugin that automatically names Layers and Weights.

    Author:

    杨博 (Yang Bo)

  22. trait Operators extends AnyRef

    A plugin that contains definitions of polymorphic functions and methods.

    The implementations of polymorphic functions and methods can be found in FloatLayers.Implicits, DoubleLayers.Implicits and INDArrayLayers.Implicits.

    Author:

    杨博 (Yang Bo)

    See also

    Shapeless's documentation for the underlying mechanism of polymorphic functions.
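
    Example:
    1. A minimal sketch of applying the polymorphic max function, mirroring the hinge-loss expression in the Builtins example above; the names are illustrative,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      import org.nd4j.linalg.factory.Nd4j
      import org.nd4j.linalg.api.ndarray.INDArray
      val hyperparameters = Factory[Builtins].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.INDArrayWeight(Nd4j.rand(4, 1))
      val input: INDArray = Nd4j.rand(4, 4)
      // `max` is polymorphic: here it is applied to a Double literal and an INDArray layer.
      val rectified = hyperparameters.max(0.0, input dot weight)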

  23. trait Products extends HLists

    Author:

    杨博 (Yang Bo)

  24. trait Training extends AnyRef

    A DeepLearning.scala plugin that enables methods defined in DeepLearning.Ops for neural networks.

    Author:

    杨博 (Yang Bo)

  25. trait Weights extends AnyRef

    A plugin that enables Weight in neural networks.

    Author:

    杨博 (Yang Bo)
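
    Example:
    1. For instance, with the Builtins plugin shown above, a weight holds mutable state that is updated by training,

      import com.thoughtworks.deeplearning.plugins._
      import com.thoughtworks.feature.Factory
      val hyperparameters = Factory[Builtins].newInstance()
      import hyperparameters.implicits._
      val weight = hyperparameters.DoubleWeight(10)
      // `data` is the current value of the weight; training mutates it in place.
      println(weight.data)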

Value Members

  1. object INDArrayLayers

  2. object Layers

  3. object Logging

  4. object Operators
