Package com.thoughtworks.deeplearning

package deeplearning

This is the package documentation for DeepLearning.scala.

Overview

BufferedLayer, DifferentiableAny, DifferentiableNothing, Layer, Poly and Symbolic are the base packages that contain the necessary operations; all other packages depend on these base packages.

If you want to implement a layer, you need to know how to use the base packages.

Import guidelines

If you want to use the operations for a type T, you should import:

import com.thoughtworks.deeplearning.DifferentiableT._

For example, if you want to use the operations for INDArray, you should import:

import com.thoughtworks.deeplearning.DifferentiableINDArray._

If you write something like this:

def softmax(implicit scores: INDArray @Symbolic): INDArray @Symbolic = {
  val expScores = exp(scores)
  expScores / expScores.sum(1)
}

and the compiler reports an error such as:

Could not infer implicit value for com.thoughtworks.deeplearning.Symbolic[org.nd4j.linalg.api.ndarray.INDArray]

then you need to add this import:

import com.thoughtworks.deeplearning.DifferentiableINDArray._

If you write something like this:

def crossEntropyLossFunction(
    implicit pair: (INDArray :: INDArray :: HNil) @Symbolic): Double @Symbolic = {
  val score = pair.head
  val label = pair.tail.head
  -(label * log(score * 0.9 + 0.1) + (1.0 - label) * log(1.0 - score * 0.9)).mean
}

If the compiler reports an error such as:

value * is not a member of com.thoughtworks.deeplearning.Layer.Aux[com.thoughtworks.deeplearning.Layer.Tape.Aux[org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray],com.thoughtworks.deeplearning.DifferentiableINDArray.INDArrayPlaceholder.Tape]val bias = Nd4j.ones(numberOfOutputKernels).toWeight * 0.1...

you need to add these imports:

import com.thoughtworks.deeplearning.Poly.MathMethods.*
import com.thoughtworks.deeplearning.DifferentiableINDArray._

If the compiler reports an error such as:

not found: value log -(label * log(score * 0.9 + 0.1) + (1.0 - label) * log(1.0 - score * 0.9)).mean...

you need to add these imports:

import com.thoughtworks.deeplearning.Poly.MathFunctions.*
import com.thoughtworks.deeplearning.DifferentiableINDArray._

The operators +, -, *, / and the functions log, exp, abs, max and min are defined in MathMethods and MathFunctions, but they are implemented in each DifferentiableType object (such as DifferentiableINDArray), so you also need to import the implicits of the corresponding DifferentiableType.
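
Putting the guidelines together, the sketch below shows one plausible set of imports for the crossEntropyLossFunction example above. The nd4j, shapeless, DifferentiableHList and DifferentiableDouble imports are assumptions on my part (for the INDArray type, the HNil pair and the Double output); the remaining lines are the imports prescribed above.

import org.nd4j.linalg.api.ndarray.INDArray        // assumption: the INDArray type itself
import shapeless._                                  // assumption: :: and HNil for the input pair
import com.thoughtworks.deeplearning.Symbolic
import com.thoughtworks.deeplearning.DifferentiableINDArray._
import com.thoughtworks.deeplearning.DifferentiableHList._   // assumption: operations on the HList input
import com.thoughtworks.deeplearning.DifferentiableDouble._  // assumption: operations on the Double output
import com.thoughtworks.deeplearning.Poly.MathMethods.*
import com.thoughtworks.deeplearning.Poly.MathFunctions.*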

Composability

Neural networks created by DeepLearning.scala are composable. You can create large networks by combining smaller networks. If two larger networks share some sub-networks, then the weights of the shared sub-networks, trained as part of one network, also affect the other network.
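
As a hedged illustration, the sketch below shares one Weight between two layer trees, written with the raw case classes from the "Tree structure of Layer" example in the Layer documentation below; networkA and networkB are hypothetical names, and the constructor signatures are assumed to match that example.

// Both trees reference the same Weight instance, so training either
// network updates the weight seen by the other.
val sharedWeight = Weight(2.0)

val networkA: Layer.Aux[Tape.Aux[Double, Double], Tape.Aux[Double, Double]] =
  Times(Plus(Literal(1.0), Identity[Double, Double]()), sharedWeight)

val networkB: Layer.Aux[Tape.Aux[Double, Double], Tape.Aux[Double, Double]] =
  Plus(Identity[Double, Double](), sharedWeight)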

See also

Compose

Type Members

  1. trait CumulativeLayer extends Layer

    A Layer that minimizes the computation during both forward pass and backward pass.

    During the forward pass, the result is cached.

    During the backward pass, the deltas are accumulated in the Tape until flush.

    Author:

    杨博 (Yang Bo) <[email protected]>

    See also

    Layer.Output

  2. trait Layer extends AnyRef

    A Layer represents a neural network. Each Layer can be included as a sub-network of another Layer, forming a more complex neural network. The nesting structure of Layer can be used to represent a mathematical expression or a coarse-grained neural network structure. When a neural network is written, most of its elements are placeholders; when training begins, actual data flows into the network.

    Tree structure of Layer
    val myLayer: Layer.Aux[Tape.Aux[Double, Double], Tape.Aux[Double, Double]] = {
      Times(
        Plus(
          Literal(1.0),
          Identity[Double, Double]()
        ),
        Weight(2.0)
      )
    }

    With Symbolic, the above mathematical expression can be written equivalently as (1.0 + x) * 2.0.toWeight. Here 2.0.toWeight represents a variable whose initial value is 2.0; its value is updated during each iteration of training.
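
    For instance, a hedged sketch of that expression written as a symbolic method (myLayerSymbolic is a hypothetical name; assumes the Symbolic, DifferentiableDouble and Poly.MathMethods implicits are imported):

    def myLayerSymbolic(implicit x: Double @Symbolic): Double @Symbolic = {
      (1.0 + x) * 2.0.toWeight
    }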

    Both Times and Plus are case classes, so myLayer is a nested tree structure composed of these case classes. Times and Plus are placeholders.

    Weight is a Layer containing a weight, whose initial value is 2.0.

    Identity is a Layer whose output equals its input; it returns the input unchanged. The Identity here is the placeholder for the input.

    Literal is a Layer containing a constant.

    Iteration

    Each training step of the network is called an iteration. An iteration consists of two stages, forward and backward, which together form one complete pass of backpropagation (https://en.wikipedia.org/wiki/Backpropagation).

    Forward

    When invoking forward on a Layer.Aux[A, B], A is the input type, B is the output type, and both A and B are Tapes. The code is explained segment by segment below.

    For example:

    val a = 3.0 // an example input value
    val inputTape: Tape.Aux[Double, Double] = Literal(a)
    val outputTape = myLayer.forward(inputTape)

    When myLayer.forward(inputTape) is invoked, the forward of Times is invoked first; its pseudo code is as follows:

    final case class Times(operand1: Layer, operand2: Layer) extends Layer {
      def forward(input: Tape): Output = {
        val upstream1 = operand1.forward(input)
        val upstream2 = operand2.forward(input)
        new Output(upstream1, upstream2) // the concrete implementation is omitted here; only the recursion matters
      }
      final class Output(upstream1: Tape, upstream2: Tape) extends Tape { ... }
    }

    myLayer.operand1 is the Plus and myLayer.operand2 is the Weight, so upstream1 and upstream2 are the results of the forward of operand1 and operand2 respectively.

    The forward code of Plus is similar to that of Times. When the forward of Plus is invoked, its operand1 is the Literal and its operand2 is the Identity, so the forward of Literal and the forward of Identity are each invoked in turn.

    When the forward of Identity is invoked, the same input is returned. The pseudo code for the forward of Identity is as follows:

    def forward(inputTape: Tape.Aux[Double, Double]) = inputTape

    Therefore, the input is the x in the mathematical expression (1.0 + x) * 2.0.toWeight, and in this way the input is propagated into the neural network.

    The return value outputTape of myLayer.forward is a Tape. A tree composed of Tapes is eventually generated, with a structure similar to that of myLayer.

    Thus, via layer-by-layer propagation, the inputTape passed to myLayer.forward is finally returned by Identity and combined into the newly generated Tape tree.
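
    As a rough, simplified illustration (not actual code; class names are abbreviated), the generated Tape tree mirrors the structure of myLayer:

    // Pseudo-structure of outputTape after myLayer.forward(inputTape):
    // Times.Output(
    //   Plus.Output(
    //     Literal(1.0),   // a Literal is itself usable as a Tape, as shown above
    //     inputTape       // Identity's forward returns the input Tape unchanged
    //   ),
    //   Weight(2.0)       // the Tape holding the current weight value
    // )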

    The computation result held by outputTape can then be read, and its backward method invoked, for example:

    try {
      val loss = outputTape.value
      outputTape.backward(loss)
      loss
    } finally {
      outputTape.close()
    }

    outputTape.value is the computation result of the mathematical expression (1.0 + x) * 2.0.toWeight.
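
    For example, with the input a = 3.0 from the snippet above and the weight at its initial value 2.0, outputTape.value is (1.0 + 3.0) * 2.0 = 8.0.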

    Backward

    outputTape.backward is the backward of Times.Output; its pseudo code is as follows:

    case class Times(operand1: Layer, operand2: Layer) extends Layer {
      def forward = ...
      class Output(upstream1: Tape, upstream2: Tape) extends Tape {
        private def upstreamDelta1(outputDelta: Double) = ???
        private def upstreamDelta2(outputDelta: Double) = ???
        override protected def backward(outputDelta: Double): Unit = {
          upstream1.backward(upstreamDelta1(outputDelta))
          upstream2.backward(upstreamDelta2(outputDelta))
        }
      }
    }

    outputTape.upstream1 and outputTape.upstream2 are the results of the forward of operand1 and operand2 respectively; backward is then invoked on outputTape.upstream1 and outputTape.upstream2.

    The backward code of Plus is similar to that of Times. When the backward of Plus is invoked, its upstream1 and upstream2 are the results of the forward of Literal and Identity respectively, and backward is invoked on each of them in turn.

    The Weight is updated during backward; refer to updateDouble.
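
    As a side note on the two ??? helpers in the Times pseudo code above: for multiplication, the upstream deltas follow the product rule. The standalone sketch below only illustrates that arithmetic and is not the library's actual implementation:

    // Standalone sketch of the product rule used by a Times-like node:
    // for z = x * y we have dz/dx = y and dz/dy = x, so each upstream delta
    // is the output delta scaled by the other operand's forward value.
    object ProductRuleSketch {
      def upstreamDeltas(x: Double, y: Double, outputDelta: Double): (Double, Double) =
        (outputDelta * y, outputDelta * x)

      def main(args: Array[String]): Unit = {
        // With x = 4.0 (the value of 1.0 + 3.0 when a is 3.0) and y = 2.0 (the weight),
        // an output delta of 1.0 propagates as (2.0, 4.0).
        println(upstreamDeltas(x = 4.0, y = 2.0, outputDelta = 1.0))
      }
    }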

    Aux & Symbolic API

    Layer.Aux[A, B] means that the Input type is A and the Output type is B. Tape.Aux[C, D] means that the Data type is C and the Delta type is D.

    Layer.Aux and Tape.Aux can be combined. For example, Layer.Aux[Tape.Aux[A, B], Tape.Aux[C, D]] represents a layer whose input is a Tape with data type A and delta type B, and whose output is a Tape with data type C and delta type D.

    Aux is a design pattern that implements type refinement and can be used to constrain the range of type parameters.

    Generally, we do not handwrite Aux types, because Symbolic achieves the same effect. For example, when used for a symbolic method's internal variables and return value, Layer.Aux[Tape.Aux[INDArray, INDArray], Tape.Aux[INDArray, INDArray]] and INDArray @Symbolic are equivalent, so we usually write the Symbolic form instead of the Aux form.
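
    For instance, in the hedged sketch below (softmaxVerbose is a hypothetical name; assumes imports of Layer, Layer.Tape, Symbolic, the DifferentiableINDArray implicits and Poly's math members), the two val definitions ascribe the same type in the two styles:

    def softmaxVerbose(implicit scores: INDArray @Symbolic): INDArray @Symbolic = {
      // The handwritten Aux form and the @Symbolic shorthand denote the same type.
      val verbose: Layer.Aux[Tape.Aux[INDArray, INDArray], Tape.Aux[INDArray, INDArray]] = exp(scores)
      val shorthand: INDArray @Symbolic = exp(scores)
      shorthand / shorthand.sum(1)
    }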

    See also

    Symbolic

    Backpropagation

    type refinement

    aux pattern evolution

    aux pattern

  3. trait Symbolic[NativeOutput] extends AnyRef

    Provides the @Symbolic annotation for creating symbolic methods, in which you can create Layers from mathematical formulas.

    Symbolic is a dependent type class that calculates a specific Layer type according to NativeOutput. Combined with the implicit-dependent-type compiler plugin, it can be treated as a type annotation in the form NativeOutput @Symbolic, converting NativeOutput to a specific Layer type.

    Three usages of @Symbolic

    Used for the implicit parameter type of a symbolic method

    If an implicit parameter of a method is marked with @Symbolic, then that method is a symbolic method, and the implicit parameter type marked with @Symbolic is the input type of the symbolic method. In this case, NativeOutput @Symbolic will be expanded as:

    Identity[NativeOutput, Derivative type of NativeOutput]

    For example:

    def sumNetwork(implicit scores: INDArray @Symbolic): Double @Symbolic = {
      exp(scores).sum
    }

    In the above code, because the derivative type of INDArray is also INDArray, the input type INDArray @Symbolic of sumNetwork, once expanded, is Identity[INDArray, INDArray].

    Used for the internal variables and return value of a symbolic method

    A NativeOutput @Symbolic inside a symbolic method, or at the return position of a symbolic method, will be expanded as:

    Layer.Aux[Tape.Aux[value type of input type, derivative type of input type], Tape.Aux[NativeOutput, derivative type of NativeOutput]]

    For example:

    def sumNetwork(implicit scores: INDArray @Symbolic): Double @Symbolic = {
      val expScores: INDArray @Symbolic = exp(scores)
      val result: Double @Symbolic = expScores.sum
      result
    }

    In the above code, the type INDArray @Symbolic of expScores is expanded as:

    Layer.Aux[Tape.Aux[INDArray, INDArray], Tape.Aux[INDArray, INDArray]]

    The type Double @Symbolic of result is expanded as:

    Layer.Aux[Tape.Aux[INDArray, INDArray], Tape.Aux[Double, Double]]

    Used outside of symbolic methods

    A (NativeInput => NativeOutput) @Symbolic outside a symbolic method will be expanded as:

    Layer.Aux[Tape.Aux[NativeInput, derivative type of NativeInput], Tape.Aux[NativeOutput, derivative type of NativeOutput]]

    For example:

    val predictor: (INDArray => Double) @Symbolic = sumNetwork

    In the above code, the type (INDArray => Double) @Symbolic of predictor is expanded as:

    Layer.Aux[Tape.Aux[INDArray, INDArray], Tape.Aux[Double, Double]]

    Custom symbolic types

    @Symbolic determines the mapping between a primitive type and its derivative type by checking the Symbolic.ToLiteral implicit value. Therefore, @Symbolic can support a custom symbolic type once you define your own implicit Symbolic.ToLiteral.

    For example, if you want to support Short @Symbolic, using Float as the derivative type of Short, you can do the following:

    implicit object ShortToLiteral extends ToLiteral[Short] {
      override type Data = Short
      override type Delta = Float
      override def apply(data: Short) = Literal(data)
    }
    
    def makeShortNetwork(implicit input: Short @Symbolic): Short @Symbolic = {
      input
    }
    
    val shortNetwork: (Short => Short) @Symbolic = makeShortNetwork

    Thus, the type of shortNetwork is expanded as:

    Layer.Aux[Tape.Aux[Short, Float], Tape.Aux[Short, Float]]

    Annotations

    @implicitNotFound( ... )
    See also

    Layer.Tape#Delta

    Symbolic.ToLiteral

    Symbolic.Layers.Identity

Value Members

  1. object CumulativeLayer

  2. object DifferentiableAny

    A namespace of common operators for any layers.

    After importing DifferentiableAny._, the following methods will be available on any layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  3. object DifferentiableBoolean

    A namespace of common operators for Boolean layers.

    After importing DifferentiableBoolean._, the following methods will be available on Boolean layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  4. object DifferentiableCoproduct

    A namespace of common operators for Coproduct layers.

    After importing DifferentiableCoproduct._, the following methods will be available on Coproduct layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  5. object DifferentiableDouble

    A namespace of common operators for Double layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  6. object DifferentiableFloat

    A namespace of common operators for Float layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  7. object DifferentiableHList

    A namespace of common operators for HList layers.

    After importing DifferentiableHList._, the following methods will be available on HList layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  8. object DifferentiableINDArray

    A namespace of common operators for INDArray layers.

    After importing DifferentiableINDArray._, you will be able to use the MathFunctions and MathMethods implementations for INDArray layers, the extension methods from INDArrayLayerOps, and additional operations such as conv2d.

    Author:

    杨博 (Yang Bo) <[email protected]>

  9. object DifferentiableInt

    A namespace of common operators for Int layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  10. object DifferentiableNothing

    A namespace of common operators for all layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  11. object DifferentiableSeq

    A namespace of common operators for Seq layers.

    After importing DifferentiableSeq._, the following methods will be available on Seq layers.

    Author:

    杨博 (Yang Bo) <[email protected]>

  12. object Layer

  13. object Poly

    A namespace of common math operators.

    MathMethods and MathFunctions provide operators and functions such as +, -, *, /, log, abs, max, min and exp; their concrete implementations live in the type-specific Differentiable objects such as DifferentiableINDArray.
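
    For example, a hedged usage sketch (scaledExp is a hypothetical name; the org.nd4j INDArray type and the Symbolic annotation are assumed to be imported as well):

    import com.thoughtworks.deeplearning.Poly.MathFunctions.exp
    import com.thoughtworks.deeplearning.Poly.MathMethods.*
    import com.thoughtworks.deeplearning.DifferentiableINDArray._

    def scaledExp(implicit x: INDArray @Symbolic): INDArray @Symbolic = {
      exp(x) * 0.5
    }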

    Author:

    杨博 (Yang Bo) <[email protected]>

    See also

    DifferentiableINDArray.exp(INDArray)

    DifferentiableINDArray.Double+INDArray

  14. object Symbolic extends LowPrioritySymbolic

    There are two ways to convert a value to a Layer.

    The first way is to invoke toLayer explicitly, such as:

    def createMyNeuralNetwork(implicit input: Float @Symbolic): Float @Symbolic = {
      val floatLayer: Float @Symbolic = 1.0f.toLayer
      floatLayer
    }

    The second way is the implicit conversion autoToLayer, which is applied automatically, such as:

    def createMyNeuralNetwork(implicit input: Float @Symbolic): Float @Symbolic = {
      val floatLayer: Float @Symbolic = 1.0f
      floatLayer
    }

    In order for the above code to compile, you will need:

    import com.thoughtworks.deeplearning.Symbolic._
    import com.thoughtworks.deeplearning.Symbolic
    import com.thoughtworks.deeplearning.DifferentiableFloat._
