package core

Type Members

  1. trait Activator [N] extends (N) ⇒ N with UFunc with MappingUFunc with Serializable

    The activator function with its derivative.
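
    For illustration, a conceptual sketch of the idea, pairing an activation function with its derivative; this stand-in is not the actual trait, which also mixes in Breeze's UFunc machinery, and the names below are assumptions:

      // conceptual stand-in for an activator: a function plus its derivative
      case class SimpleActivator(f: Double => Double, derivative: Double => Double)

      val sigmoid = SimpleActivator(
        x => 1.0 / (1.0 + math.exp(-x)),          // σ(x)
        x => {
          val s = 1.0 / (1.0 + math.exp(-x))
          s * (1.0 - s)                           // σ'(x) = σ(x)(1 - σ(x))
        }
      )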

  2. trait Approximation [V] extends Serializable

    Approximates the weight residing in the respective layer in weights using lossFunction. For GPU implementations, calling sync between updates is necessary.

  3. trait CNN [V] extends Network[V, Tensor3D[V], core.Network.Vector[V]]
  4. trait Constructor [V, +N <: Network[_, _, _]] extends AnyRef

    A minimal constructor for a Network.

    Annotations
    @implicitNotFound( ... )
  5. case class Debuggable [V]() extends Update[V] with Product with Serializable

    Exposes the lastGradients for debugging.
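
    A hedged usage sketch: plug it into Settings as the update rule and read the captured gradients after training (the exact shape of lastGradients is not documented here):

      val debuggable = Debuggable[Double]()
      val settings   = Settings[Double](updateRule = debuggable)
      // ... after training, inspect debuggable.lastGradients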

  6. trait DistCNN [V] extends Network[V, Tensor3D[V], core.Network.Vector[V]] with DistributedTraining
  7. trait DistFFN [V] extends Network[V, core.Network.Vector[V], core.Network.Vector[V]] with DistributedTraining
  8. trait DistributedTraining extends AnyRef
  9. trait FFN [V] extends Network[V, core.Network.Vector[V], core.Network.Vector[V]]
  10. case class FiniteDifferences [V](Δ: V)(implicit evidence$1: Numeric[V]) extends Approximation[V] with Product with Serializable

    Approximates gradients using central finite differences with step size Δ.
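
    As a scalar sketch of the technique (not the actual implementation), the central finite difference approximates the derivative of the loss with respect to one weight:

      // dL/dw ≈ (L(w + Δ) - L(w - Δ)) / (2Δ)
      def centralDiff(loss: Double => Double, w: Double, delta: Double): Double =
        (loss(w + delta) - loss(w - delta)) / (2.0 * delta)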

  11. trait HasActivator [N] extends AnyRef

    Label for neurons in the network performing a function on their synapses.

  12. trait IllusionBreaker extends AnyRef

    Since

    03.01.16

  13. trait KeepBestLogic [V] extends AnyRef
  14. trait Lexicon extends AnyRef
  15. trait LossFuncGrapher extends AnyRef

    Since

    10.07.16

  16. case class LossFuncOutput (file: Option[String] = None, action: Option[(Double) ⇒ Unit] = None) extends Product with Serializable
  17. trait LossFunction [V] extends Layout

    A loss function takes the target y and the prediction x and computes the loss and gradient, which will be backpropagated into the raw output layer of a net.

  18. case class Momentum [V](μ: V) extends Update[V] with Product with Serializable

    The momentum update jumps downhill toward the loss minimum, iteratively gaining momentum from the varying gradients; the accumulated momentum is decelerated by factor μ.
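
    A scalar sketch, assuming the classical momentum form (the actual implementation operates on weight matrices):

      // the velocity v accumulates gradient steps and is decayed by factor μ (mu)
      final case class MomentumState(w: Double, v: Double)

      def momentumStep(s: MomentumState, grad: Double, mu: Double, lr: Double): MomentumState = {
        val v = mu * s.v - lr * grad   // decelerate old velocity, add new gradient step
        MomentumState(s.w + v, v)
      }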

  19. trait Network [V, In, Out] extends (In) ⇒ Out with Logs with LossFuncGrapher with IllusionBreaker with Welcoming with Serializable
  20. case class Node (host: String, port: Int) extends Product with Serializable

    Distributed training node

  21. trait RNN [V] extends Network[V, core.Network.Vectors[V], core.Network.Vectors[V]]
  22. trait Regularization extends Serializable

    Marker trait for regulators

  23. case class Settings [V](verbose: Boolean = true, learningRate: Network.LearningRate = { case (_, _) => 1E-4 }, updateRule: Update[V] = Vanilla[V](), precision: Double = 1E-5, iterations: Int = 100, prettyPrint: Boolean = true, coordinator: Node = Node("localhost", 2552), transport: Transport = Transport(100000, "128 MiB"), parallelism: Option[Int] = ..., batchSize: Option[Int] = None, gcThreshold: Option[Long] = None, lossFuncOutput: Option[LossFuncOutput] = None, waypoint: Option[Waypoint[V]] = None, approximation: Option[Approximation[V]] = None, regularization: Option[Regularization] = None, partitions: Option[Set[Int]] = None, specifics: Option[Map[String, Double]] = None) extends Serializable with Product

    Settings of a neural network, where:

      verbose         Indicates logging behavior on console.
      learningRate    A function from current iteration and learning rate, producing a new learning rate.
      updateRule      Defines the relationship between gradient, weights and learning rate during training.
      precision       The training will stop if precision is high enough.
      iterations      The training will stop if the maximum number of iterations is reached.
      prettyPrint     If true, the layout is rendered graphically on console.
      coordinator     The coordinator host address for distributed training.
      transport       Transport throughput specifics for distributed training.
      parallelism     Controls how many threads are used for distributed training.
      batchSize       Controls how many samples are presented per weight update (1 = online, ..., n = full batch).
      gcThreshold     Fine-tunes GC for GPU; the threshold is set in bytes.
      lossFuncOutput  Prints the loss to the specified file/closure.
      waypoint        Periodic actions can be executed, e.g. saving the weights every nth step.
      approximation   If set, the gradients are approximated numerically.
      regularization  The respective regulator tries to avoid over-fitting.
      partitions      A sequential training sequence can be partitioned for RNNs (0-index based).
      specifics       Some nets use specific parameters set in the specifics map.
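
    For example, a minimal construction overriding a few of the defaults (parameter names as in the signature above):

      val settings = Settings[Double](
        learningRate = { case (_, _) => 1E-3 },  // constant rate, same shape as the default
        precision    = 1E-4,
        iterations   = 500,
        batchSize    = Some(8)
      )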

  24. case class Softmax [V]() extends LossFunction[V] with Product with Serializable

    L = -Σ(y * log(eˣ / Σeˣ))

    Works for 1-of-K classification under a cross-entropy regime, where y is the target and x the prediction. The target is expressed using one-hot encoding, e.g. (0, 1, 0, 0) where 1 marks the true class. The first sum Σ is taken over the full batch and both exponentials give a convex functional form. The second sum Σ produces scores in the range [0.0, 1.0] that sum to 1 and are interpretable as probabilities.
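
    A Breeze sketch of the math for a single prediction row; this is a standalone illustration, not the library's internal code path, and it shifts by the row maximum for numerical stability (cf. subRowMax below):

      import breeze.linalg.{DenseVector, max, sum}
      import breeze.numerics.{exp, log}

      // softmax scores: eˣ / Σeˣ, shifted by the row max for numerical stability
      def softmax(x: DenseVector[Double]): DenseVector[Double] = {
        val e = exp(x - max(x))
        e / sum(e)
      }

      // cross-entropy against a one-hot target y: L = -Σ(y * log(softmax(x)))
      def crossEntropy(y: DenseVector[Double], x: DenseVector[Double]): Double =
        -sum(y *:* log(softmax(x)))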

  25. case class SquaredMeanError [V]() extends LossFunction[V] with Product with Serializable

    L = Σ ½(y - x)²

    Where y is the target and x the prediction. The sum Σ is taken over the full batch and the square ² gives a convex functional form.
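
    A Breeze sketch of this loss and its gradient with respect to the prediction x (standalone illustration, not the library's internals):

      import breeze.linalg.{DenseVector, sum}

      // L = Σ ½(y - x)²
      def squaredMeanError(y: DenseVector[Double], x: DenseVector[Double]): Double = {
        val d = y - x
        0.5 * sum(d *:* d)
      }

      // ∂L/∂x = (x - y), the gradient backpropagated into the raw output layer
      def gradient(y: DenseVector[Double], x: DenseVector[Double]): DenseVector[Double] =
        x - y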

  26. case class Transport (messageGroupSize: Int, frameSize: String) extends Product with Serializable

    The messageGroupSize controls how many weights per batch will be sent. The frameSize is the maximum message size for inter-node communication.
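
    For example, mirroring the default shown in the Settings signature above:

      val transport = Transport(messageGroupSize = 100000, frameSize = "128 MiB")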

  27. trait Update [V] extends AnyRef

    Updates weights ws using derivatives dws and learningRate for layer.

  28. case class Vanilla [V]() extends Update[V] with Product with Serializable

    Gingerly stepping vanilla gradient descent:

      Weights_{n} = Weights_{n-1} - (Grads_{n-1} * learningRate)
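
    As a scalar illustration of the formula above:

      // one vanilla step on a single weight
      def vanillaStep(w: Double, grad: Double, learningRate: Double): Double =
        w - grad * learningRate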

  29. case class Waypoint [V](nth: Int, action: (Int, Network.Weights[V]) ⇒ Unit) extends Product with Serializable

    Performs function action every nth step. The function is passed the iteration count and a snapshot of the weights.
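
    For example, logging every 1000th iteration (the action here is purely illustrative):

      val waypoint = Waypoint[Double](1000, (iteration, ws) =>
        println(s"waypoint at iteration $iteration"))   // ws is the weights snapshot

      val settings = Settings[Double](waypoint = Some(waypoint))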

  30. trait WaypointLogic [V] extends AnyRef
  31. trait WeightBreeder [V] extends (Seq[Layer]) ⇒ core.Network.Weights[V]

    A WeightBreeder produces a weight matrix for each Layer.

    Annotations
    @implicitNotFound( ... )
  32. trait Welcoming extends AnyRef

    Since

    09.07.16

Value Members

  1. object Activator extends Serializable

    Activator functions.

  2. object IllusionBreaker
  3. object KeepBest extends Regularization with Product with Serializable

    Keeps the weights which had the smallest loss during training. In particular, this is useful for RNNs, as they can oscillate during training.

  4. object Network extends Lexicon with Serializable

    Since

    03.01.16

  5. object SoftmaxImpl

    Computes eˣ / Σeˣ for given matrix x by row.

  6. object WeightBreeder
  7. object convolute extends UFunc

    Convolves the neuroflow.common.Tensor3D linearized in in, producing a new one.

  8. object convolute_backprop extends UFunc

    Backprops convolution of a neuroflow.common.Tensor3D linearized in in.

  9. object reshape_batch extends UFunc

    Reshapes matrix in by transposing the batch. Examples given:

      |1 2 3|      |1 1 1|
      |1 2 3|  ->  |2 2 2|
      |1 2 3|      |3 3 3|

      |1 1 2 2|      |1 1 1 1 1 1|
      |1 1 2 2|  ->  |2 2 2 2 2 2|
      |1 1 2 2|

  10. object reshape_batch_backprop extends UFunc

    Reshapes matrix in by transposing the batch. Examples given:

      |1 2 3|      |1 1 1|
      |1 2 3|  <-  |2 2 2|
      |1 2 3|      |3 3 3|

      |1 1 2 2|      |1 1 1 1 1 1|
      |1 1 2 2|  <-  |2 2 2 2 2 2|
      |1 1 2 2|

  11. object subRowMax extends UFunc

    Subtracts the row maximum from the row elements. Example given:

      |1 2 1|      |-1 0 -1|
      |2 2 2|  ->  | 0 0 0|
      |0 1 0|      |-1 0 -1|
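
    A Breeze sketch of the same per-row operation (standalone illustration, not the library's implementation); it is the usual shift for a numerically stable softmax:

      import breeze.linalg._

      // subtract each row's maximum from that row's elements
      def subRowMaxSketch(m: DenseMatrix[Double]): DenseMatrix[Double] =
        m(*, ::).map(row => row - max(row))

      val m = DenseMatrix((1.0, 2.0, 1.0), (2.0, 2.0, 2.0), (0.0, 1.0, 0.0))
      subRowMaxSketch(m)   // rows become (-1 0 -1), (0 0 0), (-1 0 -1)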
