package core
Type Members
- trait Activator [N] extends (N) ⇒ N with UFunc with MappingUFunc with Serializable
  The activator function with its derivative.
- trait Approximation [V] extends Serializable
  Approximates weight residing in the respective layer in weights using lossFunction. For GPU implementations, calling sync between updates is necessary.
- trait CNN [V] extends Network[V, Tensor3D[V], core.Network.Vector[V]]
- trait Constructor [V, +N <: Network[_, _, _]] extends AnyRef
  A minimal constructor for a Network.
  Annotations: @implicitNotFound( ... )
- case class Debuggable [V]() extends Update[V] with Product with Serializable
  Exposes the lastGradients for debugging.
- trait DistCNN [V] extends Network[V, Tensor3D[V], core.Network.Vector[V]] with DistributedTraining
- trait DistFFN [V] extends Network[V, core.Network.Vector[V], core.Network.Vector[V]] with DistributedTraining
- trait DistributedTraining extends AnyRef
- trait FFN [V] extends Network[V, core.Network.Vector[V], core.Network.Vector[V]]
- case class FiniteDifferences [V](Δ: V)(implicit evidence$1: Numeric[V]) extends Approximation[V] with Product with Serializable
  Approximates gradients using central finite differences with step size Δ.
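A minimal sketch of the central-difference scheme behind FiniteDifferences, in plain Scala (the function and names here are illustrative, not the library's API):

```scala
// Central finite differences: perturb each weight by ±delta and take the
// symmetric difference quotient of the loss, (L(w+Δ) - L(w-Δ)) / 2Δ.
def centralDiff(loss: Vector[Double] => Double,
                ws: Vector[Double], delta: Double): Vector[Double] =
  ws.indices.toVector.map { i =>
    val plus  = loss(ws.updated(i, ws(i) + delta))
    val minus = loss(ws.updated(i, ws(i) - delta))
    (plus - minus) / (2 * delta)
  }
```

For the quadratic loss Σw², this recovers the analytic gradient 2w up to O(Δ²) error.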
- trait HasActivator [N] extends AnyRef
  Label for neurons in the network performing a function on their synapses.
- trait IllusionBreaker extends AnyRef
  Since: 03.01.16
- trait KeepBestLogic [V] extends AnyRef
- trait Lexicon extends AnyRef
- trait LossFuncGrapher extends AnyRef
  Since: 10.07.16
- case class LossFuncOutput (file: Option[String] = None, action: Option[(Double) ⇒ Unit] = None) extends Product with Serializable
- trait LossFunction [V] extends Layout
  A loss function takes the target y and the prediction x and computes the loss and the gradient, which will be backpropagated into the raw output layer of a net.
- case class Momentum [V](μ: V) extends Update[V] with Product with Serializable
  The momentum update jumps downhill into the minimum of the loss, iteratively regaining momentum in all directions from varying gradients, decelerated by factor μ.
- trait Network [V, In, Out] extends (In) ⇒ Out with Logs with LossFuncGrapher with IllusionBreaker with Welcoming with Serializable
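The Momentum rule can be sketched in plain Scala as a velocity that accumulates gradients and is decelerated by μ (an illustrative sketch, not the library's implementation):

```scala
// One momentum step: the new velocity blends the previous velocity
// (scaled by mu) with the current gradient, and the weights move along it.
def momentumStep(ws: Vector[Double], dws: Vector[Double], velocity: Vector[Double],
                 mu: Double, learningRate: Double): (Vector[Double], Vector[Double]) = {
  val v = velocity.zip(dws).map { case (v0, g) => mu * v0 - learningRate * g }
  (ws.zip(v).map { case (w, vi) => w + vi }, v)
}
```

With zero initial velocity, the first step reduces to the plain gradient step; later steps keep a damped memory of past directions.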
- case class Node (host: String, port: Int) extends Product with Serializable
  A distributed training node.
- trait RNN [V] extends Network[V, core.Network.Vectors[V], core.Network.Vectors[V]]
- trait Regularization extends Serializable
  Marker trait for regulators.
- case class Settings [V](verbose: Boolean = true, learningRate: Network.LearningRate = { case (_, _) => 1E-4 }, updateRule: Update[V] = Vanilla[V](), precision: Double = 1E-5, iterations: Int = 100, prettyPrint: Boolean = true, coordinator: Node = Node("localhost", 2552), transport: Transport = Transport(100000, "128 MiB"), parallelism: Option[Int] = ..., batchSize: Option[Int] = None, gcThreshold: Option[Long] = None, lossFuncOutput: Option[LossFuncOutput] = None, waypoint: Option[Waypoint[V]] = None, approximation: Option[Approximation[V]] = None, regularization: Option[Regularization] = None, partitions: Option[Set[Int]] = None, specifics: Option[Map[String, Double]] = None) extends Serializable with Product
  Settings of a neural network, where:
    verbose: indicates logging behavior on console.
    learningRate: a function from current iteration and learning rate, producing a new learning rate.
    updateRule: defines the relationship between gradient, weights and learning rate during training.
    precision: the training will stop if precision is high enough.
    iterations: the training will stop if the maximum number of iterations is reached.
    prettyPrint: if true, the layout is rendered graphically on console.
    coordinator: the coordinator host address for distributed training.
    transport: transport throughput specifics for distributed training.
    parallelism: controls how many threads are used for distributed training.
    batchSize: controls how many samples are presented per weight update (1 = online, ..., n = full batch).
    gcThreshold: fine-tunes GC for GPU; the threshold is set in bytes.
    lossFuncOutput: prints the loss to the specified file/closure.
    waypoint: periodic actions can be executed, e.g. saving the weights every nth step.
    approximation: if set, the gradients are approximated numerically.
    regularization: the respective regulator tries to avoid over-fitting.
    partitions: a sequential training sequence can be partitioned for RNNs (0-index based).
    specifics: some nets use specific parameters set in the specifics map.
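The learningRate setting is a function from the current iteration and rate to a new rate, as in the default { case (_, _) => 1E-4 }. A step-decay schedule in that shape can be sketched in plain Scala (independent of the library's type aliases):

```scala
// Halve the learning rate on every 1000th iteration, otherwise keep it.
val learningRate: PartialFunction[(Int, Double), Double] = {
  case (iter, r) if iter % 1000 == 0 => r * 0.5
  case (_, r)                        => r
}
```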
- case class Softmax [V]() extends LossFunction[V] with Product with Serializable
  L = -Σ(y * log(e^x / Σe^x))
  Works for 1-of-K classification under a cross-entropy regime, where y is the target and x the prediction. The target is expressed using one-hot encoding, e.g. (0, 1, 0, 0), where 1 marks the true class. The first sum Σ is taken over the full batch and both exponentials give a convex functional form. The second sum Σ produces scores in the range [0.0, 1.0] that sum up to 1 and are interpretable as percentages.
- case class SquaredMeanError [V]() extends LossFunction[V] with Product with Serializable
  L = Σ ½(y - x)²
  Where y is the target and x the prediction. The sum Σ is taken over the full batch and the square ² gives a convex functional form.
- case class Transport (messageGroupSize: Int, frameSize: String) extends Product with Serializable
  The messageGroupSize controls how many weights per batch will be sent. The frameSize is the maximum message size for inter-node communication.
- trait Update [V] extends AnyRef
  Updates weights ws using derivatives dws and learningRate for layer.
- case class Vanilla [V]() extends Update[V] with Product with Serializable
  Gingerly stepping vanilla: Weights_{n} = Weights_{n-1} - (Grads_{n-1} * learningRate)
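The Vanilla formula is a one-liner over plain vectors (illustrative names):

```scala
// Weights_{n} = Weights_{n-1} - (Grads_{n-1} * learningRate)
def vanillaStep(ws: Vector[Double], dws: Vector[Double], learningRate: Double): Vector[Double] =
  ws.zip(dws).map { case (w, g) => w - g * learningRate }
```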
- case class Waypoint [V](nth: Int, action: (Int, Network.Weights[V]) ⇒ Unit) extends Product with Serializable
  Performs function action every nth step. The function is passed the iteration count and a snapshot of the weights.
- trait WaypointLogic [V] extends AnyRef
- trait WeightBreeder [V] extends (Seq[Layer]) ⇒ core.Network.Weights[V]
  A WeightBreeder produces a weight matrix for each Layer.
  Annotations: @implicitNotFound( ... )
- trait Welcoming extends AnyRef
  Since: 09.07.16
Value Members
- object Activator extends Serializable
  Activator functions.
- object IllusionBreaker
- object KeepBest extends Regularization with Product with Serializable
  Keeps the weights which had the smallest loss during training. In particular, this is useful for RNNs, as they can oscillate during training.
- object Network extends Lexicon with Serializable
  Since: 03.01.16
- object SoftmaxImpl
  Computes e^x / Σe^x for a given matrix x, by row.
- object WeightBreeder
- object convolute extends UFunc
  Convolutes a neuroflow.common.Tensor3D linearized in in, producing a new one.
- object convolute_backprop extends UFunc
  Backprops the convolution of a neuroflow.common.Tensor3D linearized in in.
- object reshape_batch extends UFunc
  Reshapes matrix in by transposing the batch. Examples given:
    |1 2 3|      |1 1 1|
    |1 2 3|  ->  |2 2 2|
    |1 2 3|      |3 3 3|

    |1 1 2 2|      |1 1 1 1 1 1|
    |1 1 2 2|  ->  |2 2 2 2 2 2|
    |1 1 2 2|
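Reading the examples above as batch rows holding feature blocks of a fixed size, the reshape can be sketched in plain Scala (the layout parameters `features` and `size` are assumptions inferred from the examples, not the library's signature):

```scala
// Move from (batch × features*size) to (features × batch*size): output
// row f concatenates feature f's block from every sample in the batch.
// `features` and `size` are hypothetical names for the assumed layout.
def reshapeBatch(in: Vector[Vector[Int]], features: Int, size: Int): Vector[Vector[Int]] =
  Vector.tabulate(features, in.length * size) { (f, j) =>
    in(j / size)(f * size + j % size)
  }
```

With size = 1 this degenerates to a plain matrix transpose, matching the first example.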
- object reshape_batch_backprop extends UFunc
  Reshapes matrix in by transposing the batch. Examples given:
    |1 2 3|      |1 1 1|
    |1 2 3|  <-  |2 2 2|
    |1 2 3|      |3 3 3|

    |1 1 2 2|      |1 1 1 1 1 1|
    |1 1 2 2|  <-  |2 2 2 2 2 2|
    |1 1 2 2|
- object subRowMax extends UFunc
  Subtracts the row maximum from the row elements. Example given:
    |1 2 1|      |-1  0 -1|
    |2 2 2|  ->  | 0  0  0|
    |0 1 0|      |-1  0 -1|
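Over plain Scala vectors the operation is a one-liner (illustrative; the library's UFunc operates on matrices):

```scala
// Subtract each row's maximum from its elements, e.g. to stabilize a
// subsequent exponentiation as in SoftmaxImpl.
def subRowMax(m: Vector[Vector[Double]]): Vector[Vector[Double]] =
  m.map { row => val mx = row.max; row.map(_ - mx) }
```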