Trait that describes an activation function for each layer
Because these activation functions can be shared, we recommend implementing each subclass as an object.
Defines the transformation of a new activation function.
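For illustration, a minimal standalone sketch of this pattern (scalar-valued and hypothetical; the library's actual activation functions operate on matrices), with the derivative taking fx, the already-computed output, as the usage examples in this package suggest:

// Hypothetical scalar sketch of a shared, stateless activation defined as an object.
object ScalarSigmoidSketch {
  def apply(x: Double): Double = 1.0 / (math.exp(-x) + 1.0)
  // derivative takes fx = apply(x), following the convention in the usage examples below
  def derivative(fx: Double): Double = fx * (1.0 - fx)
}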
Algorithm: AdaDelta algorithm
If you use this algorithm in your research, you should add a reference to the AdaDelta technical report.
val algorithm = new AdaDelta(l2decay = 0.0001)
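For reference, a standalone per-weight sketch of the AdaDelta rule from the technical report (Zeiler, 2012). This is a hypothetical scalar helper, not the library's class; the parameter names decay, epsilon, and l2decay are illustrative.

// Hypothetical per-weight sketch of the AdaDelta update rule.
class AdaDeltaSketch(decay: Double = 0.95, epsilon: Double = 1e-6, l2decay: Double = 0.0001) {
  private var avgSqGrad = 0.0   // running average of squared gradients
  private var avgSqDelta = 0.0  // running average of squared updates
  def update(weight: Double, gradient: Double): Double = {
    val g = gradient + l2decay * weight
    avgSqGrad = decay * avgSqGrad + (1 - decay) * g * g
    val delta = -math.sqrt(avgSqDelta + epsilon) / math.sqrt(avgSqGrad + epsilon) * g
    avgSqDelta = decay * avgSqDelta + (1 - decay) * delta * delta
    weight + delta
  }
}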
Algorithm: AdaGrad algorithm.
If you use this algorithm in your research, you should add a reference to the AdaGrad paper.
val algorithm = new AdaGrad(l2decay = 0.0001)
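Likewise, a standalone per-weight sketch of the AdaGrad rule (hypothetical scalar helper; rate and l2decay are illustrative parameter names):

// Hypothetical per-weight sketch of the AdaGrad update rule.
class AdaGradSketch(rate: Double = 0.01, l2decay: Double = 0.0001) {
  private var sumSqGrad = 0.0  // accumulated squared gradients
  def update(weight: Double, gradient: Double): Double = {
    val g = gradient + l2decay * weight
    sumSqGrad += g * g
    weight - rate * g / (math.sqrt(sumSqGrad) + 1e-6)
  }
}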
Trait that describes an objective function for entire network
Because these objective functions can be shared, we recommend implementing each subclass as an object.
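As a standalone sketch of the recommended pattern (scalar-valued and hypothetical; the library's objectives operate on matrices), a stateless objective defined as an object, with the error value and its derivative with respect to the output:

// Hypothetical scalar sketch of a shared objective object.
object ScalarHalfSquaredErrSketch {
  def apply(real: Double, output: Double): Double = 0.5 * (real - output) * (real - output)
  def derivative(real: Double, output: Double): Double = output - real
}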
Type of probability
Defines sugar operations for probability values
Type of scalar
Type of neuron input
Defines sugar operations for ScalarMatrix
Algorithm: Stochastic Gradient Descent
Basic Gradient Descent rule with mini-batch training.
val algorithm = new StochasticGradientDescent(l2decay = 0.0001)
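A standalone sketch of the per-weight rule this entry describes (hypothetical helper): the gradient is averaged over the mini-batch, then the learning rate and L2 decay are applied.

// Hypothetical sketch of one mini-batch SGD step for a single weight.
def sgdStep(weight: Double, batchGradients: Seq[Double],
            rate: Double = 0.01, l2decay: Double = 0.0001): Double = {
  val grad = batchGradients.sum / batchGradients.size
  weight - rate * (grad + l2decay * weight)
}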
Defines sugar operations for a sequence of weights
Trait that describes the algorithm for weight update
Because each weight update requires history, we recommend implementing each subclass as a class rather than a shared object.
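To illustrate why a class is recommended here, a hypothetical scalar sketch of an updater that carries history between calls (not the library's trait):

// Hypothetical scalar sketch: momentum needs per-instance history, so a class is used.
class MomentumSketch(rate: Double = 0.01, momentum: Double = 0.9) {
  private var velocity = 0.0  // history carried across updates
  def update(weight: Double, gradient: Double): Double = {
    velocity = momentum * velocity - rate * gradient
    weight + velocity
  }
}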
Objective Function: Cosine Similarity Error
This function is computationally heavy. If you want a lighter alternative, use DotProductErr.
val output = net(input)
val err = CosineErr(real, output)
val diff = CosineErr.derivative(real, output)
This function returns 1 - cosine similarity, i.e. cosine dissimilarity.
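For reference, a standalone sketch of this quantity on plain Scala sequences (hypothetical helper; the library works on matrices):

// Hypothetical sketch: cosine dissimilarity = 1 - (real . output) / (|real| |output|).
// Assumes both vectors are nonzero.
def cosineDissimilarity(real: Seq[Double], output: Seq[Double]): Double = {
  val dot = real.zip(output).map { case (r, o) => r * o }.sum
  val norm = math.sqrt(real.map(x => x * x).sum) * math.sqrt(output.map(x => x * x).sum)
  1.0 - dot / norm
}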
Objective Function: Sum of Cross-Entropy (Logistic)
val output = net(input)
val err = CrossEntropyErr(real, output)
val diff = CrossEntropyErr.derivative(real, output)
This objective function prefers 0/1 outputs.
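A standalone sketch of the summed logistic cross-entropy this entry refers to (hypothetical helper; assumes output values lie strictly in (0, 1)):

// Hypothetical sketch: sum over components of -[r * log(o) + (1 - r) * log(1 - o)].
def crossEntropy(real: Seq[Double], output: Seq[Double]): Double =
  real.zip(output).map { case (r, o) =>
    -(r * math.log(o) + (1.0 - r) * math.log(1.0 - o))
  }.sum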
Objective Function: Dot-product Error
val output = net(input)
val err = DotProductErr(real, output)
val diff = DotProductErr.derivative(real, output)
This function computes the additive inverse of the dot product, i.e. dot-product dissimilarity.
Activation Function: Hard version of Sigmoid
val fx = HardSigmoid(0.0)
val diff = HardSigmoid.derivative(fx)
sigmoid(x) = 1 / [exp(-x) + 1]
The hard version approximates the sigmoid as a piecewise linear function
(derived from the relationship between tanh and sigmoid, and between tanh and hard tanh).
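Following that relationship (sigmoid(x) = [tanh(x/2) + 1] / 2, with tanh replaced by hard tanh), a standalone sketch of the resulting piecewise linear form is shown below; the library's exact slope and breakpoints may differ.

// Hypothetical sketch: hard sigmoid as a clamped linear function.
def hardSigmoidSketch(x: Double): Double =
  math.max(0.0, math.min(1.0, x / 4.0 + 0.5))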
Activation Function: Hard version of Tanh (Hyperbolic Tangent)
val fx = HardTanh(0.0)
val diff = HardTanh.derivative(fx)
tanh(x) = sinh(x) / cosh(x)
The hard version approximates tanh as a piecewise linear function.
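For reference, a one-line standalone sketch of the piecewise form (hypothetical helper, not the library's object):

// Hypothetical sketch: hard tanh clamps x to [-1, 1].
def hardTanhSketch(x: Double): Double = math.max(-1.0, math.min(1.0, x))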
Activation Function: Tanh (Hyperbolic Tangent)
val fx = HyperbolicTangent(0.0)
val diff = HyperbolicTangent.derivative(fx)
tanh(x) = sinh(x) / cosh(x)
Activation Function: Linear
val fx = Linear(0.0)
val diff = Linear.derivative(fx)
linear(x) = x
Objective Function: Sum of Absolute Error
val output = net(input)
val err = ManhattanErr(real, output)
val diff = ManhattanErr.derivative(real, output)
In mathematics, the L1 distance is also called the Manhattan distance.
Activation Function: Rectifier
val fx = Rectifier(0.0)
val diff = Rectifier.derivative(fx)
rectifier(x) = x if x > 0, otherwise 0
Companion Object of ScalarMatrix
This object defines various shortcuts.
Kryo Serializer for ScalarMatrix.
Activation Function: Sigmoid function
val fx = Sigmoid(0.0)
val diff = Sigmoid.derivative(fx)
sigmoid(x) = 1 / [exp(-x) + 1]
Activation Function: Softplus
val fx = Softplus(0.0)
val diff = Softplus.derivative(fx)
softplus(x) = log[1 + exp(x)]
Objective Function: Sum of Squared Error
val output = net(input)
val err = SquaredErr(real, output)
val diff = SquaredErr.derivative(real, output)
Defines type aliases
Package for various functions.