A plugin that enables all other DeepLearning.scala built-in plugins.
杨博 (Yang Bo)
When creating a Builtins from com.thoughtworks.feature.Factory,
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[plugins.Builtins].newInstance()
and importing everything in implicits,
import hyperparameters.implicits._
then all DeepLearning.scala built-in features should be enabled.
import org.nd4j.linalg.factory.Nd4j
import org.nd4j.linalg.api.ndarray.INDArray
val numberOfInputFeatures = 8
val numberOfOutputFeatures = 1
val initialValueOfWeight: INDArray = Nd4j.rand(numberOfInputFeatures, numberOfOutputFeatures)
val weight: hyperparameters.INDArrayWeight = hyperparameters.INDArrayWeight(initialValueOfWeight)
Creating neural network layers,
def fullyConnectedLayer(input: INDArray): hyperparameters.INDArrayLayer = {
input dot weight
}
or loss functions:
def hingeLoss(scores: hyperparameters.INDArrayLayer, label: INDArray): hyperparameters.DoubleLayer = {
  hyperparameters.max(0.0, 1.0 - label * scores).sum
}
Training:
import scalaz.std.stream._
import com.thoughtworks.future._
import com.thoughtworks.each.Monadic._
val batchSize = 4
val numberOfIterations = 10
val input = Nd4j.rand(batchSize, numberOfInputFeatures)
val label = Nd4j.rand(batchSize, numberOfOutputFeatures)
@monadic[Future]
def train: Future[Stream[Double]] = {
  for (iteration <- (0 until numberOfIterations).toStream) yield {
    hingeLoss(fullyConnectedLayer(input), label).train.each
  }
}
When the training is done, the loss of the last iteration should be no more than the loss of the first iteration:
train.map { lossesByIteration =>
lossesByIteration.last should be <= lossesByIteration.head
}
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Double.
杨博 (Yang Bo)
Given a DoubleWeight,
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.DoubleWeight(10)
then the training result should be applied to it
weight1.train.map { result =>
  result should be(10.0)
  weight1.data should be < 10.0
}
Given two DoubleWeights,
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.DoubleWeight(10)
val weight2 = hyperparameters.DoubleWeight(300)
when adding them together,
val weight1PlusWeight2 = weight1 + weight2
then the training result should be applied to both weights
weight1PlusWeight2.train.map { result =>
  result should be(310.0)
  weight2.data should be < 300.0
  weight1.data should be < 10.0
}
Unlike DoubleLayers, the DoubleLayer in this CumulativeDoubleLayers shares Tapes created in the forward pass among all its dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
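For example (a minimal sketch; the value names shared and diamond are ours), a layer referenced twice forms a diamond dependency, and under CumulativeDoubleLayers its forward pass runs only once:

import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights].newInstance()
import hyperparameters.implicits._

val weight = hyperparameters.DoubleWeight(2)

// `shared` feeds both operands of the multiplication below, forming a
// diamond. With CumulativeDoubleLayers its forward Tape is created once
// and reused; with plain DoubleLayers it would be evaluated twice.
val shared: hyperparameters.DoubleLayer = weight + weight
val diamond: hyperparameters.DoubleLayer = shared * shared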
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Float.
杨博 (Yang Bo)
Given a FloatWeight,
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.FloatWeight(10)
then the training result should be applied to it
weight1.train.map { result =>
  result should be(10.0f)
  weight1.data should be < 10.0f
}
Given two FloatWeights,
import com.thoughtworks.deeplearning.plugins._
import com.thoughtworks.feature.Factory
val hyperparameters = Factory[FloatTraining with ImplicitsSingleton with Operators with CumulativeFloatLayers with FloatWeights].newInstance()
import hyperparameters.implicits._
val weight1 = hyperparameters.FloatWeight(10)
val weight2 = hyperparameters.FloatWeight(300)
when adding them together,
val weight1PlusWeight2 = weight1 + weight2
then the training result should be applied to both weights
weight1PlusWeight2.train.map { result =>
  result should be(310.0f)
  weight2.data should be < 300.0f
  weight1.data should be < 10.0f
}
Unlike FloatLayers, the FloatLayer in this CumulativeFloatLayers shares Tapes created in the forward pass among all its dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
A plugin that provides differentiable operators on neural networks whose Data and Delta is org.nd4j.linalg.api.ndarray.INDArray.
杨博 (Yang Bo)
Unlike INDArrayLayers, the INDArrayLayer in this CumulativeINDArrayLayers shares Tapes created in the forward pass among all its dependencies, avoiding re-evaluation in the case of diamond dependencies in a neural network.
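As a sketch of the same idea (reusing hyperparameters and weight from the Builtins example above; diamond is a hypothetical helper), the hidden layer below is consumed twice, yet its forward pass runs only once per iteration:

def diamond(input: INDArray): hyperparameters.INDArrayLayer = {
  // `hidden` feeds both sides of the addition. With CumulativeINDArrayLayers
  // its forward Tape is shared, so `input dot weight` is not re-evaluated
  // for the second use.
  val hidden = input dot weight
  hidden + hidden
}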
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Double.
杨博 (Yang Bo)
By default, the computation in a DoubleLayer will be re-evaluated again and again if the DoubleLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeDoubleLayers instead of this DoubleLayers in such a neural network.
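For example (a hedged sketch, assuming a hyperparameters instance built with DoubleLayers rather than CumulativeDoubleLayers, and a DoubleWeight named weight):

// Under plain DoubleLayers, `shared` is not cached between uses:
// each of the two references below re-runs `weight + weight` in the
// forward pass.
val shared: hyperparameters.DoubleLayer = weight + weight
val diamond: hyperparameters.DoubleLayer = shared * shared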
A plugin that enables scala.Double in neural networks.
A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Double.
杨博 (Yang Bo)
A plugin to create scala.Double weights.
杨博 (Yang Bo)
A custom optimization algorithm for updating a DoubleWeight can be implemented by creating a plugin that provides an overridden DoubleOptimizer with an overridden DoubleOptimizer.delta.
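For example, a fixed learning rate might be implemented as in the following sketch (the trait name FixedLearningRate and the field learningRate are our own; the inner-Api refinement follows the plugin pattern used throughout DeepLearning.scala):

import com.thoughtworks.deeplearning.plugins._

trait FixedLearningRate extends DoubleWeights {
  val learningRate: Double

  trait DoubleOptimizerApi extends super.DoubleOptimizerApi { this: DoubleOptimizer =>
    // `super.delta` is the raw gradient; scale it before it is applied
    // to the weight's data.
    override def delta: Double = super.delta * learningRate
  }
  override type DoubleOptimizer <: DoubleOptimizerApi with Optimizer
}

The plugin can then be mixed into the Factory type and configured through newInstance, e.g. Factory[DoubleTraining with ImplicitsSingleton with Operators with CumulativeDoubleLayers with DoubleWeights with FixedLearningRate].newInstance(learningRate = 0.01).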
A plugin that provides differentiable operators on neural networks whose Data and Delta is scala.Float.
杨博 (Yang Bo)
By default, the computation in a FloatLayer will be re-evaluated again and again if the FloatLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeFloatLayers instead of this FloatLayers in such a neural network.
A plugin that enables scala.Float in neural networks.
A DeepLearning.scala plugin that enables the train method for neural networks whose loss is a scala.Float.
杨博 (Yang Bo)
A plugin to create scala.Float weights.
杨博 (Yang Bo)
A custom optimization algorithm for updating a FloatWeight can be implemented by creating a plugin that provides an overridden FloatOptimizer with an overridden FloatOptimizer.delta.
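The pattern mirrors the Double version above (again a sketch with our own names):

import com.thoughtworks.deeplearning.plugins._

trait FloatFixedLearningRate extends FloatWeights {
  val learningRate: Float

  trait FloatOptimizerApi extends super.FloatOptimizerApi { this: FloatOptimizer =>
    // Scale the raw gradient by a fixed learning rate.
    override def delta: Float = super.delta * learningRate
  }
  override type FloatOptimizer <: FloatOptimizerApi with Optimizer
}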
A plugin that provides differentiable operators on neural networks whose Data and Delta is org.nd4j.linalg.api.ndarray.INDArray.
杨博 (Yang Bo)
By default, the computation in an INDArrayLayer will be re-evaluated again and again if the INDArrayLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeINDArrayLayers instead of this INDArrayLayers in such a neural network.
A plugin that enables org.nd4j.linalg.api.ndarray.INDArray in neural networks.
A DeepLearning.scala plugin that enables the train method for neural networks whose loss is an org.nd4j.linalg.api.ndarray.INDArray.
杨博 (Yang Bo)
A plugin to create org.nd4j.linalg.api.ndarray.INDArray weights.
杨博 (Yang Bo)
A custom optimization algorithm for updating an INDArrayWeight can be implemented by creating a plugin that provides an overridden INDArrayOptimizer with an overridden INDArrayOptimizer.delta.
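For example (a sketch with our own names, following the same pattern as the Double and Float versions above):

import org.nd4j.linalg.api.ndarray.INDArray
import com.thoughtworks.deeplearning.plugins._

trait INDArrayLearningRate extends INDArrayWeights {
  val learningRate: Double

  trait INDArrayOptimizerApi extends super.INDArrayOptimizerApi { this: INDArrayOptimizer =>
    // Scale the raw gradient element-wise by the learning rate.
    override def delta: INDArray = super.delta mul learningRate
  }
  override type INDArrayOptimizer <: INDArrayOptimizerApi with Optimizer
}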
A plugin that creates the singleton instance of implicits.
A plugin that enables Layer in neural networks.
A plugin that logs uncaught exceptions raised from Layer and Weight.
A plugin that contains definitions of polymorphic functions and methods.
The implementations of polymorphic functions and methods can be found in FloatLayers.Implicits, DoubleLayers.Implicits and INDArrayLayers.Implicits.
杨博 (Yang Bo)
See Shapeless's documentation for the underlying mechanism of polymorphic functions.
A DeepLearning.scala plugin that enables methods defined in DeepLearning.Ops for neural networks.
杨博 (Yang Bo)
A plugin that enables Weight in neural networks.
杨博 (Yang Bo)
Author: 杨博 (Yang Bo)