com.thoughtworks.deeplearning.DifferentiableAny
[use case]
import com.thoughtworks.deeplearning.DifferentiableAny._
def composeNetwork(implicit thisLayer: INDArray @Symbolic)(anotherLayer: INDArray @Symbolic) = {
  thisLayer.compose(anotherLayer)
}
Returns the result of inputData's forward pass.
import com.thoughtworks.deeplearning.DifferentiableAny._
def composeNetwork(implicit inputData: INDArray @Symbolic) = ???
val predictor = composeNetwork
predictor.predict(testData)
Updates the weights embedded in anyLayer according to the result of inputData's backward pass.
import com.thoughtworks.deeplearning.DifferentiableAny._
def composeNetwork(implicit input: INDArray @Symbolic) = ???
val yourNetwork = composeNetwork
yourNetwork.train(testData)
Returns a new Layer that wraps the result of anyLayer's forward pass and invokes hook(anyLayer.forward(input).value).
In DeepLearning.scala, operations are not run immediately. Instead, the network is first built with placeholders; only when the entire network runs does real data replace the placeholders. So if you want to inspect a layer's intermediate state, you need to use withOutputDataHook.
import com.thoughtworks.deeplearning.DifferentiableAny._
(yourLayer: INDArray @Symbolic).withOutputDataHook { data =>
  println(data)
}
A type class that feeds the result of a forward pass into a backward pass. To train a layer, an implementation of Trainable, parameterized with the types of the layer's Input and Output, is required. This type class is required by train.
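As a rough illustration of the type-class pattern involved, a Trainable parameterized on a layer's input and delta types might be sketched as below. This is a simplified, hypothetical shape (the ScalingLayer example and its method signatures are inventions for illustration; the real Trainable in DeepLearning.scala carries more machinery):

```scala
// A minimal, hypothetical sketch of the Trainable type-class pattern.
// It only illustrates how a type class parameterized with a layer's
// input and delta types can feed a forward result into a backward pass.
trait Trainable[Data, Delta] {
  def forward(input: Data): Data          // produce an output from input data
  def backward(outputDelta: Delta): Unit  // update weights from the output's delta
}

object Example {
  // A trivial instance for Double-valued "layers": forward scales the input
  // by a weight, backward nudges the weight by the received delta.
  class ScalingLayer(var weight: Double) extends Trainable[Double, Double] {
    def forward(input: Double): Double = weight * input
    def backward(outputDelta: Double): Unit = weight -= outputDelta
  }

  def main(args: Array[String]): Unit = {
    val layer = new ScalingLayer(2.0)
    println(layer.forward(3.0)) // 6.0
    layer.backward(0.5)
    println(layer.weight)       // 1.5
  }
}
```

In this toy shape, train would simply call forward, derive a delta from the output, and pass it to backward.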