By default, the computation in a DoubleLayer will be re-evaluated again and again if the DoubleLayer is used by multiple other operations. This behavior is very inefficient if there are diamond dependencies in a neural network. It's wise to use CumulativeDoubleLayers instead of this DoubleLayers plugin in such a neural network.
A plugin that provides differentiable operators on neural networks whose Data and Delta are scala.Double.
Author:
杨博 (Yang Bo)
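
The note above describes why diamond dependencies are expensive with plain DoubleLayers. Below is a minimal sketch of such a diamond, written against the Factory-based plugin style of DeepLearning.scala; the Builtins plugin, the DoubleWeight constructor, and the arithmetic operators are assumptions taken from that API rather than code shown on this page, so adjust the plugin stack to your own setup.

{{{
import com.thoughtworks.feature.Factory
import com.thoughtworks.deeplearning.plugins.Builtins

object DiamondExample {
  // Assumption: `Builtins` mixes in CumulativeDoubleLayers (among other plugins),
  // so the intermediate layer below is evaluated once and its deltas accumulated.
  val hyperparameters = Factory[Builtins].newInstance()
  import hyperparameters.implicits._

  // Assumption: weights are created via the DoubleWeight constructor
  // provided by the DoubleWeights plugin.
  val w = hyperparameters.DoubleWeight(1.0)

  // `shared` is consumed by two branches, forming a diamond dependency:
  //
  //        shared
  //        /    \
  //   branch1  branch2
  //        \    /
  //        output
  val shared = w * 2.0
  val branch1 = shared + 1.0
  val branch2 = shared * 3.0
  val output = branch1 + branch2

  // With plain DoubleLayers, `shared` would be re-evaluated once per consumer
  // during the forward and backward passes; with CumulativeDoubleLayers its
  // value is computed once and the gradients from both branches are summed.
}
}}}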