Lower boundary of the uniform random distribution. Default is 1.0/8.
Upper boundary of the uniform random distribution. Default is 1.0/3.
A Single Shape; does not include the batch dimension.
Build graph: some other modules point to current module
upstream variables
Variable containing current module
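As a sketch of how these graph-building docs fit together, the snippet below wires the layer to an upstream variable via the functional API. The import paths, the Variable constructor, and the from(...) signature are assumptions based on the Analytics Zoo Keras-style API and are not confirmed by this page.

```scala
// Hypothetical sketch: connecting the layer to an upstream Variable.
// Import paths and signatures are assumptions, not taken from this page.
import com.intel.analytics.bigdl.utils.Shape
import com.intel.analytics.zoo.pipeline.api.autograd.Variable
import com.intel.analytics.zoo.pipeline.api.keras.layers.RReLU

val x = Variable[Float](inputShape = Shape(10)) // upstream variable
val y = RReLU[Float]().from(x)                  // Variable containing the current module
```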
A Single Shape; does not include the batch dimension.
Lower boundary of the uniform random distribution. Default is 1.0/8.
Upper boundary of the uniform random distribution. Default is 1.0/3.
(Since version 0.3.0) Please use the recommended saveModule(path, overWrite) instead.
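A minimal usage sketch of the recommended replacement; the path below is a placeholder, and only the path and overWrite parameters named above are assumed:

```scala
// Hypothetical call using the saveModule(path, overWrite) signature named above.
model.saveModule("/tmp/rrelu_model.bigdl", overWrite = true)
```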
Applies the randomized leaky rectified linear unit element-wise to the input.
f(x) = max(0,x) + a * min(0, x) where a ~ U(l, u).
In training mode, negative inputs are multiplied by a factor drawn from a uniform random distribution U(l, u). In evaluation mode, an RReLU behaves like a LeakyReLU with a constant mean factor a = (l + u) / 2. If l == u, an RReLU essentially becomes a LeakyReLU. Regardless of whether it operates in in-place mode, an RReLU internally allocates an input-sized noise tensor to store the random factors for negative inputs. For reference, see [Empirical Evaluation of Rectified Activations in Convolutional Network](http://arxiv.org/abs/1505.00853).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, which does not include the batch dimension).
Remark: This layer is from Torch and wrapped in Keras style.
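To make the training/evaluation behavior above concrete, here is a minimal self-contained Scala sketch of the RReLU rule itself, not the library implementation; the function name is illustrative, and the defaults mirror the l and u parameters documented above.

```scala
import scala.util.Random

// Sketch of f(x) = max(0, x) + a * min(0, x).
// Training: a is drawn per element from U(l, u).
// Evaluation: a is the constant mean (l + u) / 2.
def rrelu(x: Array[Float], l: Float = 1.0f / 8, u: Float = 1.0f / 3,
          training: Boolean = true, rng: Random = new Random()): Array[Float] =
  x.map { v =>
    val a = if (training) l + rng.nextFloat() * (u - l) else (l + u) / 2
    math.max(0f, v) + a * math.min(0f, v)
  }

// Negative inputs are randomly scaled in training, deterministically in evaluation.
println(rrelu(Array(-2f, -1f, 0f, 1f)).mkString(", "))
println(rrelu(Array(-2f, -1f, 0f, 1f), training = false).mkString(", "))
```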
The numeric type of the parameters (e.g. weight, bias). Only Float/Double are supported now.