an element-wise abs operation
measures the mean absolute value of the element-wise difference between input and target
adds a bias term to the input data
adding a constant
This loss function measures the Binary Cross Entropy between the target and the output:

  loss(o, t) = - 1/n sum_i (t[i] * log(o[i]) + (1 - t[i]) * log(1 - o[i]))

or in the case of the weights argument being specified:

  loss(o, t) = - 1/n sum_i weights[i] * (t[i] * log(o[i]) + (1 - t[i]) * log(1 - o[i]))
By default, the losses are averaged for each mini-batch over observations as well as over dimensions. However, if the field sizeAverage is set to false, the losses are instead summed.
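A minimal pure-Scala sketch of the formula above, independent of the library API (the function name is illustrative only):

  // Sketch of the BCE formula; each o(i) must lie in (0, 1).
  def bceLoss(o: Array[Double], t: Array[Double],
              weights: Option[Array[Double]] = None,
              sizeAverage: Boolean = true): Double = {
    require(o.length == t.length)
    val total = -o.indices.map { i =>
      val term = t(i) * math.log(o(i)) + (1 - t(i)) * math.log(1 - o(i))
      weights.map(w => w(i) * term).getOrElse(term)
    }.sum
    if (sizeAverage) total / o.length else total
  }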
numeric type
This layer implements Batch Normalization as described in the paper: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" by Sergey Ioffe, Christian Szegedy https://arxiv.org/abs/1502.03167
This implementation is useful for inputs NOT coming from convolution layers. For convolution layers, use nn.SpatialBatchNormalization.
The operation implemented is:

           x - mean(x)
  y = ---------------------- * gamma + beta
       standard-deviation(x)

where gamma and beta are learnable parameters. The learning of gamma and beta is optional.
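A pure-Scala sketch of this operation for a single feature column (the eps guard against division by zero is an illustrative assumption, not necessarily the layer's exact parameter):

  def batchNorm(x: Array[Double], gamma: Double, beta: Double,
                eps: Double = 1e-5): Array[Double] = {
    val mean = x.sum / x.length
    val variance = x.map(v => (v - mean) * (v - mean)).sum / x.length
    val std = math.sqrt(variance + eps)
    x.map(v => (v - mean) / std * gamma + beta)  // y = (x - mean)/std * gamma + beta
  }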
numeric type
This layer implements a bidirectional recurrent neural network
numeric type
a bilinear transformation with sparse inputs. The input given in forward(input) is a table containing both inputs x_1 and x_2, which are tensors of size N x inputDimension1 and N x inputDimension2, respectively.
Bottle allows varying dimensionality input to be forwarded through any module that accepts input of nInputDim dimensions, and generates output of nOutputDim dimensions.
This layer has a bias tensor with the given size. The bias will be added element-wise to the input tensor. If the element number of the bias tensor matches the input tensor, a simple element-wise addition will be done. Otherwise the bias will be expanded to the same size as the input. The expand means repeating along unmatched singleton dimensions (if some unmatched dimension isn't a singleton dimension, an error will be reported). If the input is a batch, a singleton dimension will be added to the first dimension before the expand.
numeric type
Merge the input tensors in the input table by element-wise adding them together. The input table is actually an array of tensors with the same size.
Numeric type. Only float/double are supported for now
Takes a table with two Tensors and returns the component-wise division between them.
Takes a table of Tensors and outputs the max of all of them.
Takes a table of Tensors and outputs the min of all of them.
This layer has a weight tensor with the given size. The weight will be multiplied element-wise with the input tensor. If the element number of the weight tensor matches the input tensor, a simple element-wise multiply will be done. Otherwise the weight will be expanded to the same size as the input. The expand means repeating along unmatched singleton dimensions (if some unmatched dimension isn't a singleton dimension, an error will be reported). If the input is a batch, a singleton dimension will be added to the first dimension before the expand.
numeric type
Takes a table of Tensors and outputs the multiplication of all of them.
Takes a table with two Tensors and returns the component-wise subtraction between them.
hidden sizes in the Cell, whose length is the number of hiddens. The elements correspond to the hidden sizes of the returned hiddens. E.g. for RnnCell it should be Array(hiddenSize); for LSTM it should be Array(hiddenSize, hiddenSize), because at each time step an LSTM returns two hiddens, h and c, in that order, which have the same size.
A kind of hard tanh activation function with integer min and max
numeric type
The negative log likelihood criterion. It is useful to train a classification problem with n classes. If provided, the optional argument weights should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward() is expected to contain log-probabilities of each class: input has to be a 1D Tensor of size n. Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftMax layer in the last layer of your neural network. You may use CrossEntropyCriterion instead, if you prefer not to add an extra layer to your network. This criterion expects a class index (1 to the number of class) as target when calling forward(input, target) and backward(input, target).
The loss can be described as:

  loss(x, class) = -x[class]

or in the case of the weights argument it is specified as follows:

  loss(x, class) = -weights[class] * x[class]

Due to the behaviour of the backend code, it is necessary to set sizeAverage to false when calculating losses in non-batch mode.
By default, the losses are averaged over observations for each minibatch. However, if the field sizeAverage is set to false, the losses are instead summed for each minibatch.
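A pure-Scala sketch of the single-sample loss above (names are illustrative; logProbs holds log-probabilities, e.g. from LogSoftMax, and target is a 1-based class index):

  def classNLL(logProbs: Array[Double], target: Int,
               weights: Option[Array[Double]] = None): Double = {
    val w = weights.map(_(target - 1)).getOrElse(1.0)
    -w * logProbs(target - 1)  // loss = -weights[class] * x[class]
  }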
numeric type
ClassSimplexCriterion implements a criterion for classification. It learns an embedding per class, where each class' embedding is a point on an (N-1)-dimensional simplex, where N is the number of classes.
Concat concatenates the output of one layer of "parallel" modules along the provided dimension: they take the same inputs, and their output is concatenated.
                +-----------+
           +----> module1 -----+
           |    |           |  |
input -----+----> module2 -----+----> output
           |    |           |  |
           +----> module3 -----+
                +-----------+
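A minimal usage sketch, assuming the BigDL-style Scala API (package paths and constructor forms are assumptions; check the imports in your version):

  import com.intel.analytics.bigdl.nn.{Concat, Linear}

  val branches = Concat[Float](2)      // concatenate outputs along dimension 2
  branches.add(Linear[Float](10, 5))
  branches.add(Linear[Float](10, 7))
  // A mini-batch of shape N x 10 produces an N x 12 output (5 + 7 columns).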
ConcatTable is a container module like Concat. It applies the input to each member module; the input can be a tensor or a table.
ConcatTable usually works with CAddTable and CMulTable to implement element-wise add/multiply on the outputs of two modules.
Container is an abstract AbstractModule class which declares methods defined in all containers. A container usually contains some other modules in the modules variable. It overrides many module methods such that calls are propagated to the contained modules.
Input data type
Output data type
Numeric type. Only float/double are supported for now
used to make input, gradOutput both contiguous
Cosine calculates the cosine similarity of the input to k mean centers.
The input given in forward(input) must be either a vector (1D tensor) or a matrix (2D tensor). If the input is a vector, it must have the size of inputSize. If it is a matrix, then each row is assumed to be an input sample of the given batch (the number of rows means the batch size and the number of columns should be equal to the inputSize).
outputs the cosine distance between inputs
Creates a criterion that measures the loss given an input x = {x1, x2}, a table of two Tensors, and a Tensor label y with values 1 or -1.
This criterion combines LogSoftMax and ClassNLLCriterion in one single class.
The Kullback–Leibler divergence criterion
This is a simple table layer which takes a table of two tensors as input and calculates the dot product between them as output
Dropout masks (sets to zero) parts of the input using a Bernoulli distribution. Each input element has a probability initP of being dropped. If scale is set, the outputs are scaled by a factor of 1/(1-initP) during training. During evaluation, the output is the same as the input.
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) [http://arxiv.org/pdf/1511.07289.pdf]
This module is for debugging purposes; it can print the activation and gradient in your model topology
Outputs the Euclidean distance of the input to outputSize centers
Numeric type. Only float/double are supported for now
Applies element-wise exp to input tensor.
This is a table layer which takes an arbitrarily deep table of Tensors (potentially nested) as input and produces a table of Tensors without any nested tables
Gated Recurrent Units architecture. The first input in the sequence uses zero values for the cell and hidden state
Ref. 1. http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/
2. https://github.com/Element-Research/rnn/blob/master/GRU.lua
It is a simple module that preserves the input, but takes the gradient from the subsequent layer, multiplies it by -lambda and passes it to the preceding layer. This can be used to maximise an objective function whilst using gradient descent, as described in ["Domain-Adversarial Training of Neural Networks" (http://arxiv.org/abs/1505.07818)]
This is a transfer layer which applies the hard shrinkage function element-wise to the input Tensor. The parameter lambda is set to 0.5 by default.

         ⎧ x, if x >  lambda
  f(x) = ⎨ x, if x < -lambda
         ⎩ 0, otherwise
Applies HardTanh to each element of input. HardTanh is defined as:

         ⎧ maxValue, if x > maxValue
  f(x) = ⎨ minValue, if x < minValue
         ⎩ x,        otherwise
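A one-function pure-Scala sketch of this activation (the default bounds of -1 and 1 are an assumption):

  def hardTanh(x: Double, minValue: Double = -1.0, maxValue: Double = 1.0): Double =
    if (x > maxValue) maxValue
    else if (x < minValue) minValue
    else x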
Creates a criterion that measures the loss given an input x which is a 1-dimensional vector and a label y (1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
                   ⎧ x_i,                   if y_i ==  1
  loss(x, y) = 1/n ⎨
                   ⎩ max(0, margin - x_i),  if y_i == -1
If x and y are n-dimensional Tensors, the sum operation still operates over all the elements, and divides by n (this can be avoided if one sets the internal variable sizeAverage to false). The margin has a default value of 1, or can be set in the constructor.
Identity just returns the input as output. It's useful in some parallel containers to get the original input.
Applies the Tensor index operation along the given dimension.
Reshape with the support of inferred size. Positive numbers are used directly, setting the corresponding dimension of the output tensor. In addition, two special values are accepted: 0 means "copy the respective dimension of the input", i.e., if the input has 2 as its 1st dimension, the output will have 2 as its 1st dimension as well; -1 stands for "infer this from the other dimensions": this dimension is calculated to keep the overall element count the same as in the input. At most one -1 can be used in a reshape operation.
For example, (4, 5, 6, 7) -> InferReshape(4, 0, 3, -1) -> (4, 5, 3, 14), with the 1st and 3rd dims the same as the given sizes, the 2nd dim the same as the input, and the inferred dim being 14
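A pure-Scala sketch of the size-resolution rule described above (the helper name is illustrative):

  // 0 copies the corresponding input dimension; -1 is inferred from the
  // remaining element count; positive values are used directly.
  def inferSizes(input: Array[Int], spec: Array[Int]): Array[Int] = {
    val resolved = spec.zipWithIndex.map {
      case (0, i) => input(i)
      case (s, _) => s
    }
    val known = resolved.filter(_ != -1).product
    resolved.map(s => if (s == -1) input.product / known else s)
  }
  // inferSizes(Array(4, 5, 6, 7), Array(4, 0, 3, -1)) == Array(4, 5, 3, 14)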
type
Initialization method to initialize bias and weight
It is a table module which takes a table of Tensors as input and outputs a Tensor by joining them together along the specified dimension.
The input to this layer is expected to be a tensor, or a batch of tensors; when using mini-batch, a batch of sample tensors will be passed to the layer and the user needs to specify the number of dimensions of each sample tensor in the batch using nInputDims.
compute L1 norm for input, and sign of input
Creates a criterion that measures the loss given an input x = {x1, x2}, a table of two Tensors, and a label y (1 or -1):
adds an L1 penalty to an input (for sparsity). L1Penalty is an inline module that in its forward propagation copies the input Tensor directly to the output, and computes an L1 loss of the latent state (input) and stores it in the module's loss field. During backward propagation: gradInput = gradOutput + gradLoss.
Long Short Term Memory architecture.
Ref.
  A. http://arxiv.org/pdf/1303.5778v1 (blueprint for this module)
  B. http://web.eecs.utk.edu/~itamar/courses/ECE-692/Bobby_paper1.pdf
  C. http://arxiv.org/pdf/1503.04069v1.pdf
  D. https://github.com/wojzaremba/lstm
Long Short Term Memory architecture with peephole.
Ref.
  A. http://arxiv.org/pdf/1303.5778v1 (blueprint for this module)
  B. http://web.eecs.utk.edu/~itamar/courses/ECE-692/Bobby_paper1.pdf
  C. http://arxiv.org/pdf/1503.04069v1.pdf
  D. https://github.com/wojzaremba/lstm
It is a transfer module that applies LeakyReLU, whose parameter negval sets the slope of the negative part. LeakyReLU is defined as: f(x) = max(0, x) + negval * min(0, x)
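A pure-Scala sketch of the formula (the default slope value is illustrative):

  def leakyReLU(x: Double, negval: Double = 0.01): Double =
    math.max(0.0, x) + negval * math.min(0.0, x)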
The Linear module applies a linear transformation to the input data, i.e. y = Wx + b. The input given in forward(input) must be either a vector (1D tensor) or a matrix (2D tensor). If the input is a vector, it must have the size of inputSize. If it is a matrix, then each row is assumed to be an input sample of the given batch (the number of rows means the batch size and the number of columns should be equal to the inputSize).
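A minimal usage sketch, assuming the BigDL-style Scala API (package paths and the rand() initializer are assumptions):

  import com.intel.analytics.bigdl.nn.Linear
  import com.intel.analytics.bigdl.tensor.Tensor

  val layer = Linear[Float](3, 2)          // inputSize = 3, outputSize = 2
  val sample = Tensor[Float](3).rand()     // a single input vector
  val batch  = Tensor[Float](4, 3).rand()  // a batch of 4 samples
  val out1 = layer.forward(sample)         // size 2
  val out2 = layer.forward(batch)          // size 4 x 2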
The Log module applies a log transformation to the input data
This class is a transform layer corresponding to the log-sigmoid function: f(x) = log(1 / (1 + e^(-x)))
The LogSoftMax module applies a LogSoftMax transformation to the input data which is defined as:

  f_i(x) = log(1/a * exp(x_i)), where a = sum_j exp(x_j)
The input given in forward(input) must be either a vector (1D tensor) or a matrix (2D tensor).
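A pure-Scala sketch of the formula in its numerically stable form, log(exp(x_i)/a) = x_i - shift - log(sum_j exp(x_j - shift)) with shift = max_j(x_j):

  def logSoftMax(x: Array[Double]): Array[Double] = {
    val shift = x.max
    val logSum = math.log(x.map(v => math.exp(v - shift)).sum)
    x.map(v => v - shift - logSum)
  }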
a convolution of width 1, commonly used for word embeddings;
Module to perform matrix multiplication on two mini-batch inputs, producing a mini-batch.
The mean squared error criterion, e.g. input: a, target: b, total elements: n:

  loss(a, b) = 1/n \sum_i |a_i - b_i|^2

sizeAverage is true by default, to divide the sum of squared errors by n.
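A pure-Scala sketch of the formula above:

  def mse(a: Array[Double], b: Array[Double], sizeAverage: Boolean = true): Double = {
    require(a.length == b.length)
    val sum = a.zip(b).map { case (ai, bi) => (ai - bi) * (ai - bi) }.sum
    if (sizeAverage) sum / a.length else sum
  }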
It is a module to perform matrix vector multiplication on two mini-batch inputs, producing a mini-batch.
This class is a container for a single module which will be applied to all input elements. The member module is cloned as necessary to process all input elements.
Creates a criterion that optimizes a two-class classification hinge loss (margin-based loss) between input x (a Tensor of dimension 1) and output y.
Creates a criterion that measures the loss given an input x = {x1, x2}, a table of two Tensors of size 1 (they contain only scalars), and a label y (1 or -1). In batch mode, x is a table of two Tensors of size batchsize, and y is a Tensor of size batchsize containing 1 or -1 for each corresponding pair of elements in the input Tensor. If y == 1 then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y == -1.
Performs a torch.MaskedSelect on a Tensor. The mask is supplied as a tabular argument with the input on the forward and backward passes.
Applies a max operation over dimension dim
It is a simple layer which applies a mean operation over the given dimension. When nInputDims is provided, the input will be considered as batches. Then the mean operation will be applied in (dimension + 1).
The input to this layer is expected to be a tensor, or a batch of tensors; when using mini-batch, a batch of sample tensors will be passed to the layer and the user needs to specify the number of dimensions of each sample tensor in the batch using nInputDims.
Applies a min operation over dimension dim.
Creates a module that takes a table {gater, experts} as input and outputs the mixture of experts (a Tensor or table of Tensors) using a gater Tensor. When dim is provided, it specifies the dimension of the experts Tensor that will be interpolated (or mixed). Otherwise, the experts should take the form of a table of Tensors. This Module works for experts of dimension 1D or more, and for a 1D or 2D gater, i.e. for single examples or mini-batches.
Numeric type. Only float/double are supported for now
multiplies the incoming data by a single scalar factor
Multiplies input Tensor by a (non-learnable) scalar constant. This module is sometimes useful for debugging purposes.
a weighted sum of other criterions each applied to the same input and target;
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x and output y (which is a Tensor of target class indices)
A MultiLabel multiclass criterion based on sigmoid:
the loss is: l(x,y) = - sum_i (y[i] * log(p[i]) + (1 - y[i]) * log(1 - p[i])) where p[i] = exp(x[i]) / (1 + exp(x[i]))
and with weights: l(x,y) = - sum_i weights[i] (y[i] * log(p[i]) + (1 - y[i]) * log (1 - p[i]))
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x and output y (which is a target class index).
Narrow is application of narrow operation in a module. The module further supports a negative length in order to handle inputs with an unknown size.
Creates a module that takes a table as input and outputs the subtable starting at index offset and having length elements (defaults to 1 element). The elements can be either a table or a Tensor. If length is negative, it means selecting the elements from offset up to the element located at abs(length) from the last element of the input.
Non-Maximum Suppression (NMS) for object detection. The goal of NMS is to solve the problem that groups of several detections appear near the real location, ideally obtaining only one detection per object.
Normalizes the input Tensor to have unit L_p norm. The smoothing parameter eps prevents division by zero when the input contains all zero elements (default = 1e-10). p can be Double.MaxValue
Applies parametric ReLU, which parameter varies the slope of the negative part.
PReLU: f(x) = max(0, x) + a * min(0, x)
nOutputPlane's default value is 0, which means using PReLU in the shared version, with only one parameter.
Notice: Please don't use weight decay on this.
This module adds pad units of padding to dimension dim of the input. If pad is negative, padding is added to the left, otherwise, it is added to the right of the dimension.
The input to this layer is expected to be a tensor, or a batch of tensors; when using mini-batch, a batch of sample tensors will be passed to the layer and the user needs to specify the number of dimensions of each sample tensor in the batch using nInputDims.
It is a module that takes a table of two vectors as input and outputs the distance between them using the p-norm. The input given in forward(input) is a Table that contains two tensors which must be either a vector (1D tensor) or a matrix (2D tensor). If the input is a vector, it must have the size of inputSize. If it is a matrix, then each row is assumed to be an input sample of the given batch (the number of rows means the batch size and the number of columns should be equal to the inputSize).
ParallelCriterion is a weighted sum of other criterions each applied to a different input and target. Set repeatTarget = true to share the target for criterions.
Use the add(criterion[, weight]) method to add a criterion, where weight is a scalar (default 1).
It is a container module that applies the i-th member module to the i-th input, and outputs a table of the resulting outputs
Apply an element-wise power operation with scale and shift.
f(x) = (shift + scale * x) ^ power
Applies the randomized leaky rectified linear unit (RReLU) element-wise to the input Tensor, thus outputting a Tensor of the same dimension. Informally, the RReLU is also known as the 'insanity' layer. RReLU is defined as:

  f(x) = max(0, x) + a * min(0, x), where a ~ U(l, u)

In training mode, negative inputs are multiplied by a factor a drawn from a uniform random distribution U(l, u). In evaluation mode, a RReLU behaves like a LeakyReLU with a constant mean factor a = (l + u) / 2. By default, l = 1/8 and u = 1/3. If l == u, a RReLU effectively becomes a LeakyReLU.

Regardless of operating in in-place mode, a RReLU will internally allocate an input-sized noise tensor to store the random factors for negative inputs. The backward() operation assumes that forward() has been called before. For reference see [Empirical Evaluation of Rectified Activations in Convolutional Network](http://arxiv.org/abs/1505.00853).
data type
Applies the rectified linear unit (ReLU) function element-wise to the input Tensor; thus the output is a Tensor of the same dimension. ReLU is defined as: f(x) = max(0, x)
Same as ReLU except that the rectifying function f(x) saturates at x = 6. ReLU6 is defined as: f(x) = min(max(0, x), 6)
Recurrent module is a container of rnn cells. Different types of rnn cells can be added using the add() function.
Replicate repeats input nFeatures times along its dim dimension.
Notice: No memory copy, it sets the stride along the dim-th dimension to zero.
The forward(input) reshapes the input tensor into a size(0) * size(1) * ... tensor, taking the elements row-wise.
Reverse the input w.r.t. the given dimension. The input can be a Tensor or Table.
Numeric type. Only float/double are supported for now
Implementation of vanilla recurrent neural network cell.
  i2h: weight matrix of input to hidden units
  h2h: weight matrix of hidden units to themselves through time
The updating is defined as:
  h_t = f(i2h * x_t + h2h * h_{t-1})
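A pure-Scala sketch of one update step, with f = tanh as an illustrative choice and row-major Array matrices:

  def rnnStep(i2h: Array[Array[Double]], h2h: Array[Array[Double]],
              x: Array[Double], hPrev: Array[Double]): Array[Double] = {
    def matVec(m: Array[Array[Double]], v: Array[Double]): Array[Double] =
      m.map(row => row.zip(v).map { case (a, b) => a * b }.sum)
    // h_t = f(i2h * x_t + h2h * h_{t-1})
    matVec(i2h, x).zip(matVec(h2h, hPrev)).map { case (a, b) => math.tanh(a + b) }
  }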
Region of interest pooling. RoIPooling uses max pooling to convert the features inside any valid region of interest into a small feature map with a fixed spatial extent of pooledH × pooledW (e.g., 7 × 7). An RoI is a rectangular window into a conv feature map. Each RoI is defined by a four-tuple (x1, y1, x2, y2) that specifies its top-left corner (x1, y1) and its bottom-right corner (x2, y2). RoI max pooling works by dividing the h × w RoI window into a pooledH × pooledW grid of sub-windows of approximate size h/pooledH × w/pooledW and then max-pooling the values in each sub-window into the corresponding output grid cell. Pooling is applied independently to each feature map channel.
Numeric type. Only float/double are supported for now
Scale is the combination of CMul and CAdd. It computes the element-wise product of the input and the weight, with the shape of the weight "expanded" to match the shape of the input. Similarly, the bias is expanded and added element-wise.
Numeric type. Only float/double are supported for now
A simple layer selecting an index of the input tensor in the given dimension
Creates a module that takes a table as input and outputs the element at index index (positive or negative). This can be either a table or a Tensor. The gradients of the non-index elements are zeroed Tensors of the same size. This is true regardless of the depth of the encapsulated Tensor, as the function used internally to do so is recursive.
Sequential provides a means to plug layers together in a feed-forward fully connected manner.
Applies the Sigmoid function element-wise to the input Tensor, thus outputting a Tensor of the same dimension. Sigmoid is defined as: f(x) = 1 / (1 + exp(-x))
Creates a criterion that can be thought of as a smooth version of the AbsCriterion. It uses a squared term if the absolute element-wise error falls below 1. It is less sensitive to outliers than the MSECriterion and in some cases prevents exploding gradients (e.g. see "Fast R-CNN" paper by Ross Girshick).
                          ⎧ 0.5 * (x_i - y_i)^2,  if |x_i - y_i| < 1
  loss(x, y) = 1/n \sum_i ⎨
                          ⎩ |x_i - y_i| - 0.5,    otherwise
If x and y are d-dimensional Tensors with a total of n elements, the sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets the internal variable sizeAverage to false
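A pure-Scala sketch of the loss above:

  def smoothL1(x: Array[Double], y: Array[Double], sizeAverage: Boolean = true): Double = {
    val sum = x.zip(y).map { case (xi, yi) =>
      val d = math.abs(xi - yi)
      if (d < 1) 0.5 * d * d else d - 0.5
    }.sum
    if (sizeAverage) sum / x.length else sum
  }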
a smooth version of the AbsCriterion. It uses a squared term if the absolute element-wise error falls below 1. It is less sensitive to outliers than the MSECriterion and in some cases prevents exploding gradients (e.g. see "Fast R-CNN" paper by Ross Girshick).
d = (x - y) * w_in

                                   ⎧ 0.5 * (sigma * d_i)^2 * w_out_i,    if |d_i| < 1 / sigma^2
  loss(x, y, w_in, w_out) = \sum_i ⎨
                                   ⎩ (|d_i| - 0.5 / sigma^2) * w_out_i,  otherwise
Creates a criterion that optimizes a two-class classification logistic loss between input x (a Tensor of dimension 1) and output y (which is a tensor containing either 1s or -1s).
loss(x, y) = sum_i (log(1 + exp(-y[i]*x[i]))) / x:nElement()
Applies the SoftMax function to an n-dimensional input Tensor, rescaling them so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1. Softmax is defined as: f_i(x) = exp(x_i - shift) / sum_j exp(x_j - shift) where shift = max_i(x_i).
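A pure-Scala sketch of the shifted formula above:

  def softMax(x: Array[Double]): Array[Double] = {
    val shift = x.max                           // shift = max_i(x_i)
    val exps = x.map(v => math.exp(v - shift))
    val s = exps.sum
    exps.map(_ / s)
  }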
Applies the SoftMin function to an n-dimensional input Tensor, rescaling them so that the elements of the n-dimensional output Tensor lie in the range (0,1) and sum to 1. Softmin is defined as: f_i(x) = exp(-x_i - shift) / sum_j exp(-x_j - shift) where shift = max_i(-x_i).
Apply the SoftPlus function to an n-dimensional input tensor.
SoftPlus function: f_i(x) = 1/beta * log(1 + exp(beta * x_i))
Apply the soft shrinkage function element-wise to the input Tensor
SoftShrinkage operator:

         ⎧ x - lambda, if x >  lambda
  f(x) = ⎨ x + lambda, if x < -lambda
         ⎩ 0,          otherwise
Apply SoftSign function to an n-dimensional input Tensor.
SoftSign function: f_i(x) = x_i / (1+|x_i|)
Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes. It should be preferred over separate SoftmaxLayer + MultinomialLogisticLossLayer as its gradient computation is more numerically stable.
Applies 2D average-pooling operation in kWxkH regions by step size dWxdH steps. The number of output features is equal to the number of input planes.
This file implements Batch Normalization as described in the paper: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" by Sergey Ioffe, Christian Szegedy. This implementation is useful for inputs coming from convolution layers. For non-convolutional layers, see BatchNormalization. The operation implemented is:
           x - mean(x)
  y = ---------------------- * gamma + beta
       standard-deviation(x)
where gamma and beta are learnable parameters. The learning of gamma and beta is optional.
Subtractive + divisive contrast normalization.
Applies a 2D convolution over an input image composed of several input planes. The input tensor in forward(input) is expected to be a 3D tensor (nInputPlane x height x width).
Applies Spatial Local Response Normalization between different feature maps. The operation implemented is:

                               x_f
  y_f = -----------------------------------------------------
         (k + (alpha / size) * sum_{l=l1 to l2} (x_l)^2)^beta
where x_f is the input at spatial locations h,w (not shown for simplicity) and feature map f, l1 corresponds to max(0,f-ceil(size/2)) and l2 to min(F, f-ceil(size/2) + size). Here, F is the number of feature maps.
Apply a 2D dilated convolution over an input image.
The input tensor is expected to be a 3D or 4D(with batch) tensor.
If input is a 3D tensor nInputPlane x height x width:
  owidth  = floor((width  + 2 * padW - dilationW * (kW - 1) - 1) / dW) + 1
  oheight = floor((height + 2 * padH - dilationH * (kH - 1) - 1) / dH) + 1
Reference Paper: Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions[J]. arXiv preprint arXiv:1511.07122, 2015.
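A small pure-Scala helper implementing the output-size formulas above (integer division matches floor for the non-negative values involved; the function name is illustrative):

  def dilatedOutSize(in: Int, pad: Int, dilation: Int, k: Int, stride: Int): Int =
    (in + 2 * pad - dilation * (k - 1) - 1) / stride + 1

  // e.g. width 32, padW 1, dilationW 2, kW 3, dW 1  ->  owidth = 30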
Applies a spatial division operation on a series of 2D inputs using kernel for computing the weighted average in a neighborhood. The neighborhood is defined for a local spatial region that is the same size as the kernel and across all features. For an input image, since there is only one feature, the region is only spatial. For an RGB image, the weighted average is taken over RGB channels and a spatial region.
If the kernel is 1D, then it will be used for constructing a separable 2D kernel. The operations will be much more efficient in this case.
The kernel is generally chosen as a Gaussian when it is believed that the correlation of two pixel locations decreases with increasing distance. On the feature dimension, a uniform average is used since the weighting across features is not known.
Apply a 2D full convolution over an input image.
The input tensor is expected to be a 3D or 4D (with batch) tensor.

Note that instead of setting adjW and adjH, SpatialFullConvolution[Table, T] also accepts a table input with two tensors: T(convInput, sizeTensor), where convInput is the standard input tensor, and the size of sizeTensor is used to set the size of the output (ignoring the adjW and adjH values used to construct the module).

This module can be used without a bias by setting parameter noBias = true while constructing the module.
If input is a 3D tensor nInputPlane x height x width:
  owidth  = (width  - 1) * dW - 2*padW + kW + adjW
  oheight = (height - 1) * dH - 2*padH + kH + adjH
Other frameworks call this operation "In-network Upsampling", "Fractionally-strided convolution", "Backwards Convolution," "Deconvolution", or "Upconvolution."
Reference Paper: Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3431-3440.
Applies 2D max-pooling operation in kWxkH regions by step size dWxdH steps. The number of output features is equal to the number of input planes. If the input image is a 3D tensor nInputPlane x height x width, the output image size will be nOutputPlane x oheight x owidth, where

  owidth  = op((width  + 2*padW - kW) / dW + 1)
  oheight = op((height + 2*padH - kH) / dH + 1)

op is a rounding operator. By default, it is floor. It can be changed by calling :ceil() or :floor() methods.
Applies a spatial subtraction operation on a series of 2D inputs using kernel for computing the weighted average in a neighborhood. The neighborhood is defined for a local spatial region that is the same size as the kernel and across all features. For an input image, since there is only one feature, the region is only spatial. For an RGB image, the weighted average is taken over RGB channels and a spatial region.
If the kernel is 1D, then it will be used for constructing a separable 2D kernel. The operations will be much more efficient in this case.
The kernel is generally chosen as a Gaussian when it is believed that the correlation of two pixel locations decreases with increasing distance. On the feature dimension, a uniform average is used since the weighting across features is not known.
Each feature map of a given input is padded with specified number of zeros. If padding values are negative, then input is cropped.
Creates a module that takes a Tensor as input and outputs several tables, splitting the Tensor along the specified dimension.
The input to this layer is expected to be a tensor, or a batch of tensors; when using mini-batch, a batch of sample tensors will be passed to the layer and the user needs to specify the number of dimensions of each sample tensor in a batch using nInputDims.
Numeric type. Only float/double are supported for now
Apply an element-wise sqrt operation.
Apply an element-wise square operation.
Delete all singleton dimensions or a specific singleton dimension.
It is a simple layer which applies a sum operation over the given dimension. When nInputDims is provided, the input will be considered as batches. Then the sum operation will be applied in (dimension + 1)
The input to this layer is expected to be a tensor, or a batch of tensors; when using mini-batch, a batch of sample tensors will be passed to the layer and the user needs to specify the number of dimensions of each sample tensor in the batch using nInputDims.
Applies the Tanh function element-wise to the input Tensor, thus outputting a Tensor of the same dimension. Tanh is defined as f(x) = (exp(x)-exp(-x))/(exp(x)+exp(-x)).
A simple layer that, for each element of the input tensor, does the following operation during the forward process: f(x) = x - tanh(x)
Threshold the input Tensor. Values in the Tensor smaller than th are replaced with v.
This layer is intended to apply contained layer to each temporal time slice of input tensor.
For instance, the TimeDistributed layer can feed each time slice of the input tensor to the Linear layer.
data type, which can be Double or Float
This class is intended to support inputs with 3 or more dimensions. It applies any provided criterion to every temporal slice of an input.
Transpose input along specified dimensions
Insert singleton dim (i.e., dimension 1) at position pos. For an input with dim = input.dim(), there are dim + 1 possible positions to insert the singleton dimension.
This module creates a new view of the input tensor using the sizes passed to the constructor. The method setNumInputDims() allows one to specify the expected number of dimensions of the inputs of the modules. This makes it possible to use minibatch inputs when using a size -1 for one of the dimensions.
In short, it helps signals reach deep into the network.
During the training process of a deep nn:
1. If the weights in a network start too small, then the signal shrinks as it passes through each layer until it's too tiny to be useful.
2. If the weights in a network start too large, then the signal grows as it passes through each layer until it's too massive to be useful.
Xavier initialization makes sure the weights are 'just right', keeping the signal in a reasonable range of values through many layers.
More details are in the paper [Understanding the difficulty of training deep feedforward neural networks] (http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf)
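A pure-Scala sketch of Xavier (Glorot) uniform initialization as described in that paper, drawing weights from U(-limit, limit) with limit = sqrt(6 / (fanIn + fanOut)):

  def xavierUniform(fanIn: Int, fanOut: Int,
                    rng: scala.util.Random = new scala.util.Random()): Array[Double] = {
    val limit = math.sqrt(6.0 / (fanIn + fanOut))
    Array.fill(fanIn * fanOut)((rng.nextDouble() * 2 - 1) * limit)  // uniform in [-limit, limit)
  }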