| Class | Description |
|---|---|
| AbstractLSTM | LSTM recurrent net, based on Graves: Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/phd.pdf |
| AbstractLSTM.Builder<T extends AbstractLSTM.Builder<T>> | |
| ActivationLayer | Activation layer: a simple layer that applies the specified activation function to the input activations. |
| ActivationLayer.Builder | |
| AutoEncoder | Autoencoder layer. |
| AutoEncoder.Builder | |
| BaseLayer | A neural network layer. |
| BaseLayer.Builder<T extends BaseLayer.Builder<T>> | |
| BaseOutputLayer | |
| BaseOutputLayer.Builder<T extends BaseOutputLayer.Builder<T>> | |
| BasePretrainNetwork | |
| BasePretrainNetwork.Builder<T extends BasePretrainNetwork.Builder<T>> | |
| BaseRecurrentLayer | |
| BaseRecurrentLayer.Builder<T extends BaseRecurrentLayer.Builder<T>> | |
| BaseUpsamplingLayer | Upsampling base layer. |
| BaseUpsamplingLayer.UpsamplingBuilder<T extends BaseUpsamplingLayer.UpsamplingBuilder<T>> | |
| BatchNormalization | Batch normalization layer. See Ioffe and Szegedy, 2015, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, https://arxiv.org/abs/1502.03167 |
| BatchNormalization.Builder | |
| CapsuleLayer | An implementation of the DigitCaps layer from Dynamic Routing Between Capsules. Input should come from a PrimaryCapsules layer and be of shape [mb, inputCaps, inputCapDims]. |
| CapsuleLayer.Builder | |
| CapsuleStrengthLayer | A layer that computes the "strength" of each capsule, that is, the probability of it being present in the input. |
| CapsuleStrengthLayer.Builder | |
| CenterLossOutputLayer | Center loss is similar to triplet loss except that it enforces intraclass consistency and doesn't require feed-forward of multiple examples. |
| CenterLossOutputLayer.Builder | |
| Cnn3DLossLayer | 3D Convolutional Neural Network Loss Layer. Handles calculation of gradients etc. for various loss (objective) functions. Note: Cnn3DLossLayer does not have any parameters. |
| Cnn3DLossLayer.Builder | |
| CnnLossLayer | Convolutional Neural Network Loss Layer. Handles calculation of gradients etc. for various loss (objective) functions. Note: CnnLossLayer does not have any parameters. |
| CnnLossLayer.Builder | |
| Convolution1D | 1D convolution layer. |
| Convolution1DLayer | 1D (temporal) convolutional layer. |
| Convolution1DLayer.Builder | |
| Convolution2D | 2D convolution layer. |
| Convolution3D | 3D convolution layer configuration. |
| Convolution3D.Builder | |
| ConvolutionLayer | 2D convolution layer (for example, spatial convolution over images); a configuration sketch using this and related layers follows this table. |
| ConvolutionLayer.BaseConvBuilder<T extends ConvolutionLayer.BaseConvBuilder<T>> | |
| ConvolutionLayer.Builder | |
| Deconvolution2D | 2D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
| Deconvolution2D.Builder | |
| DenseLayer | Dense layer: a standard fully connected feed-forward layer. |
| DenseLayer.Builder | |
| DepthwiseConvolution2D | 2D depth-wise convolution layer configuration. |
| DepthwiseConvolution2D.Builder | |
| DropoutLayer | Dropout layer. |
| DropoutLayer.Builder | |
| EmbeddingLayer | Embedding layer: feed-forward layer that expects a single integer per example (a class number, in range 0 to numClasses-1) as input. |
| EmbeddingLayer.Builder | |
| EmbeddingSequenceLayer | Embedding layer for sequences: feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, ranging from 0 to numClasses - 1 (see the sequence-model sketch after this table). |
| EmbeddingSequenceLayer.Builder | |
| FeedForwardLayer | Base class for feed-forward layer configurations. |
| FeedForwardLayer.Builder<T extends FeedForwardLayer.Builder<T>> | |
| GlobalPoolingLayer | Global pooling layer - used to do pooling over time for RNNs, and 2D pooling for CNNs. Supports the following PoolingTypes: SUM, AVG, MAX, PNORM. Global pooling layers can also handle mask arrays when dealing with variable-length inputs. |
| GlobalPoolingLayer.Builder | |
| GravesBidirectionalLSTM | Deprecated. Use Bidirectional instead. |
| GravesBidirectionalLSTM.Builder | |
| GravesLSTM | Deprecated. Will eventually be removed. |
| GravesLSTM.Builder | |
| InputTypeUtil | Utilities for calculating input types. |
| Layer | A neural network layer. |
| Layer.Builder<T extends Layer.Builder<T>> | |
| LayerValidation | Utility methods for validating layer configurations. |
| LearnedSelfAttentionLayer | Implements dot-product self-attention with learned queries. Takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using learned queries. |
| LearnedSelfAttentionLayer.Builder | |
| LocallyConnected1D | SameDiff version of a 1D locally connected layer. |
| LocallyConnected1D.Builder | |
| LocallyConnected2D | SameDiff version of a 2D locally connected layer. |
| LocallyConnected2D.Builder | |
| LocalResponseNormalization | Local response normalization layer. See section 3.3 of http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf |
| LocalResponseNormalization.Builder | |
| LossLayer | LossLayer is a flexible output layer that performs a loss function on an input without MLP logic. LossLayer is similar to OutputLayer in that both perform loss calculations for network outputs vs. labels. |
| LossLayer.Builder | |
| LSTM | LSTM recurrent neural network layer without peephole connections. |
| LSTM.Builder | |
| NoParamLayer | |
| OutputLayer | Output layer used for training via backpropagation based on labels and a specified loss function. |
| OutputLayer.Builder | |
| Pooling1D | 1D pooling (subsampling) layer. |
| Pooling2D | 2D pooling (subsampling) layer. |
| PReLULayer | Parametrized Rectified Linear Unit (PReLU). |
| PReLULayer.Builder | |
| PrimaryCapsules | An implementation of the PrimaryCaps layer from Dynamic Routing Between Capsules. It is a reshaped 2D convolution, and the input should be 2D convolutional, of shape [mb, c, h, w]. |
| PrimaryCapsules.Builder | |
| RecurrentAttentionLayer | Implements recurrent dot-product attention. Takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using the hidden state as the query and all time steps as keys/values. |
| RecurrentAttentionLayer.Builder | |
| RnnLossLayer | Recurrent Neural Network Loss Layer. Handles calculation of gradients etc. for various objective (loss) functions. Note: unlike RnnOutputLayer, RnnLossLayer does not have any parameters - i.e., there is no time-distributed dense component here. |
| RnnLossLayer.Builder | |
| RnnOutputLayer | A version of OutputLayer for recurrent neural networks. |
| RnnOutputLayer.Builder | |
| SelfAttentionLayer | Implements dot-product self-attention. Takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using each timestep as the query. |
| SelfAttentionLayer.Builder | |
| SeparableConvolution2D | 2D separable convolution layer configuration. |
| SeparableConvolution2D.Builder | |
| SpaceToBatchLayer | Space-to-batch utility layer configuration for convolutional input types. |
| SpaceToBatchLayer.Builder<T extends SpaceToBatchLayer.Builder<T>> | |
| SpaceToDepthLayer | Space-to-channels utility layer configuration for convolutional input types. |
| SpaceToDepthLayer.Builder<T extends SpaceToDepthLayer.Builder<T>> | |
| Subsampling1DLayer | 1D (temporal) subsampling layer, also known as a pooling layer. Expects input of shape [minibatch, nIn, sequenceLength]. |
| Subsampling1DLayer.Builder | |
| Subsampling3DLayer | 3D subsampling / pooling layer for convolutional neural networks. |
| Subsampling3DLayer.BaseSubsamplingBuilder<T extends Subsampling3DLayer.BaseSubsamplingBuilder<T>> | |
| Subsampling3DLayer.Builder | |
| SubsamplingLayer | Subsampling layer, also referred to as pooling in convolutional neural nets. Supports the following pooling types: MAX, AVG, SUM, PNORM. |
| SubsamplingLayer.BaseSubsamplingBuilder<T extends SubsamplingLayer.BaseSubsamplingBuilder<T>> | |
| SubsamplingLayer.Builder | |
| Upsampling1D | Upsampling 1D layer. Repeats each step size times along the temporal/sequence axis (dimension 2). For input of shape [minibatch, channels, sequenceLength], output has shape [minibatch, channels, size * sequenceLength]. |
| Upsampling1D.Builder | |
| Upsampling2D | Upsampling 2D layer. Repeats each value (or rather, each set of depth values) size[0] and size[1] times in the height and width dimensions respectively. If input has shape [minibatch, channels, height, width], output has shape [minibatch, channels, height*size[0], width*size[1]]. |
| Upsampling2D.Builder | |
| Upsampling3D | Upsampling 3D layer. Repeats each value (all channel values for each x/y/z location) size[0], size[1] and size[2] times. If input has shape [minibatch, channels, depth, height, width], output has shape [minibatch, channels, size[0] * depth, size[1] * height, size[2] * width]. |
| Upsampling3D.Builder | |
| ZeroPadding1DLayer | Zero padding 1D layer for convolutional neural networks. |
| ZeroPadding1DLayer.Builder | |
| ZeroPadding3DLayer | Zero padding 3D layer for convolutional neural networks. |
| ZeroPadding3DLayer.Builder | |
| ZeroPaddingLayer | Zero padding layer for convolutional neural networks (2D CNNs). |
| ZeroPaddingLayer.Builder |
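
As a quick orientation to how these configuration classes fit together, here is a minimal sketch of a small 2D CNN built with the `NeuralNetConfiguration` builder these layer classes belong to. The specific hyperparameters (5x5 kernel, 20 filters, 100 hidden units, 10 classes, 28x28 single-channel input) are illustrative placeholders, not recommendations.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.*;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CnnConfigSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // ConvolutionLayer: 2D spatial convolution over single-channel images
                .layer(new ConvolutionLayer.Builder(5, 5)
                        .nIn(1).nOut(20).activation(Activation.RELU).build())
                // SubsamplingLayer: MAX pooling, halving height and width
                .layer(new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                        .kernelSize(2, 2).stride(2, 2).build())
                // DenseLayer: standard fully connected feed-forward layer
                .layer(new DenseLayer.Builder().nOut(100).activation(Activation.RELU).build())
                // OutputLayer: loss calculation against labels for 10 classes
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(10).activation(Activation.SOFTMAX).build())
                // Infers nIn for each layer and adds any needed preprocessors
                // for flattened 28x28x1 input
                .setInputType(InputType.convolutionalFlat(28, 28, 1))
                .build();
        System.out.println(conf.toJson());
    }
}
```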
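In the same spirit, a minimal sketch of a sequence model combining EmbeddingSequenceLayer, LSTM, and RnnOutputLayer from the table above. The vocabulary size, sequence length, and layer widths are hypothetical values chosen only for illustration; nIn is set explicitly on each layer rather than inferred.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SequenceConfigSketch {
    public static void main(String[] args) {
        int vocabSize = 5000; // hypothetical vocabulary size
        int seqLength = 100;  // hypothetical fixed input length

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // EmbeddingSequenceLayer: maps inputLength integer indices in
                // [0, vocabSize-1] per example to dense 128-dimensional vectors
                .layer(new EmbeddingSequenceLayer.Builder()
                        .nIn(vocabSize).nOut(128).inputLength(seqLength).build())
                // LSTM: recurrent layer without peephole connections
                .layer(new LSTM.Builder().nIn(128).nOut(64)
                        .activation(Activation.TANH).build())
                // RnnOutputLayer: per-time-step output trained against labels
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(64).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
        System.out.println(conf.toJson());
    }
}
```
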
| Enum | Description |
|---|---|
| Convolution3D.DataFormat | An optional dataFormat: "NDHWC" or "NCDHW". |
| ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode will pick the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these can be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode (a usage sketch follows this table). |
| ConvolutionLayer.BwdDataAlgo | The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| ConvolutionLayer.BwdFilterAlgo | The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| ConvolutionLayer.FwdAlgo | The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| PoolingType | Pooling type: MAX (output is the maximum of the input values), AVG (output is the average of the input values), SUM (output is the sum of the input values), PNORM (p-norm pooling). |
| SpaceToDepthLayer.DataFormat | |
| Subsampling3DLayer.PoolingType | |
| SubsamplingLayer.PoolingType | |
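
The enums above are consumed directly by the layer builders. Below is a minimal sketch of the two most common cases, assuming the same builder API as in the earlier sketches: selecting a cuDNN AlgoMode on a ConvolutionLayer, and choosing a PoolingType for a GlobalPoolingLayer. The layer sizes are placeholders.

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;

public class EnumUsageSketch {
    public static void main(String[] args) {
        // AlgoMode: fall back to NO_WORKSPACE if PREFER_FASTEST causes
        // memory problems when running on cuDNN
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)
                .nIn(1).nOut(16)
                .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
                .build();

        // PoolingType: global max pooling, e.g. over the time axis of RNN output
        GlobalPoolingLayer pool = new GlobalPoolingLayer.Builder(PoolingType.MAX).build();

        System.out.println(conv);
        System.out.println(pool);
    }
}
```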