public abstract class InputType extends Object implements Serializable

See Also: ComputationGraphConfiguration.GraphBuilder.setInputTypes(InputType...) and ComputationGraphConfiguration.addPreProcessors(InputType...)
Modifier and Type | Class and Description |
---|---|
static class | InputType.InputTypeConvolutional |
static class | InputType.InputTypeConvolutional3D |
static class | InputType.InputTypeConvolutionalFlat |
static class | InputType.InputTypeFeedForward |
static class | InputType.InputTypeRecurrent |
static class | InputType.Type The type of activations in/out of a given GraphVertex. FF: standard feed-forward (2d minibatch, 1d per example) data. RNN: recurrent neural network (3d minibatch) time series data. CNN: 2D convolutional neural network (4d minibatch, [miniBatchSize, channels, height, width]). CNNFlat: flattened 2D conv net data (2d minibatch, [miniBatchSize, height * width * channels]). CNN3D: 3D convolutional neural network (5d minibatch, [miniBatchSize, channels, depth, height, width]). |
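The rank conventions listed for InputType.Type can be summarized in a small self-contained sketch. This is an illustrative model, not the DL4J source; the enum and `rank` method here are stand-ins, and the shape comments restate the conventions documented above.

```java
// Illustrative model of the InputType.Type rank conventions (not DL4J code).
enum Type { FF, RNN, CNN, CNNFlat, CNN3D }

class TypeRankSketch {
    // Array rank, including the minibatch dimension, for each activation type
    static int rank(Type t) {
        switch (t) {
            case FF:      return 2; // [miniBatchSize, size]
            case RNN:     return 3; // [miniBatchSize, size, timeSeriesLength]
            case CNN:     return 4; // [miniBatchSize, channels, height, width]
            case CNNFlat: return 2; // [miniBatchSize, height * width * channels]
            case CNN3D:   return 5; // [miniBatchSize, channels, depth, height, width]
            default: throw new IllegalArgumentException("Unknown type: " + t);
        }
    }

    public static void main(String[] args) {
        for (Type t : Type.values())
            System.out.println(t + " -> rank " + rank(t));
    }
}
```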
Constructor and Description |
---|
InputType() |
Modifier and Type | Method and Description |
---|---|
abstract long | arrayElementsPerExample() |
static InputType | convolutional(long height, long width, long depth) Input type for convolutional (CNN) data, that is 4d with shape [miniBatchSize, channels, height, width]. |
static InputType | convolutional3D(Convolution3D.DataFormat dataFormat, long depth, long height, long width, long channels) Input type for 3D convolutional (CNN3D) 5d data: if NDHWC format, [miniBatchSize, depth, height, width, channels]; if NCDHW format, [miniBatchSize, channels, depth, height, width]. |
static InputType | convolutional3D(long depth, long height, long width, long channels) Deprecated. |
static InputType | convolutionalFlat(long height, long width, long depth) Input type for convolutional (CNN) data, where the data is in flattened (row vector) format. |
static InputType | feedForward(long size) InputType for feed forward network data |
long[] | getShape() Returns the shape of this InputType without the minibatch dimension in the returned array |
abstract long[] | getShape(boolean includeBatchDim) Returns the shape of this InputType |
abstract InputType.Type | getType() |
static InputType | inferInputType(INDArray inputArray) |
static InputType[] | inferInputTypes(INDArray... inputArrays) |
static InputType | recurrent(long size) InputType for recurrent neural network (time series) data |
static InputType | recurrent(long size, long timeSeriesLength) InputType for recurrent neural network (time series) data |
abstract String | toString() |
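The arithmetic behind `arrayElementsPerExample()` follows directly from the per-example shapes in the table above. The sketch below is illustrative, assuming each concrete InputType simply multiplies its non-batch dimensions; the method names mirror the factory methods but are not DL4J's implementations.

```java
// Illustrative sketch: elements per example for each input type, assuming
// the product of the documented non-batch dimensions (not DL4J source code).
class ElementsPerExampleSketch {
    static long feedForward(long size) { return size; }
    static long recurrent(long size, long timeSeriesLength) { return size * timeSeriesLength; }
    static long convolutional(long height, long width, long channels) {
        return height * width * channels;
    }
    static long convolutional3D(long depth, long height, long width, long channels) {
        return depth * height * width * channels;
    }
    // convolutionalFlat covers the same elements as convolutional,
    // just laid out as a row vector per example.

    public static void main(String[] args) {
        System.out.println(convolutional(28, 28, 1));        // 784
        System.out.println(recurrent(10, 50));               // 500
        System.out.println(convolutional3D(16, 32, 32, 3));  // 49152
    }
}
```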
public abstract InputType.Type getType()

public abstract long arrayElementsPerExample()

public abstract long[] getShape(boolean includeBatchDim)
Returns the shape of this InputType
Parameters:
includeBatchDim - Whether to include the minibatch dimension in the returned shape array

public long[] getShape()
Returns the shape of this InputType without the minibatch dimension in the returned array
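The contract of the two `getShape` variants can be sketched as follows. This is a stand-in, not DL4J's implementation; it assumes a convolutional input type storing [channels, height, width], and assumes -1 is used as the placeholder for the unknown minibatch dimension.

```java
// Illustrative sketch of the getShape contract for a convolutional
// InputType; -1 as the minibatch placeholder is an assumption here.
class GetShapeSketch {
    static long[] getShape(long channels, long height, long width, boolean includeBatchDim) {
        return includeBatchDim
                ? new long[]{-1, channels, height, width} // with batch placeholder
                : new long[]{channels, height, width};    // per-example shape only
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(getShape(3, 28, 28, false))); // [3, 28, 28]
        System.out.println(java.util.Arrays.toString(getShape(3, 28, 28, true)));  // [-1, 3, 28, 28]
    }
}
```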
public static InputType feedForward(long size)
InputType for feed forward network data
Parameters:
size - The size of the activations

public static InputType recurrent(long size)
InputType for recurrent neural network (time series) data
Parameters:
size - The size of the activations

public static InputType recurrent(long size, long timeSeriesLength)
InputType for recurrent neural network (time series) data
Parameters:
size - The size of the activations
timeSeriesLength - Length of the input time series

public static InputType convolutional(long height, long width, long depth)
Input type for convolutional (CNN) data, that is 4d with shape [miniBatchSize, channels, height, width].
See Also: convolutionalFlat(long, long, long)
Parameters:
height - Height of the input
width - Width of the input
depth - Depth, or number of channels

@Deprecated
public static InputType convolutional3D(long depth, long height, long width, long channels)
Deprecated.
Parameters:
height - Height of the input
width - Width of the input
depth - Depth of the input
channels - Number of channels of the input

public static InputType convolutional3D(Convolution3D.DataFormat dataFormat, long depth, long height, long width, long channels)
Input type for 3D convolutional (CNN3D) 5d data: if NDHWC format, [miniBatchSize, depth, height, width, channels]; if NCDHW format, [miniBatchSize, channels, depth, height, width].
Parameters:
height - Height of the input
width - Width of the input
depth - Depth of the input
channels - Number of channels of the input

public static InputType convolutionalFlat(long height, long width, long depth)
Input type for convolutional (CNN) data, where the data is in flattened (row vector) format.
See Also: convolutional(long, long, long)
Parameters:
height - Height of the (unflattened) data represented by this input type
width - Width of the (unflattened) data represented by this input type
depth - Depth of the (unflattened) data represented by this input type

Copyright © 2019. All rights reserved.