public final class Ops extends Object

Any operation wrapper found in the classpath properly annotated as an @Operator is exposed by this API or one of its subgroups.
Example usage:

```java
try (Graph g = new Graph()) {
  Ops tf = Ops.create(g);
  // Operations are typed classes with convenience
  // builders in Ops.
  Constant<TInt32> three = tf.constant(3);
  // Single-result operations implement the Operand
  // interface, so this works too.
  Operand<TInt32> four = tf.constant(4);
  // Most builders are found within a group, and accept
  // Operand types as operands.
  Operand<TInt32> nine = tf.math.add(four, tf.constant(5));
  // Multi-result operations however offer methods to
  // select a particular result for use.
  Operand<TInt32> result =
      tf.math.add(tf.unique(s, a).y(), b);
  // Optional attributes
  tf.linalg.matMul(a, b, MatMul.transposeA(true));
  // Naming operators
  tf.withName("foo").constant(5); // name "foo"
  // Names can exist in a hierarchy
  Ops sub = tf.withSubScope("sub");
  sub.withName("bar").constant(4); // "sub/bar"
}
```
Modifier and Type | Field and Description
---|---
AudioOps | audio
BitwiseOps | bitwise
DataOps | data
DtypesOps | dtypes
ImageOps | image
IoOps | io
LinalgOps | linalg
MathOps | math
NnOps | nn
QuantizationOps | quantization
RaggedOps | ragged
RandomOps | random
ShapeOps | shape
SignalOps | signal
SparseOps | sparse
StringsOps | strings
SummaryOps | summary
TpuOps | tpu
TrainOps | train
XlaOps | xla
Modifier and Type | Method and Description
---|---
Abort |
abort(Abort.Options... options)
Raises an exception to abort the process when called.
|
All |
all(Operand<TBool> input,
Operand<? extends TNumber> axis,
All.Options... options)
Computes the "logical and" of elements across dimensions of a tensor.
|
Any |
any(Operand<TBool> input,
Operand<? extends TNumber> axis,
Any.Options... options)
Computes the "logical or" of elements across dimensions of a tensor.
|
Constant<TBool> |
array(boolean... data)
Creates a constant of
boolean elements. |
Constant<TUint8> |
array(byte... data)
Creates a constant of
byte elements. |
Constant<TString> |
array(Charset charset,
String... data)
Creates a constant of
String elements, using the given charset. |
Constant<TFloat64> |
array(double... data)
Creates a constant of
double elements. |
Constant<TFloat32> |
array(float... data)
Creates a constant of
float elements. |
Constant<TInt32> |
array(int... data)
Creates a constant of
int elements. |
Constant<TInt64> |
array(long... data)
Creates a constant of
long elements. |
Constant<TString> |
array(String... data)
Creates a constant of
String elements, using the default UTF-8 charset. |
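The `array(...)` creators above build rank-1 constants directly from varargs. A minimal sketch of their use (assuming the TensorFlow Java artifact, e.g. `org.tensorflow:tensorflow-core-platform`, is on the classpath):

```java
import org.tensorflow.EagerSession;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Constant;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;
import org.tensorflow.types.TString;

public class ArrayExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Constant<TInt32> ints = tf.array(1, 2, 3);        // rank-1 TInt32 constant
      Constant<TFloat32> floats = tf.array(1.0f, 2.5f); // rank-1 TFloat32 constant
      Constant<TString> strings = tf.array("a", "b");   // rank-1 TString, UTF-8
    }
  }
}
```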
AssertThat |
assertThat(Operand<TBool> condition,
Iterable<Operand<?>> data,
AssertThat.Options... options)
Asserts that the given condition is true.
|
<T extends TType> |
assign(Operand<T> ref,
Operand<T> value,
Assign.Options... options)
Update 'ref' by assigning 'value' to it.
|
<T extends TType> |
assignAdd(Operand<T> ref,
Operand<T> value,
AssignAdd.Options... options)
Update 'ref' by adding 'value' to it.
|
AssignAddVariableOp |
assignAddVariableOp(Operand<? extends TType> resource,
Operand<? extends TType> value)
Adds a value to the current value of a variable.
|
<T extends TType> |
assignSub(Operand<T> ref,
Operand<T> value,
AssignSub.Options... options)
Update 'ref' by subtracting 'value' from it.
|
AssignSubVariableOp |
assignSubVariableOp(Operand<? extends TType> resource,
Operand<? extends TType> value)
Subtracts a value from the current value of a variable.
|
AssignVariableOp |
assignVariableOp(Operand<? extends TType> resource,
Operand<? extends TType> value)
Assigns a new value to a variable.
|
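The `assign*` family updates mutable state. A sketch of the ref-variable variants (graph mode; the assignments take effect only when the resulting ops are run in a session):

```java
import org.tensorflow.Graph;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Variable;
import org.tensorflow.types.TInt32;

public class AssignExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Variable<TInt32> ref = tf.variable(Shape.scalar(), TInt32.class);
      tf.assign(ref, tf.constant(10));   // ref = 10
      tf.assignAdd(ref, tf.constant(5)); // ref += 5
      tf.assignSub(ref, tf.constant(2)); // ref -= 2
    }
  }
}
```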
Barrier |
barrier(List<Class<? extends TType>> componentTypes,
Barrier.Options... options)
Defines a barrier that persists across different graph executions.
|
BarrierClose |
barrierClose(Operand<TString> handle,
BarrierClose.Options... options)
Closes the given barrier.
|
BarrierIncompleteSize |
barrierIncompleteSize(Operand<TString> handle)
Computes the number of incomplete elements in the given barrier.
|
BarrierInsertMany |
barrierInsertMany(Operand<TString> handle,
Operand<TString> keys,
Operand<? extends TType> values,
Long componentIndex)
For each key, assigns the respective value to the specified component.
|
BarrierReadySize |
barrierReadySize(Operand<TString> handle)
Computes the number of complete elements in the given barrier.
|
BarrierTakeMany |
barrierTakeMany(Operand<TString> handle,
Operand<TInt32> numElements,
List<Class<? extends TType>> componentTypes,
BarrierTakeMany.Options... options)
Takes the given number of completed elements from a barrier.
|
Batch |
batch(Iterable<Operand<?>> inTensors,
Long numBatchThreads,
Long maxBatchSize,
Long batchTimeoutMicros,
Long gradTimeoutMicros,
Batch.Options... options)
Batches all input tensors nondeterministically.
|
BatchFunction |
batchFunction(Iterable<Operand<?>> inTensors,
Iterable<Operand<?>> capturedTensors,
ConcreteFunction f,
Long numBatchThreads,
Long maxBatchSize,
Long batchTimeoutMicros,
List<Class<? extends TType>> Tout,
BatchFunction.Options... options)
Batches all the input tensors to the computation done by the function.
|
<T extends TType> |
batchToSpace(Operand<T> input,
Operand<? extends TNumber> crops,
Long blockSize)
BatchToSpace for 4-D tensors of type T.
|
<T extends TType> |
batchToSpaceNd(Operand<T> input,
Operand<? extends TNumber> blockShape,
Operand<? extends TNumber> crops)
BatchToSpace for N-D tensors of type T.
|
<U extends TType> |
bitcast(Operand<? extends TType> input,
Class<U> type)
Bitcasts a tensor from one type to another without copying data.
|
<T extends TType> |
booleanMask(Operand<T> tensor,
Operand<TBool> mask,
BooleanMask.Options... options)
Apply boolean mask to tensor.
|
<T extends TType> |
booleanMaskUpdate(Operand<T> tensor,
Operand<TBool> mask,
Operand<T> updates,
BooleanMaskUpdate.Options... options)
Updates a tensor at the masked values, and returns the updated tensor.
|
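`booleanMask` keeps only the elements of a tensor where the mask is true. A minimal sketch:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TBool;
import org.tensorflow.types.TFloat32;

public class BooleanMaskExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TFloat32> t = tf.constant(new float[] {1f, 2f, 3f, 4f});
      Operand<TBool> mask = tf.constant(new boolean[] {true, false, true, false});
      // Keeps the elements at masked-true positions: 1.0 and 3.0
      Operand<TFloat32> kept = tf.booleanMask(t, mask);
    }
  }
}
```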
<T extends TNumber> |
broadcastDynamicShape(Operand<T> s0,
Operand<T> s1)
Return the shape of s0 op s1 with broadcast.
|
<T extends TType> |
broadcastTo(Operand<T> input,
Operand<? extends TNumber> shape)
Broadcast an array for a compatible shape.
|
Bucketize |
bucketize(Operand<? extends TNumber> input,
List<Float> boundaries)
Bucketizes 'input' based on 'boundaries'.
|
Map<String,Operand<?>> |
call(ConcreteFunction function,
Map<String,Operand<?>> arguments)
Calls the function in an execution environment, adding its graph as a function if it isn't
already present.
|
Operand<?> |
call(ConcreteFunction function,
Operand<?> argument)
Calls the function in an execution environment, adding its graph as a function if it isn't
already present.
|
Case |
caseOp(Operand<TInt32> branchIndex,
Iterable<Operand<?>> input,
List<Class<? extends TType>> Tout,
List<ConcreteFunction> branches,
Case.Options... options)
An n-way switch statement which calls a single branch function.
|
<T extends TType> |
clipByValue(Operand<T> t,
Operand<T> clipValueMin,
Operand<T> clipValueMax)
Clips tensor values to a specified min and max.
|
<T extends TType> |
concat(Iterable<Operand<T>> values,
Operand<? extends TNumber> axis)
Concatenates tensors along one dimension.
|
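`concat` joins tensors along a given axis; all inputs must match in every other dimension. A sketch:

```java
import java.util.Arrays;
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TInt32;

public class ConcatExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TInt32> a = tf.constant(new int[][] {{1, 2}});
      Operand<TInt32> b = tf.constant(new int[][] {{3, 4}});
      // Concatenate along axis 0: result has shape [2, 2]
      Operand<TInt32> stacked = tf.concat(Arrays.asList(a, b), tf.constant(0));
    }
  }
}
```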
Constant<TBool> |
constant(boolean data)
Creates a constant containing a single
boolean element. |
Constant<TBool> |
constant(boolean[] data)
Creates a rank-1 constant of
boolean elements. |
Constant<TBool> |
constant(boolean[][] data)
Creates a rank-2 constant of
boolean elements. |
Constant<TBool> |
constant(boolean[][][] data)
Creates a rank-3 constant of
boolean elements. |
Constant<TBool> |
constant(boolean[][][][] data)
Creates a rank-4 constant of
boolean elements. |
Constant<TBool> |
constant(boolean[][][][][] data)
Creates a rank-5 constant of
boolean elements. |
Constant<TBool> |
constant(boolean[][][][][][] data)
Creates a rank-6 constant of
boolean elements. |
Constant<TBool> |
constant(org.tensorflow.ndarray.BooleanNdArray data)
Creates a constant of
boolean elements that is a copy of a given n-dimensional array. |
Constant<TUint8> |
constant(byte data)
Creates a constant containing a single
byte element. |
Constant<TUint8> |
constant(byte[] data)
Creates a rank-1 constant of
byte elements. |
Constant<TUint8> |
constant(byte[][] data)
Creates a rank-2 constant of
byte elements. |
Constant<TUint8> |
constant(byte[][][] data)
Creates a rank-3 constant of
byte elements. |
Constant<TUint8> |
constant(byte[][][][] data)
Creates a rank-4 constant of
byte elements. |
Constant<TUint8> |
constant(byte[][][][][] data)
Creates a rank-5 constant of
byte elements. |
Constant<TUint8> |
constant(byte[][][][][][] data)
Creates a rank-6 constant of
byte elements. |
Constant<TUint8> |
constant(org.tensorflow.ndarray.ByteNdArray data)
Creates a constant of
byte elements that is a copy of a given n-dimensional array. |
Constant<TString> |
constant(Charset charset,
org.tensorflow.ndarray.NdArray<String> data)
Creates a constant of
String elements that is a copy of a given n-dimensional array,
using the given encoding. |
Constant<TString> |
constant(Charset charset,
org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.DataBuffer<String> data)
Create a
TString constant with data from the given buffer, using the given encoding. |
Constant<TString> |
constant(Charset charset,
String data)
Creates a
String constant using a specified encoding. |
Constant<TString> |
constant(Charset charset,
String[] data)
Creates a constant of
String elements, using the given charset. |
<T extends TNumber> |
constant(Class<T> type,
Number number)
Creates a scalar of
type , with the value of number . |
<T extends TType> |
constant(Class<T> type,
org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.ByteDataBuffer data)
Create a constant with data from the given buffer.
|
Constant<TFloat64> |
constant(double data)
Creates a constant containing a single
double element. |
Constant<TFloat64> |
constant(double[] data)
Creates a rank-1 constant of
double elements. |
Constant<TFloat64> |
constant(double[][] data)
Creates a rank-2 constant of
double elements. |
Constant<TFloat64> |
constant(double[][][] data)
Creates a rank-3 constant of
double elements. |
Constant<TFloat64> |
constant(double[][][][] data)
Creates a rank-4 constant of
double elements. |
Constant<TFloat64> |
constant(double[][][][][] data)
Creates a rank-5 constant of
double elements. |
Constant<TFloat64> |
constant(double[][][][][][] data)
Creates a rank-6 constant of
double elements. |
Constant<TFloat64> |
constant(org.tensorflow.ndarray.DoubleNdArray data)
Creates a constant of
double elements that is a copy of a given n-dimensional array. |
Constant<TFloat32> |
constant(float data)
Creates a constant containing a single
float element. |
Constant<TFloat32> |
constant(float[] data)
Creates a rank-1 constant of
float elements. |
Constant<TFloat32> |
constant(float[][] data)
Creates a rank-2 constant of
float elements. |
Constant<TFloat32> |
constant(float[][][] data)
Creates a rank-3 constant of
float elements. |
Constant<TFloat32> |
constant(float[][][][] data)
Creates a rank-4 constant of
float elements. |
Constant<TFloat32> |
constant(float[][][][][] data)
Creates a rank-5 constant of
float elements. |
Constant<TFloat32> |
constant(float[][][][][][] data)
Creates a rank-6 constant of
float elements. |
Constant<TFloat32> |
constant(org.tensorflow.ndarray.FloatNdArray data)
Creates a constant of
float elements that is a copy of a given n-dimensional array. |
Constant<TInt32> |
constant(int data)
Creates a constant containing a single
int element. |
Constant<TInt32> |
constant(int[] data)
Creates a rank-1 constant of
int elements. |
Constant<TInt32> |
constant(int[][] data)
Creates a rank-2 constant of
int elements. |
Constant<TInt32> |
constant(int[][][] data)
Creates a rank-3 constant of
int elements. |
Constant<TInt32> |
constant(int[][][][] data)
Creates a rank-4 constant of
int elements. |
Constant<TInt32> |
constant(int[][][][][] data)
Creates a rank-5 constant of
int elements. |
Constant<TInt32> |
constant(int[][][][][][] data)
Creates a rank-6 constant of
int elements. |
Constant<TInt32> |
constant(org.tensorflow.ndarray.IntNdArray data)
Creates a constant of
int elements that is a copy of a given n-dimensional array. |
Constant<TInt64> |
constant(long data)
Creates a constant containing a single
long element. |
Constant<TInt64> |
constant(long[] data)
Creates a rank-1 constant of
long elements. |
Constant<TInt64> |
constant(long[][] data)
Creates a rank-2 constant of
long elements. |
Constant<TInt64> |
constant(long[][][] data)
Creates a rank-3 constant of
long elements. |
Constant<TInt64> |
constant(long[][][][] data)
Creates a rank-4 constant of
long elements. |
Constant<TInt64> |
constant(long[][][][][] data)
Creates a rank-5 constant of
long elements. |
Constant<TInt64> |
constant(long[][][][][][] data)
Creates a rank-6 constant of
long elements. |
Constant<TInt64> |
constant(org.tensorflow.ndarray.LongNdArray data)
Creates a constant of
long elements that is a copy of a given n-dimensional array. |
Constant<TString> |
constant(org.tensorflow.ndarray.NdArray<String> data)
Creates a constant of
String elements that is a copy of a given n-dimensional array,
using the default UTF-8 encoding. |
Constant<TInt64> |
constant(org.tensorflow.ndarray.Shape shape)
Creates a rank-1 constant of
long elements representing the size of each dimensions of
the given shape. |
Constant<TBool> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.BooleanDataBuffer data)
Create a
TBool constant with data from the given buffer. |
Constant<TUint8> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.ByteDataBuffer data)
Create a
TUint8 constant with data from the given buffer. |
Constant<TString> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.DataBuffer<String> data)
Create a
TString constant with data from the given buffer, using the default UTF-8
encoding. |
Constant<TFloat64> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.DoubleDataBuffer data)
Create a
TFloat64 constant with data from the given buffer. |
Constant<TFloat32> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.FloatDataBuffer data)
Create a
TFloat32 constant with data from the given buffer. |
Constant<TInt32> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.IntDataBuffer data)
Create a
TInt32 constant with data from the given buffer. |
Constant<TInt64> |
constant(org.tensorflow.ndarray.Shape shape,
org.tensorflow.ndarray.buffer.LongDataBuffer data)
Create a
TInt64 constant with data from the given buffer. |
Constant<TString> |
constant(String data)
Creates a
String constant using the default, UTF-8 encoding. |
<T extends TType> |
constantOf(T tensor)
Create a constant by making an immutable copy of
tensor . |
<T extends TNumber> |
constantOfSameType(Operand<T> toMatch,
Number number)
Creates a scalar of the same type as
toMatch , with the value of number . |
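The many `constant(...)` overloads above map Java scalars, nested arrays, `Shape`s, and buffers to tensors of the matching rank and dtype. A brief sketch of a few of them:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Constant;
import org.tensorflow.types.TInt32;
import org.tensorflow.types.TInt64;
import org.tensorflow.types.TString;

public class ConstantExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Constant<TInt32> scalar = tf.constant(42);                           // rank 0
      Constant<TInt32> matrix = tf.constant(new int[][] {{1, 2}, {3, 4}}); // rank 2
      Constant<TInt64> dims = tf.constant(Shape.of(2, 3));                 // rank-1 [2, 3]
      Constant<TString> text = tf.constant("hello");                       // UTF-8 scalar
    }
  }
}
```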
ConsumeMutexLock |
consumeMutexLock(Operand<? extends TType> mutexLock)
This op consumes a lock created by
MutexLock . |
ControlTrigger |
controlTrigger()
Does nothing.
|
<T extends TNumber> |
countUpTo(Operand<T> ref,
Long limit)
Increments 'ref' until it reaches 'limit'.
|
static Ops |
create()
Creates an API for building operations in the default eager execution environment.
|
static Ops |
create(ExecutionEnvironment env)
Creates an API for building operations in the provided execution environment.
|
DecodeProto |
decodeProto(Operand<TString> bytes,
String messageType,
List<String> fieldNames,
List<Class<? extends TType>> outputTypes,
DecodeProto.Options... options)
The op extracts fields from a serialized protocol buffers message into tensors.
|
<T extends TType> |
deepCopy(Operand<T> x)
Makes a copy of
x . |
DeleteSessionTensor |
deleteSessionTensor(Operand<TString> handle)
Delete the tensor specified by its handle in the session.
|
DestroyResourceOp |
destroyResourceOp(Operand<? extends TType> resource,
DestroyResourceOp.Options... options)
Deletes the resource specified by the handle.
|
<T extends TType> |
destroyTemporaryVariable(Operand<T> ref,
String varName)
Destroys the temporary variable and returns its final value.
|
<T extends TType> |
dynamicPartition(Operand<T> data,
Operand<TInt32> partitions,
Long numPartitions)
Partitions
data into num_partitions tensors using indices from partitions . |
<T extends TType> |
dynamicStitch(Iterable<Operand<TInt32>> indices,
Iterable<Operand<T>> data)
Interleave the values from the
data tensors into a single tensor. |
<T extends TType> |
editDistance(Operand<TInt64> hypothesisIndices,
Operand<T> hypothesisValues,
Operand<TInt64> hypothesisShape,
Operand<TInt64> truthIndices,
Operand<T> truthValues,
Operand<TInt64> truthShape,
EditDistance.Options... options)
Computes the (possibly normalized) Levenshtein Edit Distance.
|
<T extends TType> |
empty(Operand<TInt32> shape,
Class<T> dtype,
Empty.Options... options)
Creates a tensor with the given shape.
|
<U extends TType> |
emptyTensorList(Operand<? extends TNumber> elementShape,
Operand<TInt32> maxNumElements,
Class<U> elementDtype)
Creates and returns an empty tensor list.
|
EmptyTensorMap |
emptyTensorMap()
Creates and returns an empty tensor map.
|
EncodeProto |
encodeProto(Operand<TInt32> sizes,
Iterable<Operand<?>> values,
List<String> fieldNames,
String messageType,
EncodeProto.Options... options)
The op serializes protobuf messages provided in the input tensors.
|
<T extends TType> |
ensureShape(Operand<T> input,
org.tensorflow.ndarray.Shape shape)
Ensures that the tensor's shape matches the expected shape.
|
<T extends TType> |
expandDims(Operand<T> input,
Operand<? extends TNumber> axis)
Inserts a dimension of 1 into a tensor's shape.
|
<T extends TNumber> |
extractVolumePatches(Operand<T> input,
List<Long> ksizes,
List<Long> strides,
String padding)
Extract
patches from input and put them in the "depth" output dimension. |
<U extends TType> |
fill(Operand<? extends TNumber> dims,
Operand<U> value)
Creates a tensor filled with a scalar value.
|
Fingerprint |
fingerprint(Operand<? extends TType> data,
Operand<TString> method)
Generates fingerprint values.
|
For |
forOp(Operand<TInt32> start,
Operand<TInt32> limit,
Operand<TInt32> delta,
Iterable<Operand<?>> input,
ConcreteFunction body)
output = input;
for i in range(start, limit, delta)
output = body(i, output);
|
<T extends TType> |
gather(Operand<T> params,
Operand<? extends TNumber> indices,
Operand<? extends TNumber> axis,
Gather.Options... options)
Gather slices from
params axis axis according to indices . |
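`gather` selects slices of `params` along the given axis at the given indices. A sketch:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class GatherExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TFloat32> params = tf.constant(new float[] {10f, 20f, 30f});
      Operand<TInt32> indices = tf.constant(new int[] {2, 0});
      // Gather along axis 0: picks elements 30.0 and 10.0, in that order
      Operand<TFloat32> picked = tf.gather(params, indices, tf.constant(0));
    }
  }
}
```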
<T extends TType> |
gatherNd(Operand<T> params,
Operand<? extends TNumber> indices)
Gather slices from
params into a Tensor with shape specified by indices . |
GetSessionHandle |
getSessionHandle(Operand<? extends TType> value)
Store the input tensor in the state of the current session.
|
<T extends TType> |
getSessionTensor(Operand<TString> handle,
Class<T> dtype)
Get the value of the tensor specified by its handle.
|
Gradients |
gradients(Iterable<? extends Operand<?>> y,
Iterable<? extends Operand<?>> x,
Gradients.Options... options)
Adds gradients computation ops to the graph according to scope.
|
Gradients |
gradients(Operand<?> y,
Iterable<? extends Operand<?>> x,
Gradients.Options... options)
Adds operations to compute the partial derivatives of sum of
y s w.r.t x s,
i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2... |
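`gradients` adds symbolic differentiation ops to a graph; it is not available in eager mode. A sketch computing dy/dx for y = x²:

```java
import java.util.Arrays;
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Output;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Gradients;
import org.tensorflow.op.core.Placeholder;
import org.tensorflow.types.TFloat32;

public class GradientsExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Placeholder<TFloat32> x = tf.placeholder(TFloat32.class);
      Operand<TFloat32> y = tf.math.square(x);             // y = x^2
      Gradients grads = tf.gradients(y, Arrays.asList(x)); // adds gradient ops
      Output<?> dydx = grads.dy(0);                        // symbolic dy/dx = 2x
    }
  }
}
```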
<T extends TType> |
guaranteeConst(Operand<T> input)
Gives a guarantee to the TF runtime that the input tensor is a constant.
|
<T extends TType,U extends TType> |
hashTable(Class<T> keyDtype,
Class<U> valueDtype,
HashTable.Options... options)
Creates a non-initialized hash table.
|
<T extends TNumber> |
histogramFixedWidth(Operand<T> values,
Operand<T> valueRange,
Operand<TInt32> nbins)
Return histogram of values.
|
<U extends TNumber,T extends TNumber> |
histogramFixedWidth(Operand<T> values,
Operand<T> valueRange,
Operand<TInt32> nbins,
Class<U> dtype)
Return histogram of values.
|
<T extends TType> |
identity(Operand<T> input)
Return a tensor with the same shape and contents as the input tensor or value.
|
IdentityN |
identityN(Iterable<Operand<?>> input)
Returns a list of tensors with the same shapes and contents as the input
tensors.
|
If |
ifOp(Operand<? extends TType> cond,
Iterable<Operand<?>> input,
List<Class<? extends TType>> Tout,
ConcreteFunction thenBranch,
ConcreteFunction elseBranch,
If.Options... options)
output = cond ? then_branch(input) : else_branch(input)
|
<T extends TType> |
immutableConst(Class<T> dtype,
org.tensorflow.ndarray.Shape shape,
String memoryRegionName)
Returns immutable tensor from memory region.
|
InitializeTable |
initializeTable(Operand<? extends TType> tableHandle,
Operand<? extends TType> keys,
Operand<? extends TType> values)
Table initializer that takes two tensors for keys and values respectively.
|
InitializeTableFromTextFile |
initializeTableFromTextFile(Operand<? extends TType> tableHandle,
Operand<TString> filename,
Long keyIndex,
Long valueIndex,
InitializeTableFromTextFile.Options... options)
Initializes a table from a text file.
|
<T extends TType> |
inplaceAdd(Operand<T> x,
Operand<TInt32> i,
Operand<T> v)
Adds v into specified rows of x.
|
<T extends TType> |
inplaceSub(Operand<T> x,
Operand<TInt32> i,
Operand<T> v)
Subtracts `v` from specified rows of `x`.
|
<T extends TType> |
inplaceUpdate(Operand<T> x,
Operand<TInt32> i,
Operand<T> v)
Updates specified rows 'i' with values 'v'.
|
IsVariableInitialized |
isVariableInitialized(Operand<? extends TType> ref)
Checks whether a tensor has been initialized.
|
KthOrderStatistic |
kthOrderStatistic(Operand<TFloat32> input,
Long k)
Computes the Kth order statistic of a data set.
|
<T extends Operand> |
liftToInitScope(T op)
Make
op an init operation, doing the same for all of its inputs (and control inputs). |
<T extends TType,U extends TType> |
lookupTableExport(Operand<? extends TType> tableHandle,
Class<T> Tkeys,
Class<U> Tvalues)
Outputs all keys and values in the table.
|
<U extends TType> |
lookupTableFind(Operand<? extends TType> tableHandle,
Operand<? extends TType> keys,
Operand<U> defaultValue)
Looks up keys in a table, outputs the corresponding values.
|
LookupTableImport |
lookupTableImport(Operand<? extends TType> tableHandle,
Operand<? extends TType> keys,
Operand<? extends TType> values)
Replaces the contents of the table with the specified keys and values.
|
LookupTableInsert |
lookupTableInsert(Operand<? extends TType> tableHandle,
Operand<? extends TType> keys,
Operand<? extends TType> values)
Updates the table to associate keys with values.
|
LookupTableSize |
lookupTableSize(Operand<? extends TType> tableHandle)
Computes the number of elements in the given table.
|
LoopCond |
loopCond(Operand<TBool> input)
Forwards the input to the output.
|
MakeUnique |
makeUnique(Operand<TFloat32> input)
Make all elements in the non-Batch dimension unique, but "close" to
their initial value.
|
MapClear |
mapClear(List<Class<? extends TType>> dtypes,
MapClear.Options... options)
Op removes all elements in the underlying container.
|
MapIncompleteSize |
mapIncompleteSize(List<Class<? extends TType>> dtypes,
MapIncompleteSize.Options... options)
Op returns the number of incomplete elements in the underlying container.
|
MapPeek |
mapPeek(Operand<TInt64> key,
Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
MapPeek.Options... options)
Op peeks at the values at the specified key.
|
MapSize |
mapSize(List<Class<? extends TType>> dtypes,
MapSize.Options... options)
Op returns the number of elements in the underlying container.
|
MapStage |
mapStage(Operand<TInt64> key,
Operand<TInt32> indices,
Iterable<Operand<?>> values,
List<Class<? extends TType>> dtypes,
MapStage.Options... options)
Stage (key, values) in the underlying container which behaves like a hashtable.
|
MapUnstage |
mapUnstage(Operand<TInt64> key,
Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
MapUnstage.Options... options)
Op removes and returns the values associated with the key
from the underlying container.
|
MapUnstageNoKey |
mapUnstageNoKey(Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
MapUnstageNoKey.Options... options)
Op removes and returns a random (key, value)
from the underlying container.
|
<T extends TNumber> |
max(Operand<T> input,
Operand<? extends TNumber> axis,
Max.Options... options)
Computes the maximum of elements across dimensions of a tensor.
|
<T extends TType> |
merge(Iterable<Operand<T>> inputs)
Forwards the value of an available tensor from
inputs to output . |
<T extends TNumber> |
min(Operand<T> input,
Operand<? extends TNumber> axis,
Min.Options... options)
Computes the minimum of elements across dimensions of a tensor.
|
<T extends TType> |
mirrorPad(Operand<T> input,
Operand<? extends TNumber> paddings,
String mode)
Pads a tensor with mirrored values.
|
MlirPassthroughOp |
mlirPassthroughOp(Iterable<Operand<?>> inputs,
String mlirModule,
List<Class<? extends TType>> Toutputs)
Wraps an arbitrary MLIR computation expressed as a module with a main() function.
|
<T extends TType,U extends TType> |
mutableDenseHashTable(Operand<T> emptyKey,
Operand<T> deletedKey,
Class<U> valueDtype,
MutableDenseHashTable.Options... options)
Creates an empty hash table that uses tensors as the backing store.
|
<T extends TType,U extends TType> |
mutableHashTable(Class<T> keyDtype,
Class<U> valueDtype,
MutableHashTable.Options... options)
Creates an empty hash table.
|
<T extends TType,U extends TType> |
mutableHashTableOfTensors(Class<T> keyDtype,
Class<U> valueDtype,
MutableHashTableOfTensors.Options... options)
Creates an empty hash table.
|
Mutex |
mutex(Mutex.Options... options)
Creates a Mutex resource that can be locked by
MutexLock . |
MutexLock |
mutexLock(Operand<? extends TType> mutex)
Locks a mutex resource.
|
<T extends TType> |
nextIteration(Operand<T> data)
Makes its input available to the next iteration.
|
NoOp |
noOp()
Does nothing.
|
<U extends TType> |
oneHot(Operand<? extends TNumber> indices,
Operand<TInt32> depth,
Operand<U> onValue,
Operand<U> offValue,
OneHot.Options... options)
Returns a one-hot tensor.
|
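`oneHot` expands integer indices into one-hot rows of a chosen depth, with explicit on/off values. A sketch:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;
import org.tensorflow.types.TInt32;

public class OneHotExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TInt32> idx = tf.constant(new int[] {0, 2});
      // depth 3, onValue 1.0, offValue 0.0 -> rows [1,0,0] and [0,0,1]
      Operand<TFloat32> oneHot =
          tf.oneHot(idx, tf.constant(3), tf.constant(1f), tf.constant(0f));
    }
  }
}
```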
<T extends TType> |
ones(Operand<? extends TNumber> dims,
Class<T> type)
Creates a one valued tensor given its type and shape.
|
<T extends TType> |
onesLike(Operand<T> x)
Returns a tensor of ones with the same shape and type as x.
|
OrderedMapClear |
orderedMapClear(List<Class<? extends TType>> dtypes,
OrderedMapClear.Options... options)
Op removes all elements in the underlying container.
|
OrderedMapIncompleteSize |
orderedMapIncompleteSize(List<Class<? extends TType>> dtypes,
OrderedMapIncompleteSize.Options... options)
Op returns the number of incomplete elements in the underlying container.
|
OrderedMapPeek |
orderedMapPeek(Operand<TInt64> key,
Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
OrderedMapPeek.Options... options)
Op peeks at the values at the specified key.
|
OrderedMapSize |
orderedMapSize(List<Class<? extends TType>> dtypes,
OrderedMapSize.Options... options)
Op returns the number of elements in the underlying container.
|
OrderedMapStage |
orderedMapStage(Operand<TInt64> key,
Operand<TInt32> indices,
Iterable<Operand<?>> values,
List<Class<? extends TType>> dtypes,
OrderedMapStage.Options... options)
Stage (key, values) in the underlying container which behaves like an ordered
associative container.
|
OrderedMapUnstage |
orderedMapUnstage(Operand<TInt64> key,
Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
OrderedMapUnstage.Options... options)
Op removes and returns the values associated with the key
from the underlying container.
|
OrderedMapUnstageNoKey |
orderedMapUnstageNoKey(Operand<TInt32> indices,
List<Class<? extends TType>> dtypes,
OrderedMapUnstageNoKey.Options... options)
Op removes and returns the (key, value) element with the smallest
key from the underlying container.
|
<T extends TType> |
pad(Operand<T> input,
Operand<? extends TNumber> paddings,
Operand<T> constantValues)
Pads a tensor.
|
<T extends TType> |
parallelConcat(Iterable<Operand<T>> values,
org.tensorflow.ndarray.Shape shape)
Concatenates a list of
N tensors along the first dimension. |
<T extends TType> |
parallelDynamicStitch(Iterable<Operand<TInt32>> indices,
Iterable<Operand<T>> data)
Interleave the values from the
data tensors into a single tensor. |
PartitionedCall |
partitionedCall(Iterable<Operand<?>> args,
List<Class<? extends TType>> Tout,
ConcreteFunction f,
PartitionedCall.Options... options)
Returns
f(inputs) , where f 's body is placed and partitioned. |
<T extends TType> |
placeholder(Class<T> dtype,
Placeholder.Options... options)
A placeholder op for a value that will be fed into the computation.
|
<T extends TType> |
placeholderWithDefault(Operand<T> input,
org.tensorflow.ndarray.Shape shape)
A placeholder op that passes through
input when its output is not fed. |
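A placeholder is fed a concrete tensor at run time via `Session.Runner`. A sketch of the feed/fetch round trip (graph mode; method names follow the TensorFlow Java `Session` API, and the exact `run()` return type varies slightly between releases):

```java
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Session;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Placeholder;
import org.tensorflow.types.TInt32;

public class PlaceholderExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Placeholder<TInt32> p = tf.placeholder(TInt32.class);
      Operand<TInt32> doubled = tf.math.mul(p, tf.constant(2));
      try (Session s = new Session(g); TInt32 in = TInt32.scalarOf(21)) {
        // Feed 21 into the placeholder and fetch 2 * 21
        TInt32 out = (TInt32) s.runner().feed(p, in).fetch(doubled).run().get(0);
        System.out.println(out.getInt()); // out holds 42
      }
    }
  }
}
```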
Print |
print(Operand<TString> input,
Print.Options... options)
Prints a string scalar.
|
<T extends TType> |
prod(Operand<T> input,
Operand<? extends TNumber> axis,
Prod.Options... options)
Computes the product of elements across dimensions of a tensor.
|
<T extends TType> |
quantizedReshape(Operand<T> tensor,
Operand<? extends TNumber> shape,
Operand<TFloat32> inputMin,
Operand<TFloat32> inputMax)
Reshapes a quantized tensor as per the Reshape op.
|
<T extends TNumber> |
range(Operand<T> start,
Operand<T> limit,
Operand<T> delta)
Creates a sequence of numbers.
|
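`range` generates an arithmetic sequence from `start` (inclusive) to `limit` (exclusive) in steps of `delta`. A sketch:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TInt32;

public class RangeExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      // Elements: 0, 2, 4, 6, 8 (limit 10 is exclusive)
      Operand<TInt32> r = tf.range(tf.constant(0), tf.constant(10), tf.constant(2));
    }
  }
}
```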
Rank |
rank(Operand<? extends TType> input)
Returns the rank of a tensor.
|
<T extends TType> |
readVariableOp(Operand<? extends TType> resource,
Class<T> dtype)
Reads the value of a variable.
|
ReduceAll |
reduceAll(Operand<TBool> input,
Operand<? extends TNumber> axis,
ReduceAll.Options... options)
Computes the "logical and" of elements across dimensions of a tensor.
|
ReduceAny |
reduceAny(Operand<TBool> input,
Operand<? extends TNumber> axis,
ReduceAny.Options... options)
Computes the "logical or" of elements across dimensions of a tensor.
|
<T extends TNumber> |
reduceMax(Operand<T> input,
Operand<? extends TNumber> axis,
ReduceMax.Options... options)
Computes the maximum of elements across dimensions of a tensor.
|
<T extends TNumber> |
reduceMin(Operand<T> input,
Operand<? extends TNumber> axis,
ReduceMin.Options... options)
Computes the minimum of elements across dimensions of a tensor.
|
<T extends TType> |
reduceProd(Operand<T> input,
Operand<? extends TNumber> axis,
ReduceProd.Options... options)
Computes the product of elements across dimensions of a tensor.
|
<T extends TType> |
reduceSum(Operand<T> input,
Operand<? extends TNumber> axis,
ReduceSum.Options... options)
Computes the sum of elements across dimensions of a tensor.
|
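The `reduce*` methods collapse one or more axes of a tensor; passing several axes reduces over all of them. A sketch with `reduceSum`:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TInt32;

public class ReduceSumExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TInt32> m = tf.constant(new int[][] {{1, 2}, {3, 4}});
      // Sum down axis 0 (columns): [4, 6]
      Operand<TInt32> colSums = tf.reduceSum(m, tf.constant(0));
      // Sum over both axes: scalar 10
      Operand<TInt32> total = tf.reduceSum(m, tf.constant(new int[] {0, 1}));
    }
  }
}
```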
<T extends TType> |
refNextIteration(Operand<T> data)
Makes its input available to the next iteration.
|
<T extends TType> |
refSelect(Operand<TInt32> index,
Iterable<Operand<T>> inputs)
Forwards the
index th element of inputs to output . |
<T extends TType> |
refSwitch(Operand<T> data,
Operand<TBool> pred)
Forwards the ref tensor
data to the output port determined by pred . |
RemoteCall |
remoteCall(Operand<TString> target,
Iterable<Operand<?>> args,
List<Class<? extends TType>> Tout,
ConcreteFunction f)
Runs function
f on a remote device indicated by target . |
<T extends TType> |
reshape(Operand<T> tensor,
Operand<? extends TNumber> shape)
Reshapes a tensor.
|
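`reshape` reinterprets a tensor's elements under a new shape with the same total element count. A sketch:

```java
import org.tensorflow.EagerSession;
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TInt32;

public class ReshapeExample {
  public static void main(String[] args) {
    try (EagerSession session = EagerSession.create()) {
      Ops tf = Ops.create(session);
      Operand<TInt32> flat = tf.constant(new int[] {1, 2, 3, 4, 5, 6});
      // 6 elements rearranged into shape [2, 3]
      Operand<TInt32> grid = tf.reshape(flat, tf.constant(new int[] {2, 3}));
    }
  }
}
```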
<T extends TNumber> |
resourceCountUpTo(Operand<? extends TType> resource,
Long limit,
Class<T> T)
Increments variable pointed to by 'resource' until it reaches 'limit'.
|
<U extends TType> |
resourceGather(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Class<U> dtype,
ResourceGather.Options... options)
Gather slices from the variable pointed to by resource according to indices. |
<U extends TType> |
resourceGatherNd(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Class<U> dtype)
The ResourceGatherNd operation
|
ResourceScatterAdd |
resourceScatterAdd(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Adds sparse updates to the variable referenced by resource. |
ResourceScatterDiv |
resourceScatterDiv(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Divides sparse updates into the variable referenced by resource. |
ResourceScatterMax |
resourceScatterMax(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Reduces sparse updates into the variable referenced by resource using the max operation. |
ResourceScatterMin |
resourceScatterMin(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Reduces sparse updates into the variable referenced by resource using the min operation. |
ResourceScatterMul |
resourceScatterMul(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Multiplies sparse updates into the variable referenced by resource. |
ResourceScatterNdAdd |
resourceScatterNdAdd(Operand<? extends TType> ref,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates,
ResourceScatterNdAdd.Options... options)
Applies sparse addition to individual values or slices in a Variable.
|
ResourceScatterNdMax |
resourceScatterNdMax(Operand<? extends TType> ref,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates,
ResourceScatterNdMax.Options... options)
The ResourceScatterNdMax operation
|
ResourceScatterNdMin |
resourceScatterNdMin(Operand<? extends TType> ref,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates,
ResourceScatterNdMin.Options... options)
The ResourceScatterNdMin operation
|
ResourceScatterNdSub |
resourceScatterNdSub(Operand<? extends TType> ref,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates,
ResourceScatterNdSub.Options... options)
Applies sparse subtraction to individual values or slices in a Variable.
|
ResourceScatterNdUpdate |
resourceScatterNdUpdate(Operand<? extends TType> ref,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates,
ResourceScatterNdUpdate.Options... options)
Applies sparse updates to individual values or slices within a given variable according to indices. |
ResourceScatterSub |
resourceScatterSub(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Subtracts sparse updates from the variable referenced by resource. |
ResourceScatterUpdate |
resourceScatterUpdate(Operand<? extends TType> resource,
Operand<? extends TNumber> indices,
Operand<? extends TType> updates)
Assigns sparse updates to the variable referenced by resource. |
<T extends TNumber> |
resourceStridedSliceAssign(Operand<? extends TType> ref,
Operand<T> begin,
Operand<T> end,
Operand<T> strides,
Operand<? extends TType> value,
ResourceStridedSliceAssign.Options... options)
Assigns value to the sliced l-value reference of ref. |
<T extends TType> |
reverse(Operand<T> tensor,
Operand<? extends TNumber> axis)
Reverses specific dimensions of a tensor.
|
<T extends TType> |
reverseSequence(Operand<T> input,
Operand<? extends TNumber> seqLengths,
Long seqDim,
ReverseSequence.Options... options)
Reverses variable length slices.
|
<T extends TType> |
roll(Operand<T> input,
Operand<? extends TNumber> shift,
Operand<? extends TNumber> axis)
Rolls the elements of a tensor along an axis.
|
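roll shifts elements along an axis, wrapping those that slide past the end back to the other side; a negative shift rolls backward. A plain-Java sketch of the 1-D case (illustrative names only):

```java
// Plain-Java sketch of roll's semantics on a 1-D array: element i moves to
// position (i + shift) mod n, with negative shifts rolling backward.
public class RollSketch {
    public static int[] roll(int[] input, int shift) {
        int n = input.length;
        int[] out = new int[n];
        for (int i = 0; i < n; i++) out[((i + shift) % n + n) % n] = input[i];
        return out;
    }
}
```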
<T extends TType> |
scatterAdd(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterAdd.Options... options)
Adds sparse updates to a variable reference.
|
<T extends TType> |
scatterDiv(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterDiv.Options... options)
Divides a variable reference by sparse updates.
|
<T extends TNumber> |
scatterMax(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterMax.Options... options)
Reduces sparse updates into a variable reference using the max operation. |
<T extends TNumber> |
scatterMin(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterMin.Options... options)
Reduces sparse updates into a variable reference using the min operation. |
<T extends TType> |
scatterMul(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterMul.Options... options)
Multiplies sparse updates into a variable reference.
|
<U extends TType,T extends TNumber> |
scatterNd(Operand<T> indices,
Operand<U> updates,
Operand<T> shape)
Scatters updates into a tensor of shape shape according to indices. |
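scatterNd starts from a zero-initialized tensor of the requested shape and places each update at its index; updates that target the same index are summed. A plain-Java sketch of the 1-D case (illustrative names, not the TensorFlow API):

```java
// Plain-Java sketch of scatterNd's semantics for a 1-D output:
// start from zeros, add each update at its index (duplicates accumulate).
public class ScatterNdSketch {
    public static int[] scatterNd(int[] indices, int[] updates, int length) {
        int[] out = new int[length];
        for (int i = 0; i < indices.length; i++) out[indices[i]] += updates[i];
        return out;
    }
}
```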
<T extends TType> |
scatterNdAdd(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterNdAdd.Options... options)
Applies sparse addition to individual values or slices in a Variable.
|
<T extends TType> |
scatterNdNonAliasingAdd(Operand<T> input,
Operand<? extends TNumber> indices,
Operand<T> updates)
Applies sparse addition to input using individual values or slices from updates according to indices. |
<T extends TType> |
scatterNdSub(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterNdSub.Options... options)
Applies sparse subtraction to individual values or slices in a Variable.
|
<T extends TType> |
scatterNdUpdate(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterNdUpdate.Options... options)
Applies sparse updates to individual values or slices within a given variable according to indices. |
<T extends TType> |
scatterSub(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterSub.Options... options)
Subtracts sparse updates from a variable reference.
|
<T extends TType> |
scatterUpdate(Operand<T> ref,
Operand<? extends TNumber> indices,
Operand<T> updates,
ScatterUpdate.Options... options)
Applies sparse updates to a variable reference.
|
Scope |
scope()
Returns the current scope of this API. |
<T extends TType> |
select(Operand<TBool> condition,
Operand<T> t,
Operand<T> e)
The SelectV2 operation
|
<T extends TType> |
setDiff1d(Operand<T> x,
Operand<T> y)
Computes the difference between two lists of numbers or strings.
|
<T extends TType,U extends TNumber> |
setDiff1d(Operand<T> x,
Operand<T> y,
Class<U> outIdx)
Computes the difference between two lists of numbers or strings.
|
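setDiff1d keeps the elements of x that do not appear in y, in x's order, and also reports where each kept element sat in x. A plain-Java sketch returning both outputs as a pair of arrays (illustrative names only):

```java
// Plain-Java sketch of setDiff1d's semantics: out[0] holds the values of x
// not present in y (in x's order), out[1] holds their indices in x.
import java.util.*;

public class SetDiffSketch {
    public static int[][] setDiff1d(int[] x, int[] y) {
        Set<Integer> exclude = new HashSet<>();
        for (int v : y) exclude.add(v);
        List<Integer> vals = new ArrayList<>(), idx = new ArrayList<>();
        for (int i = 0; i < x.length; i++)
            if (!exclude.contains(x[i])) { vals.add(x[i]); idx.add(i); }
        int[][] out = new int[2][vals.size()];
        for (int i = 0; i < vals.size(); i++) {
            out[0][i] = vals.get(i);
            out[1][i] = idx.get(i);
        }
        return out;
    }
}
```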
SetSize |
setSize(Operand<TInt64> setIndices,
Operand<? extends TType> setValues,
Operand<TInt64> setShape,
SetSize.Options... options)
Number of unique elements along the last dimension of input set. |
Shape<TInt32> |
shape(Operand<? extends TType> input)
Returns the shape of a tensor.
|
<U extends TNumber> |
shape(Operand<? extends TType> input,
Class<U> outType)
Returns the shape of a tensor.
|
ShapeN<TInt32> |
shapeN(Iterable<Operand<? extends TType>> input)
Returns shape of tensors.
|
<U extends TNumber> |
shapeN(Iterable<Operand<? extends TType>> input,
Class<U> outType)
Returns shape of tensors.
|
Size<TInt32> |
size(Operand<? extends TType> input)
Returns the size of a tensor.
|
<U extends TNumber> |
size(Operand<? extends TType> input,
Class<U> outType)
Returns the size of a tensor.
|
Skipgram |
skipgram(String filename,
Long batchSize,
Skipgram.Options... options)
Parses a text file and creates a batch of examples.
|
<T extends TType,U extends TNumber> |
slice(Operand<T> input,
Operand<U> begin,
Operand<U> sizeOutput)
Return a slice from 'input'.
|
<T extends TType> |
snapshot(Operand<T> input)
Returns a copy of the input tensor.
|
<T extends TType> |
spaceToBatchNd(Operand<T> input,
Operand<? extends TNumber> blockShape,
Operand<? extends TNumber> paddings)
SpaceToBatch for N-D tensors of type T.
|
<T extends TType> |
split(Operand<TInt32> axis,
Operand<T> value,
Long numSplit)
Splits a tensor into num_split tensors along one dimension. |
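split divides a tensor evenly into num_split pieces along one dimension, so the size along that dimension must be divisible by num_split. A plain-Java sketch of the 1-D case (illustrative names only):

```java
// Plain-Java sketch of split's semantics on a 1-D array:
// cut the input into numSplit equal contiguous pieces.
public class SplitSketch {
    public static int[][] split(int[] value, int numSplit) {
        if (value.length % numSplit != 0)
            throw new IllegalArgumentException("length must divide evenly by numSplit");
        int size = value.length / numSplit;
        int[][] out = new int[numSplit][size];
        for (int i = 0; i < value.length; i++) out[i / size][i % size] = value[i];
        return out;
    }
}
```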
<T extends TType> |
splitV(Operand<T> value,
Operand<? extends TNumber> sizeSplits,
Operand<TInt32> axis,
Long numSplit)
Splits a tensor into num_split tensors along one dimension. |
<T extends TType> |
squeeze(Operand<T> input,
Squeeze.Options... options)
Removes dimensions of size 1 from the shape of a tensor.
|
<T extends TType> |
stack(Iterable<Operand<T>> values,
Stack.Options... options)
Packs a list of N rank-R tensors into one rank-(R+1) tensor. |
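stack packs N same-shaped inputs into one tensor with a new leading (or chosen) axis. A plain-Java sketch for N rank-1 inputs becoming one rank-2 output, with the axis option mimicked (illustrative names only):

```java
// Plain-Java sketch of stack's semantics for rank-1 inputs: axis 0 makes each
// input a row of the result; axis 1 makes each input a column.
public class StackSketch {
    public static int[][] stack(int[][] values, int axis) {
        int n = values.length, len = values[0].length;
        if (axis == 0) {
            int[][] out = new int[n][len];
            for (int i = 0; i < n; i++) out[i] = values[i].clone();
            return out;
        }
        int[][] out = new int[len][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < len; j++) out[j][i] = values[i][j];
        return out;
    }
}
```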
Stage |
stage(Iterable<Operand<?>> values,
Stage.Options... options)
Stage values similar to a lightweight Enqueue.
|
StageClear |
stageClear(List<Class<? extends TType>> dtypes,
StageClear.Options... options)
Op removes all elements in the underlying container.
|
StagePeek |
stagePeek(Operand<TInt32> index,
List<Class<? extends TType>> dtypes,
StagePeek.Options... options)
Op peeks at the values at the specified index.
|
StageSize |
stageSize(List<Class<? extends TType>> dtypes,
StageSize.Options... options)
Op returns the number of elements in the underlying container.
|
StatefulCase |
statefulCase(Operand<TInt32> branchIndex,
Iterable<Operand<?>> input,
List<Class<? extends TType>> Tout,
List<ConcreteFunction> branches,
Case.Options... options)
An n-way switch statement which calls a single branch function.
|
StatefulIf |
statefulIf(Operand<? extends TType> cond,
Iterable<Operand<?>> input,
List<Class<? extends TType>> Tout,
ConcreteFunction thenBranch,
ConcreteFunction elseBranch,
If.Options... options)
output = cond ? then_branch(input) : else_branch(input)
|
StatefulPartitionedCall |
statefulPartitionedCall(Iterable<Operand<?>> args,
List<Class<? extends TType>> Tout,
ConcreteFunction f,
PartitionedCall.Options... options)
Returns f(inputs), where f's body is placed and partitioned. |
StatefulWhile |
statefulWhile(Iterable<Operand<?>> input,
ConcreteFunction cond,
ConcreteFunction body,
While.Options... options)
output = input; While (Cond(output)) { output = Body(output) }
|
StatelessIf |
statelessIf(Operand<? extends TType> cond,
Iterable<Operand<?>> input,
List<Class<? extends TType>> Tout,
ConcreteFunction thenBranch,
ConcreteFunction elseBranch,
If.Options... options)
output = cond ? then_branch(input) : else_branch(input)
|
StatelessPartitionedCall |
statelessPartitionedCall(Iterable<Operand<?>> args,
List<Class<? extends TType>> Tout,
ConcreteFunction f,
PartitionedCall.Options... options)
Returns f(inputs), where f's body is placed and partitioned. |
StatelessWhile |
statelessWhile(Iterable<Operand<?>> input,
ConcreteFunction cond,
ConcreteFunction body,
While.Options... options)
output = input; While (Cond(output)) { output = Body(output) }
|
<T extends TType> |
stopGradient(Operand<T> input)
Stops gradient computation.
|
<T extends TType> |
stridedSlice(Operand<T> input,
org.tensorflow.ndarray.index.Index... indices)
Return a strided slice from `input`.
|
<T extends TType,U extends TNumber> |
stridedSlice(Operand<T> input,
Operand<U> begin,
Operand<U> end,
Operand<U> strides,
StridedSlice.Options... options)
Returns a strided slice from input. |
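The begin/end/strides form takes elements from begin (inclusive) to end (exclusive), stepping by the stride, independently per dimension. A plain-Java sketch of the 1-D case with a positive stride (illustrative names only):

```java
// Plain-Java sketch of stridedSlice's 1-D semantics: take input[begin],
// input[begin + stride], ... while the index stays below end.
public class StridedSliceSketch {
    public static int[] stridedSlice(int[] input, int begin, int end, int stride) {
        int count = Math.max(0, (end - begin + stride - 1) / stride);
        int[] out = new int[count];
        for (int i = 0; i < count; i++) out[i] = input[begin + i * stride];
        return out;
    }
}
```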
<T extends TType> |
stridedSliceAssign(Operand<T> ref,
Operand<T> value,
org.tensorflow.ndarray.index.Index... indices)
Assign `value` to the sliced l-value reference of `ref`.
|
<T extends TType,U extends TNumber> |
stridedSliceAssign(Operand<T> ref,
Operand<U> begin,
Operand<U> end,
Operand<U> strides,
Operand<T> value,
StridedSliceAssign.Options... options)
Assigns value to the sliced l-value reference of ref. |
<U extends TType,T extends TNumber> |
stridedSliceGrad(Operand<T> shape,
Operand<T> begin,
Operand<T> end,
Operand<T> strides,
Operand<U> dy,
StridedSliceGrad.Options... options)
Returns the gradient of StridedSlice. |
<T extends TType> |
sum(Operand<T> input,
Operand<? extends TNumber> axis,
Sum.Options... options)
Computes the sum of elements across dimensions of a tensor.
|
<T extends TType> |
switchCond(Operand<T> data,
Operand<TBool> pred)
Forwards data to the output port determined by pred. |
<T extends TType> |
temporaryVariable(org.tensorflow.ndarray.Shape shape,
Class<T> dtype,
TemporaryVariable.Options... options)
Returns a tensor that may be mutated, but only persists within a single step.
|
<T extends TType> |
tensorArray(Operand<TInt32> sizeOutput,
Class<T> dtype,
TensorArray.Options... options)
An array of Tensors of given size.
|
TensorArrayClose |
tensorArrayClose(Operand<? extends TType> handle)
Delete the TensorArray from its resource container.
|
<T extends TType> |
tensorArrayConcat(Operand<? extends TType> handle,
Operand<TFloat32> flowIn,
Class<T> dtype,
TensorArrayConcat.Options... options)
Concats the elements from the TensorArray into output value. |
<T extends TType> |
tensorArrayGather(Operand<? extends TType> handle,
Operand<TInt32> indices,
Operand<TFloat32> flowIn,
Class<T> dtype,
TensorArrayGather.Options... options)
Gather specific elements from the TensorArray into output value. |
TensorArrayGrad |
tensorArrayGrad(Operand<? extends TType> handle,
Operand<TFloat32> flowIn,
String source)
Creates a TensorArray for storing the gradients of values in the given handle.
|
TensorArrayGradWithShape |
tensorArrayGradWithShape(Operand<? extends TType> handle,
Operand<TFloat32> flowIn,
Operand<TInt32> shapeToPrepend,
String source)
Creates a TensorArray for storing multiple gradients of values in the given handle.
|
<T extends TType> |
tensorArrayPack(Operand<TString> handle,
Operand<TFloat32> flowIn,
Class<T> dtype,
TensorArrayPack.Options... options)
The TensorArrayPack operation
|
<T extends TType> |
tensorArrayRead(Operand<? extends TType> handle,
Operand<TInt32> index,
Operand<TFloat32> flowIn,
Class<T> dtype)
Read an element from the TensorArray into output value. |
TensorArrayScatter |
tensorArrayScatter(Operand<? extends TType> handle,
Operand<TInt32> indices,
Operand<? extends TType> value,
Operand<TFloat32> flowIn)
Scatter the data from the input value into specific TensorArray elements.
|
TensorArraySize |
tensorArraySize(Operand<? extends TType> handle,
Operand<TFloat32> flowIn)
Get the current size of the TensorArray.
|
TensorArraySplit |
tensorArraySplit(Operand<? extends TType> handle,
Operand<? extends TType> value,
Operand<TInt64> lengths,
Operand<TFloat32> flowIn)
Split the data from the input value into TensorArray elements.
|
TensorArrayUnpack |
tensorArrayUnpack(Operand<TString> handle,
Operand<? extends TType> value,
Operand<TFloat32> flowIn)
The TensorArrayUnpack operation
|
TensorArrayWrite |
tensorArrayWrite(Operand<? extends TType> handle,
Operand<TInt32> index,
Operand<? extends TType> value,
Operand<TFloat32> flowIn)
Push an element onto the tensor_array.
|
<U extends TType> |
tensorListConcat(Operand<? extends TType> inputHandle,
Operand<? extends TNumber> elementShape,
Operand<TInt64> leadingDims,
Class<U> elementDtype)
Concats all tensors in the list along the 0th dimension.
|
<T extends TType> |
tensorListConcatLists(Operand<? extends TType> inputA,
Operand<? extends TType> inputB,
Class<T> elementDtype)
The TensorListConcatLists operation
|
<T extends TNumber> |
tensorListElementShape(Operand<? extends TType> inputHandle,
Class<T> shapeType)
The shape of the elements of the given list, as a tensor.
|
TensorListFromTensor |
tensorListFromTensor(Operand<? extends TType> tensor,
Operand<? extends TNumber> elementShape)
Creates a TensorList which, when stacked, has the value of tensor. |
<T extends TType> |
tensorListGather(Operand<? extends TType> inputHandle,
Operand<TInt32> indices,
Operand<TInt32> elementShape,
Class<T> elementDtype)
Creates a Tensor by indexing into the TensorList.
|
<T extends TType> |
tensorListGetItem(Operand<? extends TType> inputHandle,
Operand<TInt32> index,
Operand<TInt32> elementShape,
Class<T> elementDtype)
The TensorListGetItem operation
|
TensorListLength |
tensorListLength(Operand<? extends TType> inputHandle)
Returns the number of tensors in the input tensor list.
|
<T extends TType> |
tensorListPopBack(Operand<? extends TType> inputHandle,
Operand<TInt32> elementShape,
Class<T> elementDtype)
Returns the last element of the input list as well as a list with all but that element.
|
TensorListPushBack |
tensorListPushBack(Operand<? extends TType> inputHandle,
Operand<? extends TType> tensor)
Returns a list which has the passed-in Tensor as last element and the other elements of the given list in input_handle. |
TensorListPushBackBatch |
tensorListPushBackBatch(Operand<? extends TType> inputHandles,
Operand<? extends TType> tensor)
The TensorListPushBackBatch operation
|
<U extends TType> |
tensorListReserve(Operand<? extends TNumber> elementShape,
Operand<TInt32> numElements,
Class<U> elementDtype)
List of the given size with empty elements.
|
TensorListResize |
tensorListResize(Operand<? extends TType> inputHandle,
Operand<TInt32> sizeOutput)
Resizes the list.
|
TensorListScatter |
tensorListScatter(Operand<? extends TType> tensor,
Operand<TInt32> indices,
Operand<? extends TNumber> elementShape,
Operand<TInt32> numElements)
Creates a TensorList by indexing into a Tensor.
|
TensorListScatterIntoExistingList |
tensorListScatterIntoExistingList(Operand<? extends TType> inputHandle,
Operand<? extends TType> tensor,
Operand<TInt32> indices)
Scatters tensor at indices in an input list.
|
TensorListSetItem |
tensorListSetItem(Operand<? extends TType> inputHandle,
Operand<TInt32> index,
Operand<? extends TType> item)
The TensorListSetItem operation
|
TensorListSplit |
tensorListSplit(Operand<? extends TType> tensor,
Operand<? extends TNumber> elementShape,
Operand<TInt64> lengths)
Splits a tensor into a list.
|
<T extends TType> |
tensorListStack(Operand<? extends TType> inputHandle,
Operand<TInt32> elementShape,
Class<T> elementDtype,
TensorListStack.Options... options)
Stacks all tensors in the list.
|
<U extends TType> |
tensorMapErase(Operand<? extends TType> inputHandle,
Operand<? extends TType> key,
Class<U> valueDtype)
Returns a tensor map with item from given key erased.
|
TensorMapHasKey |
tensorMapHasKey(Operand<? extends TType> inputHandle,
Operand<? extends TType> key)
Returns whether the given key exists in the map.
|
TensorMapInsert |
tensorMapInsert(Operand<? extends TType> inputHandle,
Operand<? extends TType> key,
Operand<? extends TType> value)
Returns a map that is the 'input_handle' with the given key-value pair inserted.
|
<U extends TType> |
tensorMapLookup(Operand<? extends TType> inputHandle,
Operand<? extends TType> key,
Class<U> valueDtype)
Returns the value from a given key in a tensor map.
|
TensorMapSize |
tensorMapSize(Operand<? extends TType> inputHandle)
Returns the number of tensors in the input tensor map.
|
<T extends TType> |
tensorMapStackKeys(Operand<? extends TType> inputHandle,
Class<T> keyDtype)
Returns a Tensor stack of all keys in a tensor map.
|
<T extends TType> |
tensorScatterNdAdd(Operand<T> tensor,
Operand<? extends TNumber> indices,
Operand<T> updates)
Adds sparse updates to an existing tensor according to indices. |
<T extends TType> |
tensorScatterNdMax(Operand<T> tensor,
Operand<? extends TNumber> indices,
Operand<T> updates)
The TensorScatterMax operation
|
<T extends TType> |
tensorScatterNdMin(Operand<T> tensor,
Operand<? extends TNumber> indices,
Operand<T> updates)
The TensorScatterMin operation
|
<T extends TType> |
tensorScatterNdSub(Operand<T> tensor,
Operand<? extends TNumber> indices,
Operand<T> updates)
Subtracts sparse updates from an existing tensor according to indices. |
<T extends TType> |
tensorScatterNdUpdate(Operand<T> tensor,
Operand<? extends TNumber> indices,
Operand<T> updates)
Scatters updates into an existing tensor according to indices. |
<T extends TType,U extends TNumber> |
tensorStridedSliceUpdate(Operand<T> input,
Operand<U> begin,
Operand<U> end,
Operand<U> strides,
Operand<T> value,
TensorStridedSliceUpdate.Options... options)
Assigns value to the sliced l-value reference of input. |
<T extends TType> |
tile(Operand<T> input,
Operand<? extends TNumber> multiples)
Constructs a tensor by tiling a given tensor.
|
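tile repeats the input along each dimension the given number of times. A plain-Java sketch of the 1-D case, repeating the whole input multiples times end-to-end (illustrative names only):

```java
// Plain-Java sketch of tile's 1-D semantics: the output of length
// input.length * multiples cycles through the input repeatedly.
public class TileSketch {
    public static int[] tile(int[] input, int multiples) {
        int[] out = new int[input.length * multiples];
        for (int i = 0; i < out.length; i++) out[i] = input[i % input.length];
        return out;
    }
}
```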
Timestamp |
timestamp()
Provides the time since epoch in seconds.
|
TopKUnique |
topKUnique(Operand<TFloat32> input,
Long k)
Returns the TopK unique values in the array in sorted order.
|
TopKWithUnique |
topKWithUnique(Operand<TFloat32> input,
Long k)
Returns the TopK values in the array in sorted order.
|
<T extends TType> |
unbatch(Operand<T> batchedTensor,
Operand<TInt64> batchIndex,
Operand<TInt64> id,
Long timeoutMicros,
Unbatch.Options... options)
Reverses the operation of Batch for a single output Tensor.
|
<T extends TType> |
unbatchGrad(Operand<T> originalInput,
Operand<TInt64> batchIndex,
Operand<T> grad,
Operand<TInt64> id,
UnbatchGrad.Options... options)
Gradient of Unbatch.
|
<T extends TType> |
unique(Operand<T> x,
Operand<? extends TNumber> axis)
Finds unique elements along an axis of a tensor.
|
<T extends TType,V extends TNumber> |
unique(Operand<T> x,
Operand<? extends TNumber> axis,
Class<V> outIdx)
Finds unique elements along an axis of a tensor.
|
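unique produces the distinct values of x in first-seen order (the y output) plus, for every element of x, the position of its value in y (the idx output). A plain-Java sketch of the 1-D case returning both arrays (illustrative names only):

```java
// Plain-Java sketch of unique's semantics: out[0] is the distinct values in
// first-seen order, out[1] maps each input element to its index in out[0].
import java.util.*;

public class UniqueSketch {
    public static int[][] unique(int[] x) {
        Map<Integer, Integer> seen = new LinkedHashMap<>();
        int[] idx = new int[x.length];
        for (int i = 0; i < x.length; i++) {
            seen.putIfAbsent(x[i], seen.size());
            idx[i] = seen.get(x[i]);
        }
        int[] y = new int[seen.size()];
        for (Map.Entry<Integer, Integer> e : seen.entrySet()) y[e.getValue()] = e.getKey();
        return new int[][]{y, idx};
    }
}
```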
<T extends TType> |
uniqueWithCounts(Operand<T> x,
Operand<? extends TNumber> axis)
Finds unique elements along an axis of a tensor.
|
<T extends TType,V extends TNumber> |
uniqueWithCounts(Operand<T> x,
Operand<? extends TNumber> axis,
Class<V> outIdx)
Finds unique elements along an axis of a tensor.
|
<T extends TNumber> |
unravelIndex(Operand<T> indices,
Operand<T> dims)
Converts an array of flat indices into a tuple of coordinate arrays.
|
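unravelIndex maps a flat row-major index back to per-dimension coordinates for a given shape. A plain-Java sketch of the single-index case (illustrative names only):

```java
// Plain-Java sketch of unravelIndex's semantics: peel off each dimension's
// coordinate from the flat index, innermost dimension first.
public class UnravelIndexSketch {
    public static int[] unravelIndex(int flat, int[] dims) {
        int[] coords = new int[dims.length];
        for (int d = dims.length - 1; d >= 0; d--) {
            coords[d] = flat % dims[d];
            flat /= dims[d];
        }
        return coords;
    }
}
```

For example, flat index 5 in a 3x4 array lands at row 1, column 1.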
<T extends TType> |
unstack(Operand<T> value,
Long num,
Unstack.Options... options)
Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. |
Unstage |
unstage(List<Class<? extends TType>> dtypes,
Unstage.Options... options)
Op is similar to a lightweight Dequeue.
|
<T extends TType> |
varHandleOp(Class<T> dtype,
org.tensorflow.ndarray.Shape shape,
VarHandleOp.Options... options)
Creates a handle to a Variable resource.
|
<T extends TType> |
variable(Operand<T> init,
Variable.Options... options)
Factory method to create a new Variable with its initializer.
|
<T extends TType> |
variable(org.tensorflow.ndarray.Shape shape,
Class<T> dtype,
Variable.Options... options)
Holds state in the form of a tensor that persists across steps.
|
VariableShape<TInt32> |
variableShape(Operand<? extends TType> input)
Returns the shape of the variable pointed to by resource. |
<T extends TNumber> |
variableShape(Operand<? extends TType> input,
Class<T> outType)
Returns the shape of the variable pointed to by resource. |
VarIsInitializedOp |
varIsInitializedOp(Operand<? extends TType> resource)
Checks whether a resource handle-based variable has been initialized.
|
Where |
where(Operand<? extends TType> condition)
Returns locations of nonzero / true values in a tensor.
|
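where reports the coordinates of every true (or nonzero) entry. For a 1-D condition the op yields one row per true entry; the sketch below flattens that to a plain index list (illustrative names only, not the TensorFlow API):

```java
// Plain-Java sketch of where's semantics for a 1-D boolean condition:
// collect the indices of the true entries, in order.
public class WhereSketch {
    public static int[] where(boolean[] condition) {
        int count = 0;
        for (boolean b : condition) if (b) count++;
        int[] out = new int[count];
        int k = 0;
        for (int i = 0; i < condition.length; i++) if (condition[i]) out[k++] = i;
        return out;
    }
}
```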
While |
whileOp(Iterable<Operand<?>> input,
ConcreteFunction cond,
ConcreteFunction body,
While.Options... options)
output = input; While (Cond(output)) { output = Body(output) }
|
Ops |
withControlDependencies(Iterable<Op> controls)
Returns an API that adds operations to the graph with the provided control dependencies.
|
Ops |
withControlDependencies(Op... controls)
Returns an API that adds operations to the graph with the provided control dependencies.
|
Ops |
withControlDependencyOps(Iterable<Operation> controls)
Returns an API that adds operations to the graph with the provided control dependencies.
|
Ops |
withControlDependencyOps(Operation... controls)
Returns an API that adds operations to the graph with the provided control dependencies.
|
Ops |
withDevice(DeviceSpec deviceSpec)
Returns an API that places the created operations on the device(s) matching the provided spec.
|
Ops |
withInitScope()
Returns an API that builds init operations.
|
Ops |
withName(String opName)
Returns an API that uses the provided name for an op.
|
Ops |
withSubScope(String childScopeName)
Returns an API that builds operations with the provided name prefix.
|
<T extends TType> |
zeros(Operand<? extends TNumber> dims,
Class<T> type)
Creates a zeroed tensor given its type and shape.
|
<T extends TType> |
zerosLike(Operand<T> x)
Returns a tensor of zeros with the same shape and type as x.
|
public final NnOps nn
public final SummaryOps summary
public final ImageOps image
public final RaggedOps ragged
public final DataOps data
public final ShapeOps shape
public final IoOps io
public final DtypesOps dtypes
public final XlaOps xla
public final LinalgOps linalg
public final RandomOps random
public final StringsOps strings
public final SparseOps sparse
public final BitwiseOps bitwise
public final TpuOps tpu
public final AudioOps audio
public final MathOps math
public final SignalOps signal
public final TrainOps train
public final QuantizationOps quantization
public Abort abort(Abort.Options... options)
Raises an exception to abort the process when called. Returns nothing but an exception.
options - carries optional attribute values

public All all(Operand<TBool> input, Operand<? extends TNumber> axis, All.Options... options)
Computes the "logical and" of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public Any any(Operand<TBool> input, Operand<? extends TNumber> axis, Any.Options... options)
Computes the "logical or" of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public Constant<TString> array(String... data)
Creates a constant of String elements, using the default UTF-8 charset.
data - An array containing the values to put into the new constant.
Returns a String constant.

public Constant<TInt32> array(int... data)
Creates a constant of int elements.
data - An array containing the values to put into the new constant.

public Constant<TFloat64> array(double... data)
Creates a constant of double elements.
data - An array containing the values to put into the new constant.

public Constant<TInt64> array(long... data)
Creates a constant of long elements.
data - An array containing the values to put into the new constant.

public Constant<TUint8> array(byte... data)
Creates a constant of byte elements.
data - An array containing the values to put into the new constant.

public Constant<TBool> array(boolean... data)
Creates a constant of boolean elements.
data - An array containing the values to put into the new constant.

public Constant<TFloat32> array(float... data)
Creates a constant of float elements.
data - An array containing the values to put into the new constant.

public Constant<TString> array(Charset charset, String... data)
Creates a constant of String elements, using the given charset.
charset - charset for encoding/decoding string bytes.
data - An array containing the values to put into the new constant. String elements are sequences of bytes from the last array dimension.
Returns a String constant.

public AssertThat assertThat(Operand<TBool> condition, Iterable<Operand<?>> data, AssertThat.Options... options)
Asserts that the given condition is true. If condition evaluates to false, prints the list of tensors in data. summarize determines how many entries of the tensors to print.
condition - The condition to evaluate.
data - The tensors to print out when condition is false.
options - carries optional attribute values

public <T extends TType> Assign<T> assign(Operand<T> ref, Operand<T> value, Assign.Options... options)
Updates ref by assigning value to it.
T - data type for the output_ref output; also the data type for the Assign output and operands.
ref - Should be from a Variable node. May be uninitialized.
value - The value to be assigned to the variable.
options - carries optional attribute values

public <T extends TType> AssignAdd<T> assignAdd(Operand<T> ref, Operand<T> value, AssignAdd.Options... options)
Updates ref by adding value to it.
T - data type for the output_ref output; also the data type for the AssignAdd output and operands.
ref - Should be from a Variable node.
value - The value to be added to the variable.
options - carries optional attribute values

public AssignAddVariableOp assignAddVariableOp(Operand<? extends TType> resource, Operand<? extends TType> value)
Adds a value to the current value of a variable.
resource - handle to the resource in which to store the variable.
value - the value by which the variable will be incremented.

public <T extends TType> AssignSub<T> assignSub(Operand<T> ref, Operand<T> value, AssignSub.Options... options)
Updates ref by subtracting value from it.
T - data type for the output_ref output; also the data type for the AssignSub output and operands.
ref - Should be from a Variable node.
value - The value to be subtracted from the variable.
options - carries optional attribute values

public AssignSubVariableOp assignSubVariableOp(Operand<? extends TType> resource, Operand<? extends TType> value)
Subtracts a value from the current value of a variable.
resource - handle to the resource in which to store the variable.
value - the value by which the variable will be decremented.

public AssignVariableOp assignVariableOp(Operand<? extends TType> resource, Operand<? extends TType> value)
Assigns a new value to a variable.
resource - handle to the resource in which to store the variable.
value - the value to set the new tensor to use.

public Barrier barrier(List<Class<? extends TType>> componentTypes, Barrier.Options... options)
Defines a barrier that persists across different graph executions. At runtime, the barrier contains 'complete' and 'incomplete' elements. A complete element has defined tensors for all components of its value tuple, and may be accessed using BarrierTakeMany. An incomplete element has some undefined components in its value tuple, and may be updated using BarrierInsertMany.
componentTypes - The type of each component in a value.
options - carries optional attribute values

public BarrierClose barrierClose(Operand<TString> handle, BarrierClose.Options... options)
Closes the given barrier.
handle - The handle to a barrier.
options - carries optional attribute values

public BarrierIncompleteSize barrierIncompleteSize(Operand<TString> handle)
Computes the number of incomplete elements in the given barrier.
handle - The handle to a barrier.

public BarrierInsertMany barrierInsertMany(Operand<TString> handle, Operand<TString> keys, Operand<? extends TType> values, Long componentIndex)
For each key, assigns the respective value to the specified component.
handle - The handle to a barrier.
keys - A one-dimensional tensor of keys, with length n.
values - An any-dimensional tensor of values, which are associated with the respective keys. The 0th dimension must have length n.
componentIndex - The component of the barrier elements that is being assigned.

public BarrierReadySize barrierReadySize(Operand<TString> handle)
Computes the number of complete elements in the given barrier.
handle - The handle to a barrier.

public BarrierTakeMany barrierTakeMany(Operand<TString> handle, Operand<TInt32> numElements, List<Class<? extends TType>> componentTypes, BarrierTakeMany.Options... options)
Takes the given number of completed elements from a barrier. Elements come out of the barrier when they are complete, and in the order in which they were placed into the barrier. The indices output provides information about the batch in which each element was originally inserted into the barrier.
handle - The handle to a barrier.
numElements - A single-element tensor containing the number of elements to take.
componentTypes - The type of each component in a value.
options - carries optional attribute values

public Batch batch(Iterable<Operand<?>> inTensors, Long numBatchThreads, Long maxBatchSize, Long batchTimeoutMicros, Long gradTimeoutMicros, Batch.Options... options)
All Tensors in in_tensors are batched together (so, for example, labels and features should be batched with a single instance of this operation.
Each invocation of batch emits an id
scalar which will be used to identify
this particular invocation when doing unbatch or its gradient.
Each op which emits a non-empty batch will also emit a non-empty batch_index Tensor, which, is a [K, 3] matrix where each row contains the invocation's id, start, and length of elements of each set of Tensors present in batched_tensors.
Batched tensors are concatenated along the first dimension, and all tensors in in_tensors must have the first dimension of the same size.
in_tensors: The tensors to be batched. num_batch_threads: Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel. max_batch_size: Batch sizes will never be bigger than this. batch_timeout_micros: Maximum number of microseconds to wait before outputting an incomplete batch. allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes. The entries must increase monotonically, and the final entry must equal max_batch_size. grad_timeout_micros: The timeout to use for the gradient. See Unbatch. batched_tensors: Either empty tensors or a batch of concatenated Tensors. batch_index: If out_tensors is non-empty, has information to invert it. container: Controls the scope of sharing of this batch. id: always contains a scalar with a unique ID for this invocation of Batch. shared_name: Concurrently running instances of batch in the same device with the same container and shared_name will batch their elements together. If left empty, the op name will be used as the shared name. T: the types of tensors to be batched.
inTensors
- The inTensors value
numBatchThreads
- The value of the numBatchThreads attribute
maxBatchSize
- The value of the maxBatchSize attribute
batchTimeoutMicros
- The value of the batchTimeoutMicros attribute
gradTimeoutMicros
- The value of the gradTimeoutMicros attribute
options
- carries optional attribute values
public BatchFunction batchFunction(Iterable<Operand<?>> inTensors, Iterable<Operand<?>> capturedTensors, ConcreteFunction f, Long numBatchThreads, Long maxBatchSize, Long batchTimeoutMicros, List<Class<? extends TType>> Tout, BatchFunction.Options... options)
# This input will be captured.
y = tf.placeholder_with_default(1.0, shape=[])

@tf.Defun(tf.float32)
def computation(a):
  return tf.matmul(a, a) + y

b = gen_batch_ops.batch_function(
    f=computation,
    in_tensors=[a],
    captured_tensors=computation.captured_inputs,
    Tout=[o.type for o in computation.definition.signature.output_arg],
    num_batch_threads=1,
    max_batch_size=10,
    batch_timeout_micros=100000,  # 100ms
    allowed_batch_sizes=[3, 10],
    batching_queue="")
If more than one session.run call is simultaneously trying to compute b, the values of a will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation.
Assumes that all arguments of the function are tensors which will be batched along their first dimension.
Arguments that are captured are not batched. The session.run call which does the concatenation will use the values of the captured tensors available to it. Therefore, typical uses of captured tensors should involve values which remain unchanged across session.run calls. Inference is a good example of this.
SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors.
inTensors
- The tensors to be batched.
capturedTensors
- The tensors which are captured in the function, and don't need to be batched.
f
- The value of the f attribute
numBatchThreads
- Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel.
maxBatchSize
- Batch sizes will never be bigger than this.
batchTimeoutMicros
- Maximum number of microseconds to wait before outputting an incomplete batch.
Tout
- the types of the output tensors.
options
- carries optional attribute values
public <T extends TType> BatchToSpace<T> batchToSpace(Operand<T> input, Operand<? extends TNumber> crops, Long blockSize)
Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.
T
- data type for output output
T
- data type for BatchToSpace output and operands
input
- 4-D tensor with shape [batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]. Note that the batch size of the input tensor must be divisible by block_size * block_size.
crops
- 2-D tensor of non-negative integers with shape [2, 2]. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:
crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
blockSize
- The value of the blockSize attribute
public <T extends TType> BatchToSpaceNd<T> batchToSpaceNd(Operand<T> input, Operand<? extends TNumber> blockShape, Operand<? extends TNumber> crops)
This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
T
- data type for output output
T
- data type for BatchToSpaceND output and operands
input
- N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
blockShape
- 1-D with shape [M], all values must be >= 1.
crops
- 2-D with shape [M, 2], all values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1].
This operation is equivalent to the following steps:
1. Reshape input to reshaped of shape:
   [block_shape[0], ..., block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., input_shape[N-1]]
2. Permute dimensions of reshaped to produce permuted of shape:
   [batch / prod(block_shape), input_shape[1], block_shape[0], ..., input_shape[M], block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
3. Reshape permuted to produce reshaped_permuted of shape:
   [batch / prod(block_shape), input_shape[1] * block_shape[0], ..., input_shape[M] * block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
4. Crop the start and end of dimensions [1, ..., M] of reshaped_permuted according to crops to produce the output of shape:
   [batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], ..., input_shape[N-1]]
Some examples:
(1) For the following input of shape [4, 1, 1, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:
x = [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
The output tensor has shape [1, 2, 2, 1] and value:
x = [[[[1], [2]], [[3], [4]]]]
(2) For the following input of shape [4, 1, 1, 3], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:
x = [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
The output tensor has shape [1, 2, 2, 3] and value:
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
(3) For the following input of shape [4, 2, 2, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:
x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
The output tensor has shape [1, 4, 4, 1] and value:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
(4) For the following input of shape [8, 1, 3, 1], block_shape = [2, 2], and crops = [[0, 0], [2, 0]]:
x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]]
The output tensor has shape [2, 2, 4, 1] and value:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
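The core rearrangement can be sketched in plain Java for the simple uniform-block-size case with no cropping (i.e. the batchToSpace variant above, input [batch*B*B, H, W, C] to output [batch, H*B, W*B, C]); BatchToSpaceSketch is a hypothetical name, not part of the API:

```java
public class BatchToSpaceSketch {
    // Moves block rows/columns from the batch dimension back into the
    // spatial dimensions: input [batch*B*B, H, W, C] -> output [batch, H*B, W*B, C].
    public static int[][][][] batchToSpace(int[][][][] in, int B) {
        int outBatch = in.length / (B * B);
        int h = in[0].length, w = in[0][0].length, c = in[0][0][0].length;
        int[][][][] out = new int[outBatch][h * B][w * B][c];
        for (int b = 0; b < outBatch; b++)
            for (int i = 0; i < h * B; i++)
                for (int j = 0; j < w * B; j++)
                    for (int k = 0; k < c; k++)
                        // Which input batch slice this output cell came from:
                        out[b][i][j][k] = in[((i % B) * B + (j % B)) * outBatch + b][i / B][j / B][k];
        return out;
    }
}
```

Applied to example (1) above (shape [4, 1, 1, 1], B = 2), this reproduces the [1, 2, 2, 1] output [[[[1], [2]], [[3], [4]]]].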
public <U extends TType> Bitcast<U> bitcast(Operand<? extends TType> input, Class<U> type)
Bitcasts a tensor from one type to another without copying data.
Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.
If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].
If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].
tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() makes the imaginary part 0, while tf.bitcast() raises an error. For example,
Example 1:
a = [1., 2., 3.]
equality_bitcast = tf.bitcast(a, tf.complex128)
Traceback (most recent call last):
...
InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
equality_cast = tf.cast(a, tf.complex128)
print(equality_cast)
tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)
Example 2:
tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)
<tf.Tensor: shape=(4,), dtype=uint8, numpy=array([255, 255, 255, 255], dtype=uint8)>
Example 3:
x = [1., 2., 3.]
y = [0., 2., 3.]
equality = tf.equal(x, y)
equality_cast = tf.cast(equality, tf.float32)
equality_bitcast = tf.bitcast(equality_cast, tf.uint8)
print(equality)
tf.Tensor([False True True], shape=(3,), dtype=bool)
print(equality_cast)
tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)
print(equality_bitcast)
tf.Tensor(
[[ 0   0   0   0]
 [ 0   0 128  63]
 [ 0   0 128  63]], shape=(3, 4), dtype=uint8)
NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
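The buffer-reinterpretation semantics (and the endianness caveat in the NOTE) can be sketched in plain Java with a ByteBuffer; BitcastSketch is a hypothetical illustration name, and little-endian order is assumed explicitly, matching the byte patterns in the examples above:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BitcastSketch {
    // Reinterprets the raw bytes of a 32-bit value as four unsigned 8-bit
    // values, analogous to bitcasting uint32 -> uint8 (shape [] -> [4]).
    public static int[] bitcastToBytes(int value) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(value);
        int[] out = new int[4];
        for (int i = 0; i < 4; i++) out[i] = buf.get(i) & 0xFF;
        return out;
    }
}
```

On a little-endian layout, 0xffffffff yields [255, 255, 255, 255] (Example 2), and the bits of the float 1.0f yield [0, 0, 128, 63] (the pattern seen in Example 3); a big-endian ordering would reverse each group of four, which is exactly the NOTE's point.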
U
- data type for output output
U
- data type for Bitcast output and operands
input
- The input value
type
- The value of the type attribute
public <T extends TType> Operand<T> booleanMask(Operand<T> tensor, Operand<TBool> mask, BooleanMask.Options... options)
Applies a boolean mask to tensor, gathering the entries of tensor where mask is true.
Numpy equivalent is tensor[mask].
In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have:
booleanMask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]
where (i1,...,iK) is the ith true entry of mask (row-major order).
The axis option could be used with mask to indicate the axis to mask from (it's 0 by default). In that case, axis + dim(mask) <= dim(tensor) and mask's shape must match the first axis + dim(mask) dimensions of tensor's shape.
tensor
- The tensor to mask.
mask
- The mask to apply.
options
- carries optional attribute values
public <T extends TType> Operand<T> booleanMaskUpdate(Operand<T> tensor, Operand<TBool> mask, Operand<T> updates, BooleanMaskUpdate.Options... options)
Updates the masked values of tensor with updates; updates will be broadcasted by default.
Numpy equivalent is tensor[mask] = updates.
In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have:
booleanMask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]
where (i1,...,iK) is the ith true entry of mask (row-major order).
The axis option could be used with mask to indicate the axis to mask from (it's 0 by default). In that case, axis + dim(mask) <= dim(tensor) and mask's shape must match the first axis + dim(mask) dimensions of tensor's shape.
The shape of updates should be [n, t_1, t_2, ...] where n is the number of true values in mask and t_i is the ith dimension of tensor after axis and mask.
updates will be broadcasted to this shape by default, which can be disabled using options.
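The 1-D case of this update, including the scalar-broadcast behavior, can be sketched in plain Java (BooleanMaskUpdateSketch is a hypothetical name used only for illustration):

```java
public class BooleanMaskUpdateSketch {
    // Writes updates[k] into the k-th true position of mask.
    // A single-element updates array is broadcast to every true position.
    public static int[] update(int[] tensor, boolean[] mask, int[] updates) {
        int[] out = tensor.clone();
        int k = 0;
        for (int i = 0; i < tensor.length; i++) {
            if (mask[i]) {
                out[i] = updates.length == 1 ? updates[0] : updates[k];
                k++;
            }
        }
        return out;
    }
}
```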
tensor
- The tensor to mask.
mask
- The mask to apply.
updates
- the new values
options
- carries optional attribute values
public <T extends TNumber> BroadcastDynamicShape<T> broadcastDynamicShape(Operand<T> s0, Operand<T> s1)
Given s0 and s1, tensors that represent shapes, compute r0, the broadcasted shape. s0, s1 and r0 are all integer vectors.
T
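The shape-broadcasting rule can be sketched in plain Java (BroadcastShapeSketch is a hypothetical illustration name): right-align the two shape vectors, and make each result dimension the larger of the pair, where a dimension of 1 broadcasts against any size.

```java
public class BroadcastShapeSketch {
    // Computes the broadcasted shape r0 of two shape vectors s0 and s1.
    public static int[] broadcastShape(int[] s0, int[] s1) {
        int n = Math.max(s0.length, s1.length);
        int[] r0 = new int[n];
        for (int i = 0; i < n; i++) {
            // Missing leading dimensions behave like 1.
            int d0 = i < n - s0.length ? 1 : s0[i - (n - s0.length)];
            int d1 = i < n - s1.length ? 1 : s1[i - (n - s1.length)];
            if (d0 != d1 && d0 != 1 && d1 != 1)
                throw new IllegalArgumentException("incompatible shapes");
            r0[i] = Math.max(d0, d1);
        }
        return r0;
    }
}
```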
- data type for r0 output
T
- data type for BroadcastArgs output and operands
s0
- The s0 value
s1
- The s1 value
public <T extends TType> BroadcastTo<T> broadcastTo(Operand<T> input, Operand<? extends TNumber> shape)
Broadcast an array for a compatible shape.
For example,
x = tf.constant([1, 2, 3])
y = tf.broadcast_to(x, [3, 3])
print(y)
tf.Tensor(
[[1 2 3]
 [1 2 3]
 [1 2 3]], shape=(3, 3), dtype=int32)
In the above example, the input Tensor with the shape of [1, 3] is broadcasted to the output Tensor with shape of [3, 3].
When doing broadcasted operations such as multiplying a tensor by a scalar, broadcasting (usually) confers some time or space benefit, as the broadcasted tensor is never materialized.
However, broadcast_to does not carry with it any such benefits. The newly-created tensor takes the full memory of the broadcasted shape. (In a graph context, broadcast_to might be fused to subsequent operation and then be optimized away, however.)
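The memory-materializing behavior described above — each broadcasted row is an actual copy — can be sketched in plain Java for a 1-D vector broadcast to a matrix (BroadcastToSketch is a made-up name for illustration):

```java
public class BroadcastToSketch {
    // Materializes a 1-D vector into a [rows, vec.length] matrix,
    // physically copying the row for every output row.
    public static int[][] broadcastTo(int[] vec, int rows) {
        int[][] out = new int[rows][];
        for (int i = 0; i < rows; i++) out[i] = vec.clone();
        return out;
    }
}
```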
T
- data type for output output
T
- data type for BroadcastTo output and operands
input
- A Tensor to broadcast.
shape
- A 1-D int Tensor. The shape of the desired output.
public Bucketize bucketize(Operand<? extends TNumber> input, List<Float> boundaries)
Bucketizes 'input' based on 'boundaries'.
For example, if the inputs are boundaries = [0, 10, 100] and input = [[-5, 10000], [150, 10], [5, 100]], then the output will be output = [[0, 3], [3, 2], [1, 3]].
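The bucket assignment amounts to counting how many (sorted) boundaries are less than or equal to the value; a plain-Java sketch (BucketizeSketch is a hypothetical name, not the API class):

```java
public class BucketizeSketch {
    // The bucket index of v is the number of boundaries <= v, so with
    // boundaries [0, 10, 100]: -5 -> 0, 5 -> 1, 10 -> 2, 150 -> 3.
    public static int bucketize(float v, float[] boundaries) {
        int idx = 0;
        for (float b : boundaries) {
            if (v >= b) idx++;
            else break;
        }
        return idx;
    }
}
```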
input
- Any shape of Tensor with int or float type.
boundaries
- A sorted list of floats giving the boundaries of the buckets.
public Operand<?> call(ConcreteFunction function, Operand<?> argument)
Calls the function in an execution environment, adding its graph as a function if it isn't already present. Only works for functions with a single input and output.
argument
- the argument to the call
See also: ConcreteFunction.call(Ops, Operand)
public Map<String,Operand<?>> call(ConcreteFunction function, Map<String,Operand<?>> arguments)
Calls the function in an execution environment, adding its graph as a function if it isn't already present. The inputs and outputs are keyed by the names set in the function's Signature.
arguments
- the arguments to the call
See also: ConcreteFunction.call(Ops, Map)
public Case caseOp(Operand<TInt32> branchIndex, Iterable<Operand<?>> input, List<Class<? extends TType>> Tout, List<ConcreteFunction> branches, Case.Options... options)
An n-way switch statement, implementing the following:
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
Selects between StatefulCase and StatelessCase based on the statefulness of the function arguments.
branchIndex
- The branch selector, an int32 Tensor.
input
- A list of input tensors passed to the branch function.
Tout
- A list of output types.
branches
- A list of functions each of which takes 'inputs' and returns a list of tensors, whose types are the same as what every other branch returns.
options
- carries optional attribute values
public <T extends TType> ClipByValue<T> clipByValue(Operand<T> t, Operand<T> clipValueMin, Operand<T> clipValueMax)
Clips tensor values to a specified min and max.
Given a tensor t, this operation returns a tensor of the same type and shape as t with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.
T
- data type for output output
T
- data type for ClipByValue output and operands
t
- A Tensor.
clipValueMin
- A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The minimum value to clip by.
clipValueMax
- A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The maximum value to clip by.
public <T extends TType> Concat<T> concat(Iterable<Operand<T>> values, Operand<? extends TNumber> axis)
Concatenates tensors along one dimension.
T
- data type for output output
T
- data type for ConcatV2 output and operands
values
- List of N Tensors to concatenate. Their ranks and types must match, and their sizes must match in all dimensions except concat_dim.
axis
- 0-D. The dimension along which to concatenate. Must be in the range [-rank(values), rank(values)).
public Constant<TInt32> constant(int data)
Creates a constant containing a single int element.
data
- The value to put into the new constant.
public Constant<TFloat64> constant(double[][][] data)
Creates a rank-3 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(byte[][][][][] data)
Creates a rank-5 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TString> constant(org.tensorflow.ndarray.NdArray<String> data)
Creates a constant of String elements that is a copy of a given n-dimensional array, using the default UTF-8 encoding.
data
- an n-dimensional array of String elements.
public Constant<TInt32> constant(int[][][][] data)
Creates a rank-4 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(byte data)
Creates a constant containing a single byte element.
data
- The value to put into the new constant.
public Constant<TInt64> constant(long[][] data)
Creates a rank-2 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float[][][][][][] data)
Creates a rank-6 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[][][][][][] data)
Creates a rank-6 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[][][][] data)
Creates a rank-4 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float[][][] data)
Creates a rank-3 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float[][][][][] data)
Creates a rank-5 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(long[][][][][] data)
Creates a rank-5 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt32> constant(int[] data)
Creates a rank-1 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float[][] data)
Creates a rank-2 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[][] data)
Creates a rank-2 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat64> constant(double data)
Creates a constant containing a single double element.
data
- The value to put into the new constant.
public Constant<TBool> constant(boolean data)
Creates a constant containing a single boolean element.
data
- The value to put into the new constant.
public Constant<TInt64> constant(long data)
Creates a constant containing a single long element.
data
- The value to put into the new constant.
public Constant<TString> constant(String data)
Creates a String constant using the default, UTF-8 encoding.
data
- The string to put into the new constant.
public Constant<TBool> constant(org.tensorflow.ndarray.BooleanNdArray data)
Creates a constant of boolean elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of boolean elements.
public Constant<TFloat64> constant(double[] data)
Creates a rank-1 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(org.tensorflow.ndarray.LongNdArray data)
Creates a constant of long elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of long elements.
public Constant<TFloat32> constant(float[] data)
Creates a rank-1 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(long[][][] data)
Creates a rank-3 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[][][] data)
Creates a rank-3 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(byte[] data)
Creates a rank-1 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt32> constant(int[][][] data)
Creates a rank-3 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt32> constant(org.tensorflow.ndarray.IntNdArray data)
Creates a constant of int elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of int elements.
public Constant<TInt64> constant(long[] data)
Creates a rank-1 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(org.tensorflow.ndarray.FloatNdArray data)
Creates a constant of float elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of float elements.
public Constant<TInt32> constant(int[][][][][] data)
Creates a rank-5 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat64> constant(double[][][][][] data)
Creates a rank-5 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[][][][][] data)
Creates a rank-5 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt32> constant(int[][][][][][] data)
Creates a rank-6 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat64> constant(org.tensorflow.ndarray.DoubleNdArray data)
Creates a constant of double elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of double elements.
public Constant<TFloat64> constant(double[][][][][][] data)
Creates a rank-6 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(long[][][][][][] data)
Creates a rank-6 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt32> constant(int[][] data)
Creates a rank-2 constant of int elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TBool> constant(boolean[] data)
Creates a rank-1 constant of boolean elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float data)
Creates a constant containing a single float element.
data
- The value to put into the new constant.
public Constant<TUint8> constant(byte[][][][] data)
Creates a rank-4 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat32> constant(float[][][][] data)
Creates a rank-4 constant of float elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(org.tensorflow.ndarray.ByteNdArray data)
Creates a constant of byte elements that is a copy of a given n-dimensional array.
data
- an n-dimensional array of byte elements.
public Constant<TUint8> constant(byte[][][][][][] data)
Creates a rank-6 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(long[][][][] data)
Creates a rank-4 constant of long elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(byte[][] data)
Creates a rank-2 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat64> constant(double[][] data)
Creates a rank-2 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TUint8> constant(byte[][][] data)
Creates a rank-3 constant of byte elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TFloat64> constant(double[][][][] data)
Creates a rank-4 constant of double elements.
data
- An array containing the values to put into the new constant. The dimensions of the new constant will match those of the array.
public Constant<TInt64> constant(org.tensorflow.ndarray.Shape shape)
Creates a rank-1 constant of long elements representing the size of each dimensions of the given shape.
shape
- a shape
public Constant<TString> constant(Charset charset, org.tensorflow.ndarray.NdArray<String> data)
Creates a constant of String elements that is a copy of a given n-dimensional array, using the given encoding.
charset
- charset used to encode/decode string bytes.
data
- an n-dimensional array of String elements.
public Constant<TString> constant(Charset charset, String[] data)
Creates a constant of String elements, using the given charset.
charset
- charset for encoding/decoding strings bytes.
data
- An array containing the values to put into the new constant. String elements are sequences of bytes from the last array dimension.
Returns: the String constant
public Constant<TString> constant(Charset charset, String data)
Creates a String constant using a specified encoding.
charset
- The encoding from String to bytes.
data
- The string to put into the new constant.
public Constant<TBool> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.BooleanDataBuffer data)
Creates a TBool constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TString> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.DataBuffer<String> data)
Creates a TString constant with data from the given buffer, using the default UTF-8 encoding.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TUint8> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.ByteDataBuffer data)
Creates a TUint8 constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TInt32> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.IntDataBuffer data)
Creates a TInt32 constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TInt64> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.LongDataBuffer data)
Creates a TInt64 constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TFloat64> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.DoubleDataBuffer data)
Creates a TFloat64 constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public Constant<TFloat32> constant(org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.FloatDataBuffer data)
Creates a TFloat32 constant with data from the given buffer.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public <T extends TNumber> Constant<T> constant(Class<T> type, Number number)
Creates a scalar of type, with the value of number. number may be truncated if it does not fit in the target type.
type
- the type of tensor to create. Must be concrete (i.e. not TFloating)
number
- the value of the tensor
IllegalArgumentException
- if the type is abstract (i.e. TFloating) or unknown.
public Constant<TString> constant(Charset charset, org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.DataBuffer<String> data)
Creates a TString constant with data from the given buffer, using the given encoding.
charset
- charset used to encode/decode string bytes.
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor shape is not compatible with the buffer
public <T extends TType> Constant<T> constant(Class<T> type, org.tensorflow.ndarray.Shape shape, org.tensorflow.ndarray.buffer.ByteDataBuffer data)
Creates a constant with data from the given buffer.
T
- the tensor type
type
- the tensor type class
shape
- the tensor shape.
data
- a buffer containing the tensor data.
IllegalArgumentException
- If the tensor datatype or shape is not compatible with the buffer
public <T extends TType> Constant<T> constantOf(T tensor)
Creates a constant that is a copy of tensor. tensor may be closed afterwards without issue.
Note: this endpoint cannot be simply called constant since it will conflict with other endpoints accepting an NdArray in parameter (e.g. #tensorOf(Scope, FloatNdArray)).
tensor
- a Tensor holding the constant value
public <T extends TNumber> Constant<T> constantOfSameType(Operand<T> toMatch, Number number)
Creates a scalar of the same type as toMatch, with the value of number. number may be truncated if it does not fit in the target type.
toMatch
- the operand providing the target type
number
- the value of the tensor
Returns: a constant with the same type as toMatch
IllegalArgumentException
- if the type is unknown (which should be impossible).
See also: constant(Class, Number)
public ConsumeMutexLock consumeMutexLock(Operand<? extends TType> mutexLock)
This op consumes a lock created by MutexLock.
This op exists to consume a tensor created by MutexLock (other than direct control dependencies). It should be the only op that consumes the tensor, and will raise an error if it is not. Its only purpose is to keep the mutex lock tensor alive until it is consumed by this op.
NOTE: This operation must run on the same device as its input. This may be enforced via the colocate_with mechanism.
mutexLock
- A tensor returned by MutexLock.
public ControlTrigger controlTrigger()
Does nothing. Serves as a control trigger for scheduling. Only useful as a placeholder for control edges.
public <T extends TNumber> CountUpTo<T> countUpTo(Operand<T> ref, Long limit)
Increments 'ref' until it reaches 'limit'.
T
- data type for output output
T
- data type for CountUpTo output and operands
ref
- Should be from a scalar Variable node.
limit
- If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
public DecodeProto decodeProto(Operand<TString> bytes, String messageType, List<String> fieldNames, List<Class<? extends TType>> outputTypes, DecodeProto.Options... options)
The decode_proto op extracts fields from a serialized protocol buffers message into tensors. The fields in field_names are decoded and converted to the corresponding output_types if possible.
A message_type name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the descriptor_source attribute.
Each output tensor is a dense tensor. This means that it is padded to hold the largest number of repeated elements seen in the input minibatch. (The shape is also padded by one to prevent zero-sized dimensions). The actual repeat counts for each example in the minibatch can be found in the sizes output. In many cases the output of decode_proto is fed immediately into tf.squeeze if missing values are not a concern. When using tf.squeeze, always pass the squeeze dimension explicitly to avoid surprises.
For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:
- A proto field that contains a submessage or group can only be converted to DT_STRING (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the decode_proto op.
- TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a DT_INT64 with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type DT_INT64, or using twos-complement if the caller specifies DT_INT32 in the output_types attribute.
Both binary and text proto serializations are supported, and can be chosen using the format attribute.
The descriptor_source attribute selects the source of protocol descriptors to consult when looking up message_type. This may be:
- An empty string or "local://", in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary.
- A file, in which case protocol descriptors are created from the file, which is expected to contain a FileDescriptorSet serialized as a string. NOTE: You can build a descriptor_source file using the --descriptor_set_out and --include_imports options to the protocol compiler protoc.
- A "bytes://<bytes>", in which protocol descriptors are created from <bytes>, which is expected to be a FileDescriptorSet serialized as a string.
bytes
- Tensor of serialized protos with shape batch_shape
.messageType
- Name of the proto message type to decode.fieldNames
- List of strings containing proto field names. An extension field can be decoded
by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME.outputTypes
- List of TF types to use for the respective field in field_names.options
- carries optional attribute valuespublic <T extends TType> DeepCopy<T> deepCopy(Operand<T> x)
Makes a copy of x.T
- data type for y
outputT
- data type for DeepCopy
output and operandsx
- The source tensor of type T
.public DeleteSessionTensor deleteSessionTensor(Operand<TString> handle)
handle
- The handle for a tensor stored in the session state.public DestroyResourceOp destroyResourceOp(Operand<? extends TType> resource, DestroyResourceOp.Options... options)
resource
- handle to the resource to delete.options
- carries optional attribute valuespublic <T extends TType> DestroyTemporaryVariable<T> destroyTemporaryVariable(Operand<T> ref, String varName)
Outputs the final value of the tensor pointed to by 'ref'.
T
- data type for value
outputT
- data type for DestroyTemporaryVariable
output and operandsref
- A reference to the temporary variable tensor.varName
- Name of the temporary variable, usually the name of the matching
'TemporaryVariable' op.public <T extends TType> DynamicPartition<T> dynamicPartition(Operand<T> data, Operand<TInt32> partitions, Long numPartitions)
data
into num_partitions
tensors using indices from partitions
.
For each index tuple js
of size partitions.ndim
, the slice data[js, ...]
becomes part of outputs[partitions[js]]
. The slices with partitions[js] = i
are placed in outputs[i]
in lexicographic order of js
, and the first
dimension of outputs[i]
is the number of entries in partitions
equal to i
.
In detail,
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:] outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
data.shape
must start with partitions.shape
.
For example:
# Scalar partitions. partitions = 1 num_partitions = 2 data = [10, 20] outputs[0] = [] # Empty with shape [0, 2] outputs[1] = [[10, 20]] # Vector partitions. partitions = [0, 0, 1, 1, 0] num_partitions = 2 data = [10, 20, 30, 40, 50] outputs[0] = [10, 20, 50] outputs[1] = [30, 40]
See dynamic_stitch
for an example on how to merge partitions back.
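The partitioning rule above can be sketched in plain Java for the 1-D case. This is an illustrative model of the op's semantics only, not a use of the TensorFlow API; the class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicPartitionSketch {
    // Models dynamic_partition for 1-D data: element data[j] goes to
    // outputs[partitions[j]], preserving the original (lexicographic) order.
    static List<List<Integer>> dynamicPartition(int[] data, int[] partitions, int numPartitions) {
        List<List<Integer>> outputs = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) outputs.add(new ArrayList<>());
        for (int j = 0; j < data.length; j++) outputs.get(partitions[j]).add(data[j]);
        return outputs;
    }

    public static void main(String[] args) {
        // Mirrors the vector example from the docs above.
        int[] data = {10, 20, 30, 40, 50};
        int[] partitions = {0, 0, 1, 1, 0};
        System.out.println(dynamicPartition(data, partitions, 2));
        // outputs[0] = [10, 20, 50], outputs[1] = [30, 40]
    }
}
```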
T
- data type for outputs
outputT
- data type for DynamicPartition
output and operandsdata
- The data valuepartitions
- Any shape. Indices in the range [0, num_partitions)
.numPartitions
- The number of partitions to output.public <T extends TType> DynamicStitch<T> dynamicStitch(Iterable<Operand<TInt32>> indices, Iterable<Operand<T>> data)
data
tensors into a single tensor.
Builds a merged tensor such that
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
For example, if each indices[m]
is scalar or vector, we have
# Scalar indices: merged[indices[m], ...] = data[m][...] # Vector indices: merged[indices[m][i], ...] = data[m][i, ...]
Each data[i].shape
must start with the corresponding indices[i].shape
,
and the rest of data[i].shape
must be constant w.r.t. i
. That is, we
must have data[i].shape = indices[i].shape + constant
. In terms of this
constant
, the output shape is
merged.shape = [max(indices)] + constant
Values are merged in order, so if an index appears in both indices[m][i]
and
indices[n][j]
for (m,i) < (n,j)
the slice data[n][j]
will appear in the
merged result. If you do not need this guarantee, ParallelDynamicStitch might
perform better on some devices.
For example:
indices[0] = 6 indices[1] = [4, 1] indices[2] = [[5, 2], [0, 3]] data[0] = [61, 62] data[1] = [[41, 42], [11, 12]] data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
This method can be used to merge partitions created by dynamic_partition
as illustrated on the following example:
# Apply function (increments x_i) on elements for which a certain condition # apply (x_i != -1 in this example). x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4]) condition_mask=tf.not_equal(x,tf.constant(-1.)) partitioned_data = tf.dynamic_partition( x, tf.cast(condition_mask, tf.int32) , 2) partitioned_data[1] = partitioned_data[1] + 1.0 condition_indices = tf.dynamic_partition( tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2) x = tf.dynamic_stitch(condition_indices, partitioned_data) # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain # unchanged.
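The merge rule can likewise be sketched in plain Java for scalar elements, flattening the matrix example above to a 1-D merge. Again this is a simplified model of the semantics, not the TensorFlow API:

```java
public class DynamicStitchSketch {
    // Models dynamic_stitch for scalar values: merged[indices[m][i]] = data[m][i],
    // with later (m, i) pairs overwriting earlier ones on index collisions.
    static int[] dynamicStitch(int[][] indices, int[][] data) {
        int max = -1;
        for (int[] idx : indices) for (int i : idx) max = Math.max(max, i);
        int[] merged = new int[max + 1];
        for (int m = 0; m < indices.length; m++)
            for (int i = 0; i < indices[m].length; i++)
                merged[indices[m][i]] = data[m][i];
        return merged;
    }

    public static void main(String[] args) {
        // A flattened variant of the example above.
        int[][] indices = {{6}, {4, 1}, {5, 2, 0, 3}};
        int[][] data = {{61}, {41, 11}, {51, 21, 1, 31}};
        System.out.println(java.util.Arrays.toString(dynamicStitch(indices, data)));
        // [1, 11, 21, 31, 41, 51, 61]
    }
}
```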
T
- data type for merged
outputT
- data type for DynamicStitch
output and operandsindices
- The indices valuedata
- The data valuepublic <T extends TType> EditDistance editDistance(Operand<TInt64> hypothesisIndices, Operand<T> hypothesisValues, Operand<TInt64> hypothesisShape, Operand<TInt64> truthIndices, Operand<T> truthValues, Operand<TInt64> truthShape, EditDistance.Options... options)
Computes the (possibly normalized) Levenshtein edit distance between hypothesis and truth sequences, each provided as a SparseTensor. The inputs are:
T
- data type for EditDistance
output and operandshypothesisIndices
- The indices of the hypothesis list SparseTensor.
This is an N x R int64 matrix.hypothesisValues
- The values of the hypothesis list SparseTensor.
This is an N-length vector.hypothesisShape
- The shape of the hypothesis list SparseTensor.
This is an R-length vector.truthIndices
- The indices of the truth list SparseTensor.
This is an M x R int64 matrix.truthValues
- The values of the truth list SparseTensor.
This is an M-length vector.truthShape
- truth indices, vector.options
- carries optional attribute valuespublic <T extends TType> Empty<T> empty(Operand<TInt32> shape, Class<T> dtype, Empty.Options... options)
This operation creates a tensor of shape
and dtype
.
T
- data type for output
outputT
- data type for Empty
output and operandsshape
- 1-D. Represents the shape of the output tensor.dtype
- The value of the dtype attributeoptions
- carries optional attribute valuespublic <U extends TType> EmptyTensorList emptyTensorList(Operand<? extends TNumber> elementShape, Operand<TInt32> maxNumElements, Class<U> elementDtype)
handle: an empty tensor list. element_dtype: the type of elements in the list. element_shape: a shape compatible with that of elements in the list.
U
- data type for EmptyTensorList
output and operandselementShape
- The elementShape valuemaxNumElements
- The maxNumElements valueelementDtype
- The value of the elementDtype attributepublic EmptyTensorMap emptyTensorMap()
Creates and returns an empty tensor map.public EncodeProto encodeProto(Operand<TInt32> sizes, Iterable<Operand<?>> values, List<String> fieldNames, String messageType, EncodeProto.Options... options)
The op serializes protobuf messages provided in the input tensors. The types of the tensors in values
must match the schema for the fields
specified in field_names
. All the tensors in values
must have a common
shape prefix, batch_shape.
The sizes
tensor specifies repeat counts for each field. The repeat count
(last dimension) of each tensor in values
must be greater than or equal
to corresponding repeat count in sizes
.
A message_type
name must be provided to give context for the field names.
The actual message descriptor can be looked up either in the linked-in
descriptor pool or a filename provided by the caller using the
descriptor_source
attribute.
For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:
A proto field that contains a submessage or group can only be converted
to DT_STRING
(the serialized submessage). This is to reduce the complexity
of the API. The resulting string can be used as input to another instance of
the decode_proto op.
TensorFlow lacks support for unsigned integers. The ops represent uint64
types as a DT_INT64
with the same twos-complement bit pattern (the obvious
way). Unsigned int32 values can be represented exactly by specifying type
DT_INT64
, or using twos-complement if the caller specifies DT_INT32
in
the output_types
attribute.
The descriptor_source
attribute selects the source of protocol
descriptors to consult when looking up message_type
. This may be:
An empty string or "local://", in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary.
A file, in which case protocol descriptors are created from the file,
which is expected to contain a FileDescriptorSet
serialized as a string.
NOTE: You can build a descriptor_source
file using the --descriptor_set_out
and --include_imports
options to the protocol compiler protoc
.
A "bytes://&lt;bytes&gt;" string, in which case protocol descriptors are created from &lt;bytes&gt;
,
which is expected to be a FileDescriptorSet
serialized as a string.
sizes
- Tensor of int32 with shape [batch_shape, len(field_names)]
.values
- List of tensors containing values for the corresponding field.fieldNames
- List of strings containing proto field names.messageType
- Name of the proto message type to decode.options
- carries optional attribute valuespublic <T extends TType> EnsureShape<T> ensureShape(Operand<T> input, org.tensorflow.ndarray.Shape shape)
T
- data type for output
outputT
- data type for EnsureShape
output and operandsinput
- A tensor, whose shape is to be validated.shape
- The expected (possibly partially specified) shape of the input tensor.public <T extends TType> ExpandDims<T> expandDims(Operand<T> input, Operand<? extends TNumber> axis)
input
, this operation inserts a dimension of 1 at the
dimension index axis
of input
's shape. The dimension index axis
starts at
zero; if you specify a negative number for axis
it is counted backward from
the end.
This operation is useful if you want to add a batch dimension to a single
element. For example, if you have a single image of shape [height, width, channels]
, you can make it a batch of 1 image with expand_dims(image, 0)
,
which will make the shape [1, height, width, channels]
.
Other examples:
# 't' is a tensor of shape [2] shape(expand_dims(t, 0)) ==> [1, 2] shape(expand_dims(t, 1)) ==> [2, 1] shape(expand_dims(t, -1)) ==> [2, 1] # 't2' is a tensor of shape [2, 3, 5] shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5] shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5] shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
This operation requires that:
-1-input.dims() <= dim <= input.dims()
This operation is related to squeeze()
, which removes dimensions of
size 1.
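The shape rule above, including the handling of negative axis values, can be sketched in plain Java (an illustrative model of the shape computation only, not the TensorFlow API):

```java
import java.util.ArrayList;
import java.util.List;

public class ExpandDimsShape {
    // Computes the result shape of expand_dims: inserts a dimension of size 1
    // at index axis; a negative axis counts back from rank + 1, so the valid
    // range is [-rank - 1, rank].
    static List<Integer> expandDimsShape(List<Integer> shape, int axis) {
        int rank = shape.size();
        if (axis < 0) axis += rank + 1;                      // e.g. axis = -1 -> rank
        if (axis < 0 || axis > rank)
            throw new IllegalArgumentException("axis out of range");
        List<Integer> out = new ArrayList<>(shape);
        out.add(axis, 1);
        return out;
    }

    public static void main(String[] args) {
        // Mirrors the examples above: shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
        System.out.println(expandDimsShape(List.of(2, 3, 5), 2));  // [2, 3, 1, 5]
        System.out.println(expandDimsShape(List.of(2), -1));       // [2, 1]
    }
}
```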
T
- data type for output
outputT
- data type for ExpandDims
output and operandsinput
- The input valueaxis
- 0-D (scalar). Specifies the dimension index at which to
expand the shape of input
. Must be in the range
[-rank(input) - 1, rank(input)]
.public <T extends TNumber> ExtractVolumePatches<T> extractVolumePatches(Operand<T> input, List<Long> ksizes, List<Long> strides, String padding)
patches
from input
and put them in the "depth"
output dimension. 3D extension of extract_image_patches
.T
- data type for patches
outputT
- data type for ExtractVolumePatches
output and operandsinput
- 5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth]
.ksizes
- The size of the sliding window for each dimension of input
.strides
- 1-D of length 5. How far the centers of two consecutive patches are in
input
. Must be: [1, stride_planes, stride_rows, stride_cols, 1]
.padding
- The type of padding algorithm to use.
The size-related attributes are specified as follows:
ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1] strides = [1, stride_planes, stride_rows, stride_cols, 1]
public <U extends TType> Fill<U> fill(Operand<? extends TNumber> dims, Operand<U> value)
dims
and fills it with value
.
For example:
# Output tensor has shape [2, 3]. fill([2, 3], 9) ==> [[9, 9, 9] [9, 9, 9]]
tf.fill
differs from tf.constant
in a few ways:
tf.fill
only supports scalar contents, whereas tf.constant
supports
Tensor values.tf.fill
creates an Op in the computation graph that constructs the actual
Tensor value at runtime. This is in contrast to tf.constant
which embeds
the entire Tensor into the graph with a Const
node.tf.fill
evaluates at graph runtime, so it supports dynamic shapes
based on other runtime Tensors, unlike tf.constant
.U
- data type for output
outputU
- data type for Fill
output and operandsdims
- 1-D. Represents the shape of the output tensor.value
- 0-D (scalar). Value to fill the returned tensor.
Equivalent to np.full in NumPy.
public Fingerprint fingerprint(Operand<? extends TType> data, Operand<TString> method)
data
.
Fingerprint op considers the first dimension of data
as the batch dimension,
and output[i]
contains the fingerprint value generated from contents in
data[i, ...]
for all i
.
Fingerprint op writes fingerprint values as byte arrays. For example, the
default method farmhash64
generates a 64-bit fingerprint value at a time.
This 8-byte value is written out as an uint8
array of size 8, in little-endian
order.
For example, suppose that data
has data type DT_INT32
and shape (2, 3, 4),
and that the fingerprint method is farmhash64
. In this case, the output shape
is (2, 8), where 2 is the batch dimension size of data
, and 8 is the size of
each fingerprint value in bytes. output[0, :]
is generated from 12 integers in
data[0, :, :]
and similarly output[1, :]
is generated from other 12 integers
in data[1, :, :]
.
Note that this op fingerprints the raw underlying buffer; it does not fingerprint the Tensor's metadata such as data type and/or shape. For example, the fingerprint values are invariant under reshapes and bitcasts as long as the batch dimension remains the same:
Fingerprint(data) == Fingerprint(Reshape(data, ...)) Fingerprint(data) == Fingerprint(Bitcast(data, ...))
For string data, one should expect Fingerprint(data) != Fingerprint(ReduceJoin(data))
in general.
data
- Must have rank 1 or higher.method
- Fingerprint method used by this op. Currently the only available method is
farmhash::fingerprint64
.public For forOp(Operand<TInt32> start, Operand<TInt32> limit, Operand<TInt32> delta, Iterable<Operand<?>> input, ConcreteFunction body)
output = input; for i in range(start, limit, delta) output = body(i, output);
start
- The lower bound. An int32limit
- The upper bound. An int32delta
- The increment. An int32input
- A list of input tensors whose types are T.body
- A function that takes a list of tensors (int32, T) and returns another list of tensors (T).
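The loop semantics shown above can be modeled in plain Java, with a single integer standing in for the list of loop-carried tensors. This sketch assumes a positive delta (Python range semantics) and is not the TensorFlow API:

```java
import java.util.function.IntBinaryOperator;

public class ForOpSketch {
    // Models the For op's loop:
    // output = input; for (i = start; i < limit; i += delta) output = body(i, output)
    static int forOp(int start, int limit, int delta, int input, IntBinaryOperator body) {
        int output = input;
        for (int i = start; i < limit; i += delta) output = body.applyAsInt(i, output);
        return output;
    }

    public static void main(String[] args) {
        // Sum of 0..4 added to an initial value of 100.
        System.out.println(forOp(0, 5, 1, 100, (i, acc) -> acc + i));  // 110
    }
}
```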
public <T extends TType> Gather<T> gather(Operand<T> params, Operand<? extends TNumber> indices, Operand<? extends TNumber> axis, Gather.Options... options)
params
axis axis
according to indices
.
indices
must be an integer tensor of any dimension (usually 0-D or 1-D).
Produces an output tensor with shape params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:]
where:
# Scalar indices (output is rank(params) - 1). output[a_0, ..., a_n, b_0, ..., b_n] = params[a_0, ..., a_n, indices, b_0, ..., b_n] # Vector indices (output is rank(params)). output[a_0, ..., a_n, i, b_0, ..., b_n] = params[a_0, ..., a_n, indices[i], b_0, ..., b_n] # Higher rank indices (output is rank(params) + rank(indices) - 1). output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] = params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.
See also tf.batch_gather
and tf.gather_nd
.
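For the simplest case, a 1-D params tensor gathered along axis 0 with vector indices, the rule output[i] = params[indices[i]] can be sketched in plain Java (an illustrative model, not the TensorFlow API; it throws on out-of-range indices, matching the CPU behavior described above):

```java
public class GatherSketch {
    // Models gather along axis 0 of a 1-D params tensor with vector indices.
    static int[] gather(int[] params, int[] indices) {
        int[] output = new int[indices.length];
        for (int i = 0; i < indices.length; i++) {
            if (indices[i] < 0 || indices[i] >= params.length)
                throw new IndexOutOfBoundsException("index " + indices[i]);
            output[i] = params[indices[i]];
        }
        return output;
    }

    public static void main(String[] args) {
        int[] params = {10, 20, 30, 40};
        System.out.println(java.util.Arrays.toString(gather(params, new int[]{3, 0, 3})));
        // [40, 10, 40]
    }
}
```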
T
- data type for output
outputT
- data type for GatherV2
output and operandsparams
- The tensor from which to gather values. Must be at least rank
axis + 1
.indices
- Index tensor. Must be in range [0, params.shape[axis])
.axis
- The axis in params
to gather indices
from. Defaults to the first
dimension. Supports negative indexes.options
- carries optional attribute valuespublic <T extends TType> GatherNd<T> gatherNd(Operand<T> params, Operand<? extends TNumber> indices)
params
into a Tensor with shape specified by indices
.
indices
is a K-dimensional integer tensor, best thought of as a
(K-1)-dimensional tensor of indices into params
, where each element defines a
slice of params
:
output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]]
Whereas in tf.gather
indices
defines slices into the axis
dimension of params
, in tf.gather_nd
, indices
defines slices into the
first N
dimensions of params
, where N = indices.shape[-1]
.
The last dimension of indices
can be at most the rank of
params
:
indices.shape[-1] <= params.rank
The last dimension of indices
corresponds to elements
(if indices.shape[-1] == params.rank
) or slices
(if indices.shape[-1] < params.rank
) along dimension indices.shape[-1]
of params
. The output tensor has shape
indices.shape[:-1] + params.shape[indices.shape[-1]:]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.
Some examples below.
Simple indexing into a matrix:
indices = [[0, 0], [1, 1]] params = [['a', 'b'], ['c', 'd']] output = ['a', 'd']
Slice indexing into a matrix:
indices = [[1], [0]] params = [['a', 'b'], ['c', 'd']] output = [['c', 'd'], ['a', 'b']]
Indexing into a 3-tensor:
indices = [[1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['a1', 'b1'], ['c1', 'd1']]] indices = [[0, 1], [1, 0]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['c0', 'd0'], ['a1', 'b1']] indices = [[0, 0, 1], [1, 0, 1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = ['b0', 'b1']
Batched indexing into a matrix:
indices = [[[0, 0]], [[0, 1]]] params = [['a', 'b'], ['c', 'd']] output = [['a'], ['b']]
Batched slice indexing into a matrix:
indices = [[[1]], [[0]]] params = [['a', 'b'], ['c', 'd']] output = [[['c', 'd']], [['a', 'b']]]
Batched indexing into a 3-tensor:
indices = [[[1]], [[0]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[[['a1', 'b1'], ['c1', 'd1']]], [[['a0', 'b0'], ['c0', 'd0']]]] indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['c0', 'd0'], ['a1', 'b1']], [['a0', 'b0'], ['c1', 'd1']]] indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['b0', 'b1'], ['d0', 'c1']]
See also tf.gather
and tf.batch_gather
.
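The "simple indexing into a matrix" case above (indices.shape[-1] == params.rank == 2, so the output holds scalars) can be sketched in plain Java. This models only that one case of the op's semantics and is not the TensorFlow API:

```java
public class GatherNdSketch {
    // Models gather_nd for full indexing into a 2-D params tensor:
    // each row of indices is a (row, col) coordinate into params.
    static char[] gatherNd(char[][] params, int[][] indices) {
        char[] output = new char[indices.length];
        for (int i = 0; i < indices.length; i++)
            output[i] = params[indices[i][0]][indices[i][1]];
        return output;
    }

    public static void main(String[] args) {
        // Mirrors the matrix example above: indices [[0,0],[1,1]] -> ['a','d'].
        char[][] params = {{'a', 'b'}, {'c', 'd'}};
        int[][] indices = {{0, 0}, {1, 1}};
        System.out.println(new String(gatherNd(params, indices)));  // ad
    }
}
```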
T
- data type for output
outputT
- data type for GatherNd
output and operandsparams
- The tensor from which to gather values.indices
- Index tensor.public GetSessionHandle getSessionHandle(Operand<? extends TType> value)
value
- The tensor to be stored.public <T extends TType> GetSessionTensor<T> getSessionTensor(Operand<TString> handle, Class<T> dtype)
T
- data type for value
outputT
- data type for GetSessionTensor
output and operandshandle
- The handle for a tensor stored in the session state.dtype
- The type of the output value.public Gradients gradients(Iterable<? extends Operand<?>> y, Iterable<? extends Operand<?>> x, Gradients.Options... options)
y
- outputs of the function to derivex
- inputs of the function for which partial derivatives are computedoptions
- carries optional attributes valuesGradients
IllegalArgumentException
- if execution environment is not a graphpublic Gradients gradients(Operand<?> y, Iterable<? extends Operand<?>> x, Gradients.Options... options)
y
s w.r.t x
s,
i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2...
If Options.dx()
values are set, they are used as the initial symbolic partial derivatives of some loss
function L
w.r.t. y
. Options.dx()
must have the size of y
.
If Options.dx()
is not set, the implementation will use dx of OnesLike
for all
shapes in y
.
The partial derivatives are returned in output dy
, with the size of x
.
Example of usage:
Gradients gradients = tf.gradients(loss, Arrays.asList(w, b));
Constant<TFloat32> alpha = tf.constant(1.0f);
tf.train.applyGradientDescent(w, alpha, gradients.<Float>dy(0));
tf.train.applyGradientDescent(b, alpha, gradients.<Float>dy(1));
y
- output of the function to derivex
- inputs of the function for which partial derivatives are computedoptions
- carries optional attributes valuesGradients
IllegalArgumentException
- if execution environment is not a graphpublic <T extends TType> GuaranteeConst<T> guaranteeConst(Operand<T> input)
Gives a guarantee to the TF runtime that the input tensor is a constant. The runtime is then free to make optimizations based on this.
Only accepts value-typed tensors as inputs and rejects resource variable handles as input.
Returns the input tensor without modification.
T
- data type for output
outputT
- data type for GuaranteeConst
output and operandsinput
- The input valuepublic <T extends TType,U extends TType> HashTable hashTable(Class<T> keyDtype, Class<U> valueDtype, HashTable.Options... options)
T
- data type for HashTableV2
output and operandsU
- data type for HashTableV2
output and operandskeyDtype
- Type of the table keys.valueDtype
- Type of the table values.options
- carries optional attribute valuespublic <T extends TNumber> HistogramFixedWidth<TInt32> histogramFixedWidth(Operand<T> values, Operand<T> valueRange, Operand<TInt32> nbins)
values
, this operation returns a rank 1 histogram counting
the number of entries in values
that fall into every bin. The bins are
equal width and determined by the arguments value_range
and nbins
.
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) nbins = 5 value_range = [0.0, 5.0] new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] with tf.get_default_session() as sess: hist = tf.histogram_fixed_width(new_values, value_range, nbins=5) variables.global_variables_initializer().run() sess.run(hist) => [2, 1, 1, 0, 2]
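The binning rule can be sketched in plain Java (a model of the semantics only, not the TensorFlow API): values below value_range[0] land in bin 0 and values at or above value_range[1] land in the last bin, as stated for the parameters below.

```java
public class HistogramSketch {
    // Models histogram_fixed_width: nbins equal-width bins over [lo, hi);
    // values below lo go to bin 0, values >= hi go to bin nbins - 1.
    static int[] histogramFixedWidth(double[] values, double lo, double hi, int nbins) {
        int[] hist = new int[nbins];
        double width = (hi - lo) / nbins;
        for (double v : values) {
            int bin = (int) Math.floor((v - lo) / width);
            bin = Math.max(0, Math.min(nbins - 1, bin));  // clamp edge values
            hist[bin]++;
        }
        return hist;
    }

    public static void main(String[] args) {
        // Mirrors the example above: value_range = [0.0, 5.0], nbins = 5.
        double[] values = {-1.0, 0.0, 1.5, 2.0, 5.0, 15.0};
        System.out.println(java.util.Arrays.toString(histogramFixedWidth(values, 0.0, 5.0, 5)));
        // [2, 1, 1, 0, 2]
    }
}
```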
U
- data type for out
outputT
- data type for HistogramFixedWidth
output and operandsvalues
- Numeric Tensor
.valueRange
- Shape [2] Tensor
of same dtype
as values
.
values <= value_range[0] will be mapped to hist[0],
values >= value_range[1] will be mapped to hist[-1].nbins
- Scalar int32 Tensor
. Number of histogram bins.public <U extends TNumber,T extends TNumber> HistogramFixedWidth<U> histogramFixedWidth(Operand<T> values, Operand<T> valueRange, Operand<TInt32> nbins, Class<U> dtype)
values
, this operation returns a rank 1 histogram counting
the number of entries in values
that fall into every bin. The bins are
equal width and determined by the arguments value_range
and nbins
.
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf) nbins = 5 value_range = [0.0, 5.0] new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15] with tf.get_default_session() as sess: hist = tf.histogram_fixed_width(new_values, value_range, nbins=5) variables.global_variables_initializer().run() sess.run(hist) => [2, 1, 1, 0, 2]
U
- data type for out
outputU
- data type for HistogramFixedWidth
output and operandsT
- data type for HistogramFixedWidth
output and operandsvalues
- Numeric Tensor
.valueRange
- Shape [2] Tensor
of same dtype
as values
.
values <= value_range[0] will be mapped to hist[0],
values >= value_range[1] will be mapped to hist[-1].nbins
- Scalar int32 Tensor
. Number of histogram bins.dtype
- The value of the dtype attributepublic <T extends TType> Identity<T> identity(Operand<T> input)
T
- data type for output
outputT
- data type for Identity
output and operandsinput
- The input valuepublic IdentityN identityN(Iterable<Operand<?>> input)
This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,
with tf.get_default_graph().gradient_override_map( {'IdentityN': 'OverrideGradientWithG'}): y, _ = identity_n([f(x), x]) @tf.RegisterGradient('OverrideGradientWithG') def ApplyG(op, dy, _): return [None, g(dy)] # Do not backprop to f(x).
input
- The input valuepublic If ifOp(Operand<? extends TType> cond, Iterable<Operand<?>> input, List<Class<? extends TType>> Tout, ConcreteFunction thenBranch, ConcreteFunction elseBranch, If.Options... options)
Selects between StatefulIf
and StatelessIf
based on the statefulness of the function arguments.
cond
- A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True.
input
- A list of input tensors.Tout
- A list of output types.thenBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns.
elseBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns.
options
- carries optional attribute valuespublic <T extends TType> ImmutableConst<T> immutableConst(Class<T> dtype, org.tensorflow.ndarray.Shape shape, String memoryRegionName)
T
- data type for tensor
outputT
- data type for ImmutableConst
output and operandsdtype
- Type of the returned tensor.shape
- Shape of the returned tensor.memoryRegionName
- Name of readonly memory region used by the tensor, see
NewReadOnlyMemoryRegionFromFile in tensorflow::Env.public InitializeTable initializeTable(Operand<? extends TType> tableHandle, Operand<? extends TType> keys, Operand<? extends TType> values)
tableHandle
- Handle to a table which will be initialized.keys
- Keys of type Tkey.values
- Values of type Tval.public InitializeTableFromTextFile initializeTableFromTextFile(Operand<? extends TType> tableHandle, Operand<TString> filename, Long keyIndex, Long valueIndex, InitializeTableFromTextFile.Options... options)
Initializes a table from a text file.
It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the split line based on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index:
A value of -1 means use the line number (starting from zero); expects int64.
A value of -2 means use the whole line content; expects string.
A value >= 0 means use the index (starting at zero) of the split line based on delimiter.
tableHandle
- Handle to a table which will be initialized.filename
- Filename of a vocabulary text file.keyIndex
- Column index in a line to get the table key
values from.valueIndex
- Column index that represents information of a line to get the table
value
values from.options
- carries optional attribute valuespublic <T extends TType> InplaceAdd<T> inplaceAdd(Operand<T> x, Operand<TInt32> i, Operand<T> v)
Computes y = x; y[i, :] += v; return y.
T
- data type for y
outputT
- data type for InplaceAdd
output and operandsx
- A Tensor
of type T.i
- A vector. Indices into the left-most dimension of x
.v
- A Tensor
of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.public <T extends TType> InplaceSub<T> inplaceSub(Operand<T> x, Operand<TInt32> i, Operand<T> v)
Subtracts v from specified rows of x. Computes y = x; y[i, :] -= v; return y.
T
- data type for y
outputT
- data type for InplaceSub
output and operandsx
- A Tensor
of type T.i
- A vector. Indices into the left-most dimension of x
.v
- A Tensor
of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.public <T extends TType> InplaceUpdate<T> inplaceUpdate(Operand<T> x, Operand<TInt32> i, Operand<T> v)
x[i, :] = v; return x
.
Originally this function was mutative; however, for compilation, this
operation creates and operates on a copy of x
.
T
- data type for y
outputT
- data type for InplaceUpdate
output and operandsx
- A tensor of type T
.i
- A vector. Indices into the left-most dimension of x
.v
- A Tensor
of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.public IsVariableInitialized isVariableInitialized(Operand<? extends TType> ref)
ref
- Should be from a Variable
node. May be uninitialized.public KthOrderStatistic kthOrderStatistic(Operand<TFloat32> input, Long k)
input
- The input valuek
- The value of the k attributepublic <T extends TType,U extends TType> LookupTableExport<T,U> lookupTableExport(Operand<? extends TType> tableHandle, Class<T> Tkeys, Class<U> Tvalues)
T
- data type for keys
outputU
- data type for values
outputT
- data type for LookupTableExportV2
output and operandsU
- data type for LookupTableExportV2
output and operandstableHandle
- Handle to the table.Tkeys
- The value of the Tkeys attributeTvalues
- The value of the Tvalues attributepublic <U extends TType> LookupTableFind<U> lookupTableFind(Operand<? extends TType> tableHandle, Operand<? extends TType> keys, Operand<U> defaultValue)
keys
must be of the same type as the keys of the table.
The output values
is of the type of the table values.
The scalar default_value
is the value output for keys not present in the
table. It must also be of the same type as the table values.
U
- data type for values
outputU
- data type for LookupTableFindV2
output and operandstableHandle
- Handle to the table.keys
- Any shape. Keys to look up.defaultValue
- The defaultValue valuepublic LookupTableImport lookupTableImport(Operand<? extends TType> tableHandle, Operand<? extends TType> keys, Operand<? extends TType> values)
keys
must be of the same type as the keys of the table.
The tensor values
must be of the type of the table values.tableHandle
- Handle to the table.keys
- Any shape. Keys to look up.values
- Values to associate with keys.public LookupTableInsert lookupTableInsert(Operand<? extends TType> tableHandle, Operand<? extends TType> keys, Operand<? extends TType> values)
keys
must be of the same type as the keys of the table.
The tensor values
must be of the type of the table values.tableHandle
- Handle to the table.keys
- Any shape. Keys to look up.values
- Values to associate with keys.public LookupTableSize lookupTableSize(Operand<? extends TType> tableHandle)
tableHandle
- Handle to the table.public LoopCond loopCond(Operand<TBool> input)
input
- A boolean scalar, representing the branch predicate of the Switch op.public MakeUnique makeUnique(Operand<TFloat32> input)
input
- The input valuepublic MapClear mapClear(List<Class<? extends TType>> dtypes, MapClear.Options... options)
dtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic MapIncompleteSize mapIncompleteSize(List<Class<? extends TType>> dtypes, MapIncompleteSize.Options... options)
dtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic MapPeek mapPeek(Operand<TInt64> key, Operand<TInt32> indices, List<Class<? extends TType>> dtypes, MapPeek.Options... options)
key
- The key valueindices
- The indices valuedtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic MapSize mapSize(List<Class<? extends TType>> dtypes, MapSize.Options... options)
dtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic MapStage mapStage(Operand<TInt64> key, Operand<TInt32> indices, Iterable<Operand<?>> values, List<Class<? extends TType>> dtypes, MapStage.Options... options)
key
- int64indices
- The indices valuevalues
- a list of tensors.dtypes
- A list of data types that inserted values should adhere to.options
- carries optional attribute valuespublic MapUnstage mapUnstage(Operand<TInt64> key, Operand<TInt32> indices, List<Class<? extends TType>> dtypes, MapUnstage.Options... options)
key
- The key valueindices
- The indices valuedtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic MapUnstageNoKey mapUnstageNoKey(Operand<TInt32> indices, List<Class<? extends TType>> dtypes, MapUnstageNoKey.Options... options)
indices
- The indices valuedtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic <T extends TNumber> Max<T> max(Operand<T> input, Operand<? extends TNumber> axis, Max.Options... options)
input
along the dimensions given in axis
. Unless
keep_dims
is true, the rank of the tensor is reduced by 1 for each entry in
axis
. If keep_dims
is true, the reduced dimensions are
retained with length 1.T
- data type for output
outputT
- data type for Max
output and operandsinput
- The tensor to reduce.axis
- The dimensions to reduce. Must be in the range
[-rank(input), rank(input))
.options
- carries optional attribute valuespublic <T extends TType> Merge<T> merge(Iterable<Operand<T>> inputs)
Forwards the value of an available tensor from inputs to output. Merge waits for at least one of the tensors in inputs to become available. It is usually combined with Switch to implement branching. Merge forwards the first tensor to become available to output, and sets value_index to its index in inputs.
T - data type for the Merge output and operands
inputs - The input tensors, exactly one of which will become available.

public <T extends TNumber> Min<T> min(Operand<T> input, Operand<? extends TNumber> axis, Min.Options... options)
Computes the minimum of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Min output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TType> MirrorPad<T> mirrorPad(Operand<T> input, Operand<? extends TNumber> paddings, String mode)
Pads a tensor with mirrored values. This operation pads input with mirrored values according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many values to add after the contents of input in that dimension. Both paddings[D, 0] and paddings[D, 1] must be no greater than input.dim_size(D) (or input.dim_size(D) - 1) if copy_border is true (or false, respectively).
The padded size of each dimension D of the output is:
paddings(D, 0) + input.dim_size(D) + paddings(D, 1)
For example:
# 't' is [[1, 2, 3], [4, 5, 6]]. # 'paddings' is [[1, 1], [2, 2]]. # 'mode' is SYMMETRIC. # rank of 't' is 2. pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2] [2, 1, 1, 2, 3, 3, 2] [5, 4, 4, 5, 6, 6, 5] [5, 4, 4, 5, 6, 6, 5]]
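The REFLECT/SYMMETRIC distinction can be sketched in plain Java on a 1-D array. MirrorPadDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of REFLECT vs SYMMETRIC padding on a 1-D array;
// not the TensorFlow implementation.
public class MirrorPadDemo {
    static int[] mirrorPad(int[] in, int before, int after, boolean symmetric) {
        int n = in.length;
        int[] out = new int[before + n + after];
        for (int i = 0; i < out.length; i++) {
            int src = i - before;
            // Reflect out-of-range indices back into [0, n).
            // SYMMETRIC includes the border element, REFLECT does not.
            while (src < 0 || src >= n) {
                if (src < 0) src = symmetric ? -src - 1 : -src;
                else src = symmetric ? 2 * n - src - 1 : 2 * n - src - 2;
            }
            out[i] = in[src];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] t = {1, 2, 3};
        System.out.println(java.util.Arrays.toString(mirrorPad(t, 0, 2, false))); // [1, 2, 3, 2, 1]
        System.out.println(java.util.Arrays.toString(mirrorPad(t, 0, 2, true)));  // [1, 2, 3, 3, 2]
    }
}
```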
T - data type for the MirrorPad output and operands
input - The input tensor to be padded.
paddings - A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of input.
mode - Either REFLECT or SYMMETRIC. In reflect mode the padded regions do not include the borders, while in symmetric mode the padded regions do include the borders. For example, if input is [1, 2, 3] and paddings is [0, 2], then the output is [1, 2, 3, 2, 1] in reflect mode, and [1, 2, 3, 3, 2] in symmetric mode.

public MlirPassthroughOp mlirPassthroughOp(Iterable<Operand<?>> inputs, String mlirModule, List<Class<? extends TType>> Toutputs)
import tensorflow as tf from tensorflow.compiler.mlir.tensorflow.gen_mlir_passthrough_op import mlir_passthrough_op mlir_module = '''python func @main(%arg0 : tensor<10xf32>, %arg1 : tensor<10xf32>) -> tensor<10x10xf32> { %add = "magic.op"(%arg0, %arg1) : (tensor<10xf32>, tensor<10xf32>) -> tensor<10x10xf32> return %ret : tensor<10x10xf32> } ''' @tf.function def foo(x, y): return mlir_passthrough_op([x, y], mlir_module, Toutputs=[tf.float32]) graph_def = foo.get_concrete_function(tf.TensorSpec([10], tf.float32), tf.TensorSpec([10], tf.float32)).graph.as_graph_def()
inputs - The inputs value
mlirModule - The value of the mlirModule attribute
Toutputs - The value of the Toutputs attribute

public <T extends TType,U extends TType> MutableDenseHashTable mutableDenseHashTable(Operand<T> emptyKey, Operand<T> deletedKey, Class<U> valueDtype, MutableDenseHashTable.Options... options)
This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
T, U - data types for the MutableDenseHashTableV2 output and operands
emptyKey - The key used to represent empty key buckets internally. Must not be used in insert or lookup operations.
deletedKey - The deletedKey value
valueDtype - Type of the table values.
options - carries optional attribute values

public <T extends TType,U extends TType> MutableHashTable mutableHashTable(Class<T> keyDtype, Class<U> valueDtype, MutableHashTable.Options... options)
T, U - data types for the MutableHashTableV2 output and operands
keyDtype - Type of the table keys.
valueDtype - Type of the table values.
options - carries optional attribute values

public <T extends TType,U extends TType> MutableHashTableOfTensors mutableHashTableOfTensors(Class<T> keyDtype, Class<U> valueDtype, MutableHashTableOfTensors.Options... options)

T, U - data types for the MutableHashTableOfTensorsV2 output and operands
keyDtype - Type of the table keys.
valueDtype - Type of the table values.
options - carries optional attribute values

public Mutex mutex(Mutex.Options... options)

Creates a Mutex resource that can be locked by MutexLock.
options - carries optional attribute values

public MutexLock mutexLock(Operand<? extends TType> mutex)
Locks a mutex resource. The output is the lock; so long as the lock tensor is alive, any other request to use MutexLock with this mutex will wait. This is particularly useful for creating a critical section when used in conjunction with MutexLockIdentity:
mutex = mutex_v2( shared_name=handle_name, container=container, name=name) def execute_in_critical_section(fn, *args, **kwargs): lock = gen_resource_variable_ops.mutex_lock(mutex) with ops.control_dependencies([lock]): r = fn(*args, **kwargs) with ops.control_dependencies(nest.flatten(r)): with ops.colocate_with(mutex): ensure_lock_exists = mutex_lock_identity(lock) # Make sure that if any element of r is accessed, all of # them are executed together. r = nest.map_structure(tf.identity, r) with ops.control_dependencies([ensure_lock_exists]): return nest.map_structure(tf.identity, r)
While fn is running in the critical section, no other functions which wish to use this critical section may run.
Often the use case is that two executions of the same graph, in parallel, wish to run fn; and we wish to ensure that only one of them executes at a time. This is especially important if fn modifies one or more variables at a time.
It is also useful if two separate functions must share a resource, but we wish to ensure the usage is exclusive.
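The same exclusive-execution pattern can be sketched with a plain JVM lock (java.util.concurrent.locks.ReentrantLock) rather than the graph-level mutex resource; CriticalSectionDemo and executeInCriticalSection are illustrative names:

```java
import java.util.concurrent.locks.ReentrantLock;

// Plain-Java analogue of execute_in_critical_section: only one caller
// runs fn at a time, so concurrent updates are never lost.
public class CriticalSectionDemo {
    private static final ReentrantLock mutex = new ReentrantLock();
    static int sharedCounter = 0;

    static void executeInCriticalSection(Runnable fn) {
        mutex.lock();        // analogue of mutex_lock(mutex)
        try {
            fn.run();
        } finally {
            mutex.unlock();  // analogue of the lock tensor going out of scope
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++)
                    executeInCriticalSection(() -> sharedCounter++);
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(sharedCounter); // 4000: no lost updates
    }
}
```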
mutex - The mutex resource to lock.

public <T extends TType> NextIteration<T> nextIteration(Operand<T> data)

Makes its input available to the next iteration.
T - data type for the NextIteration output and operands
data - The tensor to be made available to the next iteration.

public NoOp noOp()

Does nothing. Only useful as a placeholder for control edges.
public <U extends TType> OneHot<U> oneHot(Operand<? extends TNumber> indices, Operand<TInt32> depth, Operand<U> onValue, Operand<U> offValue, OneHot.Options... options)
Returns a one-hot tensor. The locations represented by indices take value on_value, while all other locations take value off_value.
If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).
If indices is a scalar the output shape will be a vector of length depth.
If indices is a vector of length features, the output shape will be:
features x depth if axis == -1
depth x features if axis == 0
If indices is a matrix (batch) with shape [batch, features], the output shape will be:
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
Examples
Suppose that
indices = [0, 2, -1, 1] depth = 3 on_value = 5.0 off_value = 0.0 axis = -1
Then output is [4 x 3]
:
output = [5.0 0.0 0.0] // one_hot(0) [0.0 0.0 5.0] // one_hot(2) [0.0 0.0 0.0] // one_hot(-1) [0.0 5.0 0.0] // one_hot(1)
Suppose that
indices = [0, 2, -1, 1] depth = 3 on_value = 0.0 off_value = 3.0 axis = 0
Then output is [3 x 4]
:
output = [0.0 3.0 3.0 3.0] [3.0 3.0 3.0 0.0] [3.0 3.0 3.0 3.0] [3.0 0.0 3.0 3.0] // ^ one_hot(0) // ^ one_hot(2) // ^ one_hot(-1) // ^ one_hot(1)
Suppose that
indices = [[0, 2], [1, -1]] depth = 3 on_value = 1.0 off_value = 0.0 axis = -1
Then output is [2 x 2 x 3]
:
output = [ [1.0, 0.0, 0.0] // one_hot(0) [0.0, 0.0, 1.0] // one_hot(2) ][ [0.0, 1.0, 0.0] // one_hot(1) [0.0, 0.0, 0.0] // one_hot(-1) ]
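The first example above can be reproduced in plain Java, independent of the TensorFlow runtime. OneHotDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of one-hot expansion for a vector of indices
// with axis == -1; not the TensorFlow implementation.
public class OneHotDemo {
    static float[][] oneHot(int[] indices, int depth, float onValue, float offValue) {
        float[][] out = new float[indices.length][depth];
        for (int i = 0; i < indices.length; i++)
            for (int j = 0; j < depth; j++)
                // Out-of-range indices (e.g. -1) produce an all-off row.
                out[i][j] = (indices[i] == j) ? onValue : offValue;
        return out;
    }

    public static void main(String[] args) {
        // indices = [0, 2, -1, 1], depth = 3, on_value = 5.0, off_value = 0.0
        float[][] out = oneHot(new int[]{0, 2, -1, 1}, 3, 5.0f, 0.0f);
        for (float[] row : out)
            System.out.println(java.util.Arrays.toString(row));
    }
}
```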
U - data type for the OneHot output and operands
indices - A tensor of indices.
depth - A scalar defining the depth of the one hot dimension.
onValue - A scalar defining the value to fill in output when indices[j] = i.
offValue - A scalar defining the value to fill in output when indices[j] != i.
options - carries optional attribute values

public <T extends TType> Ones<T> ones(Operand<? extends TNumber> dims, Class<T> type)
Creates a tensor of the given shape with all elements set to one.
dims - a 1-D operand that represents the shape of the output tensor
type - the output tensor type class. Cannot be TString.
Throws IllegalArgumentException - if the tensor type or shape cannot be initialized with ones.

public <T extends TType> OnesLike<T> onesLike(Operand<T> x)

Returns a tensor of ones with the same shape and type as x.
T - data type for the OnesLike output and operands
x - a tensor of type T.

public OrderedMapClear orderedMapClear(List<Class<? extends TType>> dtypes, OrderedMapClear.Options... options)
dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapIncompleteSize orderedMapIncompleteSize(List<Class<? extends TType>> dtypes, OrderedMapIncompleteSize.Options... options)

dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapPeek orderedMapPeek(Operand<TInt64> key, Operand<TInt32> indices, List<Class<? extends TType>> dtypes, OrderedMapPeek.Options... options)

key - The key value
indices - The indices value
dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapSize orderedMapSize(List<Class<? extends TType>> dtypes, OrderedMapSize.Options... options)

dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapStage orderedMapStage(Operand<TInt64> key, Operand<TInt32> indices, Iterable<Operand<?>> values, List<Class<? extends TType>> dtypes, OrderedMapStage.Options... options)

key - the key value (int64)
indices - The indices value
values - a list of tensors; dtypes is a list of data types that inserted values should adhere to
dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapUnstage orderedMapUnstage(Operand<TInt64> key, Operand<TInt32> indices, List<Class<? extends TType>> dtypes, OrderedMapUnstage.Options... options)

key - The key value
indices - The indices value
dtypes - The value of the dtypes attribute
options - carries optional attribute values

public OrderedMapUnstageNoKey orderedMapUnstageNoKey(Operand<TInt32> indices, List<Class<? extends TType>> dtypes, OrderedMapUnstageNoKey.Options... options)

indices - The indices value
dtypes - The value of the dtypes attribute
options - carries optional attribute values

public <T extends TType> Pad<T> pad(Operand<T> input, Operand<? extends TNumber> paddings, Operand<T> constantValues)
Pads a tensor. This operation pads input according to the paddings and constant_values you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many padding values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many padding values to add after the contents of input in that dimension. constant_values is a scalar tensor of the same type as input that indicates the value to use for padding input.
The padded size of each dimension D of the output is:
paddings(D, 0) + input.dim_size(D) + paddings(D, 1)
For example:
# 't' is [[1, 1], [2, 2]] # 'paddings' is [[1, 1], [2, 2]] # 'constant_values' is 0 # rank of 't' is 2 pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0] [0, 0, 1, 1, 0, 0] [0, 0, 2, 2, 0, 0] [0, 0, 0, 0, 0, 0]]
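The example above can be reproduced in plain Java, independent of the TensorFlow runtime. PadDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of constant padding for a 2-D array;
// not the TensorFlow implementation.
public class PadDemo {
    static int[][] pad(int[][] t, int[][] paddings, int constantValue) {
        int rows = paddings[0][0] + t.length + paddings[0][1];
        int cols = paddings[1][0] + t[0].length + paddings[1][1];
        int[][] out = new int[rows][cols];
        for (int[] row : out)
            java.util.Arrays.fill(row, constantValue);
        // Copy the original block into the interior of the output.
        for (int i = 0; i < t.length; i++)
            for (int j = 0; j < t[0].length; j++)
                out[i + paddings[0][0]][j + paddings[1][0]] = t[i][j];
        return out;
    }

    public static void main(String[] args) {
        int[][] t = {{1, 1}, {2, 2}};
        int[][] p = {{1, 1}, {2, 2}};
        for (int[] row : pad(t, p, 0))
            System.out.println(java.util.Arrays.toString(row));
    }
}
```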
T - data type for the PadV2 output and operands
input - The input value
paddings - The paddings value
constantValues - The constantValues value

public <T extends TType> ParallelConcat<T> parallelConcat(Iterable<Operand<T>> values, org.tensorflow.ndarray.Shape shape)
Concatenates a list of N tensors along the first dimension.
The input tensors are all required to have size 1 in the first dimension.
For example:
# 'x' is [[1, 4]] # 'y' is [[2, 5]] # 'z' is [[3, 6]] parallel_concat([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
The difference between concat and parallel_concat is that concat requires all of the inputs to be computed before the operation begins, but doesn't require that the input shapes be known during graph construction. Parallel concat will copy pieces of the input into the output as they become available; in some situations this can provide a performance benefit.
T - data type for the ParallelConcat output and operands
values - Tensors to be concatenated. All must have size 1 in the first dimension and the same shape.
shape - the final shape of the result; should be equal to the shape of any input but with the number of input values in the first dimension.

public <T extends TType> ParallelDynamicStitch<T> parallelDynamicStitch(Iterable<Operand<TInt32>> indices, Iterable<Operand<T>> data)
Interleaves the values from the data tensors into a single tensor.
Builds a merged tensor such that
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
For example, if each indices[m] is scalar or vector, we have
# Scalar indices: merged[indices[m], ...] = data[m][...] # Vector indices: merged[indices[m][i], ...] = data[m][i, ...]
Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is
merged.shape = [max(indices)] + constant
Values may be merged in parallel, so if an index appears in both indices[m][i] and indices[n][j], the result may be invalid. This differs from the normal DynamicStitch operator, which defines the behavior in that case.
For example:
indices[0] = 6 indices[1] = [4, 1] indices[2] = [[5, 2], [0, 3]] data[0] = [61, 62] data[1] = [[41, 42], [11, 12]] data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]] merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42], [51, 52], [61, 62]]
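A scalar-element analogue of the example above can be written in plain Java; DynamicStitchDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of dynamic-stitch for scalar elements: each
// indices[m][i] says where data[m][i] lands in the merged output.
// Distinct indices give a well-defined result; with duplicates, the
// parallel op's result may be invalid (here, last write wins).
public class DynamicStitchDemo {
    static int[] stitch(int[][] indices, int[][] data) {
        int max = -1;
        for (int[] idx : indices)
            for (int i : idx) max = Math.max(max, i);
        int[] merged = new int[max + 1];
        for (int m = 0; m < indices.length; m++)
            for (int i = 0; i < indices[m].length; i++)
                merged[indices[m][i]] = data[m][i];
        return merged;
    }

    public static void main(String[] args) {
        int[][] indices = {{6}, {4, 1}, {5, 2, 0, 3}};
        int[][] data = {{61}, {41, 11}, {51, 21, 1, 31}};
        System.out.println(java.util.Arrays.toString(stitch(indices, data)));
        // [1, 11, 21, 31, 41, 51, 61]
    }
}
```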
This method can be used to merge partitions created by dynamic_partition, as illustrated in the following example:
# Apply function (increments x_i) on elements for which a certain condition # apply (x_i != -1 in this example). x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4]) condition_mask=tf.not_equal(x,tf.constant(-1.)) partitioned_data = tf.dynamic_partition( x, tf.cast(condition_mask, tf.int32) , 2) partitioned_data[1] = partitioned_data[1] + 1.0 condition_indices = tf.dynamic_partition( tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2) x = tf.dynamic_stitch(condition_indices, partitioned_data) # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain # unchanged.
T - data type for the ParallelDynamicStitch merged output and operands
indices - The indices value
data - The data value

public PartitionedCall partitionedCall(Iterable<Operand<?>> args, List<Class<? extends TType>> Tout, ConcreteFunction f, PartitionedCall.Options... options)
Returns f(inputs), where f's body is placed and partitioned.
Selects between StatefulPartitionedCall and StatelessPartitionedCall based on the statefulness of the function arguments.
args - A list of input tensors.
Tout - A list of output types.
f - A function that takes 'args', a list of tensors, and returns 'output', another list of tensors. Input and output types are specified by 'Tin' and 'Tout'. The function body of f will be placed and partitioned across devices, setting this op apart from the regular Call op. This op is stateful.
options - carries optional attribute values

public <T extends TType> Placeholder<T> placeholder(Class<T> dtype, Placeholder.Options... options)
A placeholder op for a value that will be fed into the computation.
T - data type for the Placeholder output and operands
dtype - The type of elements in the tensor.
options - carries optional attribute values

public <T extends TType> PlaceholderWithDefault<T> placeholderWithDefault(Operand<T> input, org.tensorflow.ndarray.Shape shape)

A placeholder op that passes through input when its output is not fed.
T - data type for the PlaceholderWithDefault output and operands
input - The default value to produce when output is not fed.
shape - The (possibly partial) shape of the tensor.

public Print print(Operand<TString> input, Print.Options... options)

Prints a string scalar.
input - The string scalar to print.
options - carries optional attribute values

public <T extends TType> Prod<T> prod(Operand<T> input, Operand<? extends TNumber> axis, Prod.Options... options)
Computes the product of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Prod output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TType> QuantizedReshape<T> quantizedReshape(Operand<T> tensor, Operand<? extends TNumber> shape, Operand<TFloat32> inputMin, Operand<TFloat32> inputMax)

Reshapes a quantized tensor as per the Reshape op.
T - data type for the QuantizedReshape output and operands
tensor - The tensor value
shape - Defines the shape of the output tensor.
inputMin - The minimum value of the input.
inputMax - The maximum value of the input.

public <T extends TNumber> Range<T> range(Operand<T> start, Operand<T> limit, Operand<T> delta)
Creates a sequence of numbers. This operation creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.
For example:
# 'start' is 3 # 'limit' is 18 # 'delta' is 3 tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
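The same semantics can be sketched in plain Java for a positive integer delta; RangeDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of the range op's semantics for integers with a
// positive delta; not the TensorFlow implementation.
public class RangeDemo {
    static int[] range(int start, int limit, int delta) {
        // Number of entries: ceil((limit - start) / delta), floored at 0.
        int n = Math.max(0, (limit - start + delta - 1) / delta);
        int[] out = new int[n];
        for (int i = 0; i < n; i++)
            out[i] = start + i * delta;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(range(3, 18, 3))); // [3, 6, 9, 12, 15]
    }
}
```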
T - data type for the Range output and operands
start - 0-D (scalar). First entry in the sequence.
limit - 0-D (scalar). Upper limit of sequence, exclusive.
delta - 0-D (scalar). Optional. Default is 1. Number that increments start.

public Rank rank(Operand<? extends TType> input)
Returns the rank of a tensor. This operation returns an integer representing the rank of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] # shape of tensor 't' is [2, 2, 3] rank(t) ==> 3
Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as "order", "degree", or "ndims."
input - The input value

public <T extends TType> ReadVariableOp<T> readVariableOp(Operand<? extends TType> resource, Class<T> dtype)

Reads the value of a variable.
The value returned by this operation is guaranteed to be influenced by all the writes on which this operation depends directly or indirectly, and to not be influenced by any of the writes which depend directly or indirectly on this operation.
T - data type for the ReadVariableOp output and operands
resource - handle to the resource in which to store the variable.
dtype - the dtype of the value.

public ReduceAll reduceAll(Operand<TBool> input, Operand<? extends TNumber> axis, ReduceAll.Options... options)
Computes the "logical and" of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public ReduceAny reduceAny(Operand<TBool> input, Operand<? extends TNumber> axis, ReduceAny.Options... options)

Computes the "logical or" of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TNumber> ReduceMax<T> reduceMax(Operand<T> input, Operand<? extends TNumber> axis, ReduceMax.Options... options)

Computes the maximum of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Max output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TNumber> ReduceMin<T> reduceMin(Operand<T> input, Operand<? extends TNumber> axis, ReduceMin.Options... options)

Computes the minimum of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Min output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TType> ReduceProd<T> reduceProd(Operand<T> input, Operand<? extends TNumber> axis, ReduceProd.Options... options)

Computes the product of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Prod output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TType> ReduceSum<T> reduceSum(Operand<T> input, Operand<? extends TNumber> axis, ReduceSum.Options... options)

Computes the sum of elements across dimensions of a tensor. Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
T - data type for the Sum output and operands
input - The tensor to reduce.
axis - The dimensions to reduce. Must be in the range [-rank(input), rank(input)).
options - carries optional attribute values

public <T extends TType> RefNextIteration<T> refNextIteration(Operand<T> data)
Makes its input available to the next iteration.
T - data type for the RefNextIteration output and operands
data - The tensor to be made available to the next iteration.

public <T extends TType> RefSelect<T> refSelect(Operand<TInt32> index, Iterable<Operand<T>> inputs)

Forwards the indexth element of inputs to output.
T - data type for the RefSelect output and operands
index - A scalar that determines the input that gets selected.
inputs - A list of ref tensors, one of which will be forwarded to output.

public <T extends TType> RefSwitch<T> refSwitch(Operand<T> data, Operand<TBool> pred)
Forwards the ref tensor data to the output port determined by pred.
If pred is true, the data input is forwarded to output_true. Otherwise, the data goes to output_false.
See also Switch and Merge.
T - data type for the RefSwitch output_false output and operands
data - The ref tensor to be forwarded to the appropriate output.
pred - A scalar that specifies which output port will receive data.

public RemoteCall remoteCall(Operand<TString> target, Iterable<Operand<?>> args, List<Class<? extends TType>> Tout, ConcreteFunction f)

Runs function f on a remote device indicated by target.
target - A fully specified device name where we want to run the function.
args - A list of arguments for the function.
Tout - The type list for the return values.
f - The function to run remotely.

public <T extends TType> Reshape<T> reshape(Operand<T> tensor, Operand<? extends TNumber> shape)
Reshapes a tensor. Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.
If one component of the 1-D tensor shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape may be unknown.
The shape must be 1-D and the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.
It is an error if shape is not 1-D.
For example:
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9] # tensor 't' has shape [9] reshape(t, [3, 3]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # tensor 't' is [[[1, 1], [2, 2]], # [[3, 3], [4, 4]]] # tensor 't' has shape [2, 2, 2] reshape(t, [2, 4]) ==> [[1, 1, 2, 2], [3, 3, 4, 4]] # tensor 't' is [[[1, 1, 1], # [2, 2, 2]], # [[3, 3, 3], # [4, 4, 4]], # [[5, 5, 5], # [6, 6, 6]]] # tensor 't' has shape [3, 2, 3] # pass '[-1]' to flatten 't' reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6] # -1 can also be used to infer the shape # -1 is inferred to be 9: reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], [4, 4, 4, 5, 5, 5, 6, 6, 6]] # -1 is inferred to be 2: reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3], [4, 4, 4, 5, 5, 5, 6, 6, 6]] # -1 is inferred to be 3: reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [6, 6, 6]]] # tensor 't' is [7] # shape `[]` reshapes to a scalar reshape(t, []) ==> 7
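The -1 inference rule can be sketched in plain Java: since the data stays in row-major order, reshape only has to resolve the shape metadata. ReshapeDemo and inferShape are illustrative names, not part of the API:

```java
// Plain-Java sketch of resolving a single -1 wildcard in a target
// shape; not the TensorFlow implementation.
public class ReshapeDemo {
    static int[] inferShape(int numElements, int[] shape) {
        int known = 1, wildcard = -1;
        for (int i = 0; i < shape.length; i++) {
            if (shape[i] == -1) wildcard = i;   // at most one -1 allowed
            else known *= shape[i];
        }
        int[] out = shape.clone();
        // The -1 dimension is whatever keeps the total size constant.
        if (wildcard >= 0) out[wildcard] = numElements / known;
        return out;
    }

    public static void main(String[] args) {
        // 9 elements reshaped to [3, -1]: -1 is inferred to be 3.
        System.out.println(java.util.Arrays.toString(inferShape(9, new int[]{3, -1})));
        // 18 elements reshaped to [2, -1, 3]: -1 is inferred to be 3.
        System.out.println(java.util.Arrays.toString(inferShape(18, new int[]{2, -1, 3})));
    }
}
```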
T - data type for the Reshape output and operands
tensor - The tensor value
shape - Defines the shape of the output tensor.

public <T extends TNumber> ResourceCountUpTo<T> resourceCountUpTo(Operand<? extends TType> resource, Long limit, Class<T> T)
Increments the variable pointed to by resource until it reaches limit.
T - data type for the ResourceCountUpTo output and operands
resource - Should be from a scalar Variable node.
limit - If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
T - The value of the T attribute

public <U extends TType> ResourceGather<U> resourceGather(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Class<U> dtype, ResourceGather.Options... options)
Gathers slices from the variable pointed to by resource according to indices.
indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:
# Scalar indices output[:, ..., :] = params[indices, :, ... :] # Vector indices output[i, :, ..., :] = params[indices[i], :, ... :] # Higher rank indices output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
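The vector-indices case can be sketched in plain Java as gathering rows of a 2-D array: output[i] = params[indices[i]]. GatherDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of gathering rows of a 2-D "variable" by a vector
// of indices; not the TensorFlow implementation.
public class GatherDemo {
    static int[][] gather(int[][] params, int[] indices) {
        int[][] out = new int[indices.length][];
        for (int i = 0; i < indices.length; i++)
            out[i] = params[indices[i]].clone(); // output[i, :] = params[indices[i], :]
        return out;
    }

    public static void main(String[] args) {
        int[][] params = {{10, 11}, {20, 21}, {30, 31}};
        for (int[] row : gather(params, new int[]{2, 0}))
            System.out.println(java.util.Arrays.toString(row));
    }
}
```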
U - data type for the ResourceGather output and operands
resource - The resource value
indices - The indices value
dtype - The value of the dtype attribute
options - carries optional attribute values

public <U extends TType> ResourceGatherNd<U> resourceGatherNd(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Class<U> dtype)

U - data type for the ResourceGatherNd output and operands
resource - The resource value
indices - The indices value
dtype - The value of the dtype attribute

public ResourceScatterAdd resourceScatterAdd(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)
Adds sparse updates to the variable referenced by resource.
This operation computes
# Scalar indices ref[indices, ...] += updates[...] # Vector indices (for each i) ref[indices[i], ...] += updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
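The vector-indices case can be sketched in plain Java on a 1-D "variable"; ScatterAddDemo is an illustrative name, not part of the API:

```java
// Plain-Java sketch of scatter-add on a 1-D array: contributions from
// duplicate indices accumulate; not the TensorFlow implementation.
public class ScatterAddDemo {
    static void scatterAdd(int[] ref, int[] indices, int[] updates) {
        for (int i = 0; i < indices.length; i++)
            ref[indices[i]] += updates[i]; // ref[indices[i]] += updates[i]
    }

    public static void main(String[] args) {
        int[] ref = {1, 2, 3, 4};
        // Index 0 appears twice, so both contributions add.
        scatterAdd(ref, new int[]{0, 2, 0}, new int[]{10, 20, 30});
        System.out.println(java.util.Arrays.toString(ref)); // [41, 2, 23, 4]
    }
}
```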
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.

public ResourceScatterDiv resourceScatterDiv(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)

Divides sparse updates into the variable referenced by resource.
This operation computes
# Scalar indices ref[indices, ...] /= updates[...] # Vector indices (for each i) ref[indices[i], ...] /= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.

public ResourceScatterMax resourceScatterMax(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)
Reduces sparse updates into the variable referenced by resource using the max operation.
This operation computes
# Scalar indices ref[indices, ...] = max(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.

public ResourceScatterMin resourceScatterMin(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)

Reduces sparse updates into the variable referenced by resource using the min operation.
This operation computes
# Scalar indices ref[indices, ...] = min(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.

public ResourceScatterMul resourceScatterMul(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)

Multiplies sparse updates into the variable referenced by resource.
This operation computes
# Scalar indices ref[indices, ...] *= updates[...] # Vector indices (for each i) ref[indices[i], ...] *= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.

public ResourceScatterNdAdd resourceScatterNdAdd(Operand<? extends TType> ref, Operand<? extends TNumber> indices, Operand<? extends TType> updates, ResourceScatterNdAdd.Options... options)
Applies sparse addition to individual values or slices in a Variable.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) add = tf.scatter_nd_add(ref, indices, updates) with tf.Session() as sess: print sess.run(add)
The resulting update to ref would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd
for more details about how to make updates to
slices.
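The rank-1 case above can be sketched in a few lines of plain Python (illustrative only, not the TF API; the helper name is hypothetical):

```python
# Illustrative only: scatter-add with a rank-1 ref and indices of shape [N, 1].
def scatter_nd_add_1d(ref, indices, updates):
    out = list(ref)
    for (idx,), u in zip(indices, updates):
        out[idx] += u
    return out

result = scatter_nd_add_1d([1, 2, 3, 4, 5, 6, 7, 8],
                           [[4], [3], [1], [7]], [9, 10, 11, 12])
# result reproduces the documented output [1, 13, 3, 14, 14, 6, 7, 20]
```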
ref - A resource handle. Must be from a VarHandleOp.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of values to add to ref.
options - carries optional attribute values
public ResourceScatterNdMax resourceScatterNdMax(Operand<? extends TType> ref, Operand<? extends TNumber> indices, Operand<? extends TType> updates, ResourceScatterNdMax.Options... options)
ref - A resource handle. Must be from a VarHandleOp.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of values whose element-wise max is taken with ref.
options - carries optional attribute values
public ResourceScatterNdMin resourceScatterNdMin(Operand<? extends TType> ref, Operand<? extends TNumber> indices, Operand<? extends TType> updates, ResourceScatterNdMin.Options... options)
ref - A resource handle. Must be from a VarHandleOp.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of values whose element-wise min is taken with ref.
options - carries optional attribute values
public ResourceScatterNdSub resourceScatterNdSub(Operand<? extends TType> ref, Operand<? extends TNumber> indices, Operand<? extends TType> updates, ResourceScatterNdSub.Options... options)
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref.
It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) sub = tf.scatter_nd_sub(ref, indices, updates) with tf.Session() as sess: print(sess.run(sub))
The resulting update to ref would look like this:
[1, -9, 3, -6, -4, 6, 7, -4]
See tf.scatter_nd for more details about how to make updates to slices.
ref - A resource handle. Must be from a VarHandleOp.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of values to subtract from ref.
options - carries optional attribute values
public ResourceScatterNdUpdate resourceScatterNdUpdate(Operand<? extends TType> ref, Operand<? extends TNumber> indices, Operand<? extends TType> updates, ResourceScatterNdUpdate.Options... options)
Applies sparse updates to individual values or slices within a given variable according to indices.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref.
It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) update = tf.scatter_nd_update(ref, indices, updates) with tf.Session() as sess: print(sess.run(update))
The resulting update to ref would look like this:
[1, 11, 3, 10, 9, 6, 7, 12]
See tf.scatter_nd for more details about how to make updates to slices.
ref - A resource handle. Must be from a VarHandleOp.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of updated values to store in ref.
options - carries optional attribute values
public ResourceScatterSub resourceScatterSub(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)
Subtracts sparse updates from the variable referenced by resource.
This operation computes
# Scalar indices ref[indices, ...] -= updates[...] # Vector indices (for each i) ref[indices[i], ...] -= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to subtract from ref.
public ResourceScatterUpdate resourceScatterUpdate(Operand<? extends TType> resource, Operand<? extends TNumber> indices, Operand<? extends TType> updates)
Assigns sparse updates to the variable referenced by resource.
This operation computes
# Scalar indices ref[indices, ...] = updates[...] # Vector indices (for each i) ref[indices[i], ...] = updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
resource - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to store in ref.
public <T extends TNumber> ResourceStridedSliceAssign resourceStridedSliceAssign(Operand<? extends TType> ref, Operand<T> begin, Operand<T> end, Operand<T> strides, Operand<? extends TType> value, ResourceStridedSliceAssign.Options... options)
Assign value to the sliced l-value reference of ref.
The values of value are assigned to the positions in the variable ref that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice.
NOTE this op currently does not support broadcasting and so value's shape must be exactly the shape produced by the slice of ref.
T - data type for ResourceStridedSliceAssign output and operands
ref - The ref value
begin - The begin value
end - The end value
strides - The strides value
value - The value value
options - carries optional attribute values
public <T extends TType> Reverse<T> reverse(Operand<T> tensor, Operand<? extends TNumber> axis)
Given a tensor, and an int32 tensor axis representing the set of dimensions of tensor to reverse, this operation reverses each dimension i for which there exists j s.t. axis[j] == i.
tensor can have up to 8 dimensions. The number of dimensions specified in axis may be 0 or more entries. If an index is specified more than once, an InvalidArgument error is raised.
For example:
# tensor 't' is [[[[ 0, 1, 2, 3], # [ 4, 5, 6, 7], # [ 8, 9, 10, 11]], # [[12, 13, 14, 15], # [16, 17, 18, 19], # [20, 21, 22, 23]]]] # tensor 't' shape is [1, 2, 3, 4] # 'dims' is [3] or 'dims' is [-1] reverse(t, dims) ==> [[[[ 3, 2, 1, 0], [ 7, 6, 5, 4], [ 11, 10, 9, 8]], [[15, 14, 13, 12], [19, 18, 17, 16], [23, 22, 21, 20]]]] # 'dims' is '[1]' (or 'dims' is '[-3]') reverse(t, dims) ==> [[[[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23] [[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]]] # 'dims' is '[2]' (or 'dims' is '[-2]') reverse(t, dims) ==> [[[[8, 9, 10, 11], [4, 5, 6, 7], [0, 1, 2, 3]] [[20, 21, 22, 23], [16, 17, 18, 19], [12, 13, 14, 15]]]]
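The behavior can be sketched in plain Python over nested lists (illustrative only; the helper is hypothetical and assumes non-negative axis indices):

```python
# Illustrative only: reverse the dimensions of a nested-list "tensor"
# whose (non-negative) indices appear in the set `axis`.
def reverse(tensor, axis):
    def rec(t, depth):
        if not isinstance(t, list):
            return t
        rows = [rec(x, depth + 1) for x in t]
        # Flip this dimension only if it is listed in `axis`.
        return rows[::-1] if depth in axis else rows
    return rec(tensor, 0)

t = [[0, 1, 2], [3, 4, 5]]
rows_flipped = reverse(t, {0})  # [[3, 4, 5], [0, 1, 2]]
cols_flipped = reverse(t, {1})  # [[2, 1, 0], [5, 4, 3]]
```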
T - data type for output output
T - data type for ReverseV2 output and operands
tensor - Up to 8-D.
axis - 1-D. The indices of the dimensions to reverse. Must be in the range [-rank(tensor), rank(tensor)).
public <T extends TType> ReverseSequence<T> reverseSequence(Operand<T> input, Operand<? extends TNumber> seqLengths, Long seqDim, ReverseSequence.Options... options)
This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim.
The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].
The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed.
For example:
# Given this: batch_dim = 0 seq_dim = 1 input.dims = (4, 8, ...) seq_lengths = [7, 2, 3, 5] # then slices of input are reversed on seq_dim, but only up to seq_lengths: output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...] output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...] output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...] output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...] # while entries past seq_lens are copied through: output[0, 7:, :, ...] = input[0, 7:, :, ...] output[1, 2:, :, ...] = input[1, 2:, :, ...] output[2, 3:, :, ...] = input[2, 3:, :, ...] output[3, 5:, :, ...] = input[3, 5:, :, ...]
In contrast, if:
# Given this: batch_dim = 2 seq_dim = 0 input.dims = (8, ?, 4, ...) seq_lengths = [7, 2, 3, 5] # then slices of input are reversed on seq_dim, but only up to seq_lengths: output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...] output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...] output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...] output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...] # while entries past seq_lens are copied through: output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...] output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...] output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...] output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
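For the common batch_dim = 0, seq_dim = 1 case, a plain-Python sketch of the semantics (illustrative only, not the TF API; the helper is hypothetical) is:

```python
# Illustrative only: reverse_sequence for batch_dim = 0, seq_dim = 1 on a
# list of rows; entries past each seq_length are copied through unchanged.
def reverse_sequence(rows, seq_lengths):
    return [row[:n][::-1] + row[n:] for row, n in zip(rows, seq_lengths)]

x = [[1, 2, 3, 4], [5, 6, 7, 8]]
out = reverse_sequence(x, [3, 2])  # [[3, 2, 1, 4], [6, 5, 7, 8]]
```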
T - data type for output output
T - data type for ReverseSequence output and operands
input - The input to reverse.
seqLengths - 1-D with length input.dims(batch_dim) and max(seq_lengths) <= input.dims(seq_dim)
seqDim - The dimension which is partially reversed.
options - carries optional attribute values
public <T extends TType> Roll<T> roll(Operand<T> input, Operand<? extends TNumber> shift, Operand<? extends TNumber> axis)
The elements are shifted positively (towards larger indices) by the offset of shift along the dimension of axis. Negative shift values will shift elements in the opposite direction. Elements that roll past the last position will wrap around to the first and vice versa. Multiple shifts along multiple axes may be specified.
For example:
# 't' is [0, 1, 2, 3, 4] roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2] # shifting along multiple dimensions # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]] # shifting along the same axis multiple times # 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
T - data type for output output
T - data type for Roll output and operands
input - The input value
shift - Dimension must be 0-D or 1-D. shift[i] specifies the number of places by which elements are shifted positively (towards larger indices) along the dimension specified by axis[i]. Negative shifts will roll the elements in the opposite direction.
axis - Dimension must be 0-D or 1-D. axis[i] specifies the dimension in which the shift shift[i] should occur. If the same axis is referenced more than once, the total shift for that axis will be the sum of all the shifts that belong to that axis.
public <T extends TType> ScatterAdd<T> scatterAdd(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterAdd.Options... options)
Adds sparse updates to a variable reference.
This operation computes
# Scalar indices ref[indices, ...] += updates[...] # Vector indices (for each i) ref[indices[i], ...] += updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterAdd output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to add to ref.
options - carries optional attribute values
public <T extends TType> ScatterDiv<T> scatterDiv(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterDiv.Options... options)
Divides a variable reference by sparse updates.
This operation computes
# Scalar indices ref[indices, ...] /= updates[...] # Vector indices (for each i) ref[indices[i], ...] /= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions divide.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterDiv output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of values that ref is divided by.
options - carries optional attribute values
public <T extends TNumber> ScatterMax<T> scatterMax(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterMax.Options... options)
Reduces sparse updates into a variable reference using the max operation.
This operation computes
# Scalar indices ref[indices, ...] = max(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterMax output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to reduce into ref.
options - carries optional attribute values
public <T extends TNumber> ScatterMin<T> scatterMin(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterMin.Options... options)
Reduces sparse updates into a variable reference using the min operation.
This operation computes
# Scalar indices ref[indices, ...] = min(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterMin output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to reduce into ref.
options - carries optional attribute values
public <T extends TType> ScatterMul<T> scatterMul(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterMul.Options... options)
Multiplies sparse updates into a variable reference.
This operation computes
# Scalar indices ref[indices, ...] *= updates[...] # Vector indices (for each i) ref[indices[i], ...] *= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterMul output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to multiply to ref.
options - carries optional attribute values
public <U extends TType,T extends TNumber> ScatterNd<U> scatterNd(Operand<T> indices, Operand<U> updates, Operand<T> shape)
Scatters updates into a tensor of shape shape according to indices.
Update the input tensor by scattering sparse updates according to individual values at the specified indices. This op returns an output tensor with the shape you specify. This op is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor.
This operation is similar to tf.tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to calling tf.tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values).
If indices contains duplicates, the duplicate values are accumulated (summed).
WARNING: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates; numbers summed in different order may yield different results because of some numerical approximation issues.
indices is an integer tensor containing indices into a tensor of shape shape. The last dimension of indices can be at most the rank of shape:
indices.shape[-1] <= shape.rank
The last dimension of indices corresponds to indices of elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape.
updates is a tensor with shape:
indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of the scatter op is to insert individual elements in a tensor by index. Consider an example where you want to insert 4 scattered elements in a rank-1 tensor with 8 elements.
In Python, this scatter operation would look like this:
indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) shape = tf.constant([8]) scatter = tf.scatter_nd(indices, updates, shape) print(scatter)
The resulting tensor would look like this:
[0, 11, 0, 10, 9, 0, 0, 12]
You can also insert entire slices of a higher rank tensor all at once. For example, you can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.
In Python, this scatter operation would look like this:
indices = tf.constant([[0], [2]]) updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]) shape = tf.constant([4, 4, 4]) scatter = tf.scatter_nd(indices, updates, shape) print(scatter)
The resulting tensor would look like this:
[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
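The rank-1 example above can be reproduced with a short plain-Python sketch (illustrative only, not the TF API; the helper name is hypothetical):

```python
# Illustrative only: rank-1 scatter_nd into a zero-initialized output;
# duplicate indices accumulate (sum).
def scatter_nd_1d(indices, updates, shape):
    out = [0] * shape[0]
    for (idx,), u in zip(indices, updates):
        out[idx] += u
    return out

result = scatter_nd_1d([[4], [3], [1], [7]], [9, 10, 11, 12], [8])
# reproduces the documented output [0, 11, 0, 10, 9, 0, 0, 12]
```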
U - data type for output output
U - data type for ScatterNd output and operands
T - data type for ScatterNd output and operands
indices - Tensor of indices.
updates - Values to scatter into the output tensor.
shape - 1-D. The shape of the output tensor.
public <T extends TType> ScatterNdAdd<T> scatterNdAdd(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterNdAdd.Options... options)
Applies sparse addition to individual values or slices in a Variable.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref.
It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) add = tf.scatter_nd_add(ref, indices, updates) with tf.Session() as sess: print(sess.run(add))
The resulting update to ref would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
T - data type for output_ref output
T - data type for ScatterNdAdd output and operands
ref - A mutable Tensor. Should be from a Variable node.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
options - carries optional attribute values
public <T extends TType> ScatterNdNonAliasingAdd<T> scatterNdNonAliasingAdd(Operand<T> input, Operand<? extends TNumber> indices, Operand<T> updates)
Applies sparse addition to input using individual values or slices from updates according to indices. The updates are non-aliasing: input is only modified in-place if no other operations will use it. Otherwise, a copy of input is made. This operation has a gradient with respect to both input and updates.
input is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into input.
It must be of shape \([d_0, ..., d_{Q-2}, K]\) where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or (P-K)-dimensional slices (if K < P) along the Kth dimension of input.
updates is a Tensor of rank Q-1+P-K with shape:
$$[d_0, ..., d_{Q-2}, input.shape[K], ..., input.shape[P-1]].$$
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:
input = tf.constant([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) output = tf.scatter_nd_non_aliasing_add(input, indices, updates) with tf.Session() as sess: print(sess.run(output))
The resulting value output would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
T - data type for output output
T - data type for ScatterNdNonAliasingAdd output and operands
input - A Tensor.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into input.
updates - A Tensor. Must have the same type as ref. A tensor of updated values to add to input.
public <T extends TType> ScatterNdSub<T> scatterNdSub(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterNdSub.Options... options)
Applies sparse subtraction to individual values or slices within a given variable according to indices.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref.
It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) sub = tf.scatter_nd_sub(ref, indices, updates) with tf.Session() as sess: print(sess.run(sub))
The resulting update to ref would look like this:
[1, -9, 3, -6, -4, 6, 7, -4]
See tf.scatter_nd for more details about how to make updates to slices.
T - data type for output_ref output
T - data type for ScatterNdSub output and operands
ref - A mutable Tensor. Should be from a Variable node.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref.
options - carries optional attribute values
public <T extends TType> ScatterNdUpdate<T> scatterNdUpdate(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterNdUpdate.Options... options)
Applies sparse updates to individual values or slices within a given variable according to indices.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref.
It must be of shape \([d_0, ..., d_{Q-2}, K]\) where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
$$[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].$$
For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8]) indices = tf.constant([[4], [3], [1], [7]]) updates = tf.constant([9, 10, 11, 12]) update = tf.scatter_nd_update(ref, indices, updates) with tf.Session() as sess: print(sess.run(update))
The resulting update to ref would look like this:
[1, 11, 3, 10, 9, 6, 7, 12]
See tf.scatter_nd for more details about how to make updates to slices.
See also tf.scatter_update and tf.batch_scatter_update.
T - data type for output_ref output
T - data type for ScatterNdUpdate output and operands
ref - A mutable Tensor. Should be from a Variable node.
indices - A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates - A Tensor. Must have the same type as ref. A tensor of updated values to store in ref.
options - carries optional attribute values
public <T extends TType> ScatterSub<T> scatterSub(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterSub.Options... options)
Subtracts sparse updates from a variable reference.
This operation computes
# Scalar indices ref[indices, ...] -= updates[...] # Vector indices (for each i) ref[indices[i], ...] -= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
T - data type for output_ref output
T - data type for ScatterSub output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to subtract from ref.
options - carries optional attribute values
public <T extends TType> ScatterUpdate<T> scatterUpdate(Operand<T> ref, Operand<? extends TNumber> indices, Operand<T> updates, ScatterUpdate.Options... options)
Applies sparse updates to a variable reference.
This operation computes
# Scalar indices ref[indices, ...] = updates[...] # Vector indices (for each i) ref[indices[i], ...] = updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value.
If values in ref are to be updated more than once, because there are duplicate entries in indices, the order in which the updates happen for each value is undefined.
Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
See also tf.batch_scatter_update and tf.scatter_nd_update.
T - data type for output_ref output
T - data type for ScatterUpdate output and operands
ref - Should be from a Variable node.
indices - A tensor of indices into the first dimension of ref.
updates - A tensor of updated values to store in ref.
options - carries optional attribute values
public <T extends TType> Select<T> select(Operand<TBool> condition, Operand<T> t, Operand<T> e)
T - data type for output output
T - data type for SelectV2 output and operands
condition - The condition value
t - The t value
e - The e value
public <T extends TType> SetDiff1d<T,TInt32> setDiff1d(Operand<T> x, Operand<T> y)
Computes the difference between two lists of numbers or strings.
Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]
For example, given this input:
x = [1, 2, 3, 4, 5, 6] y = [1, 3, 5]
This operation would return:
out ==> [2, 4, 6] idx ==> [1, 3, 5]
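A plain-Python sketch of this semantics (illustrative only, not the TF API; the helper name is hypothetical):

```python
# Illustrative only: list difference preserving x's order and duplicates,
# also returning the positions (idx) of the kept elements in x.
def setdiff1d(x, y):
    remove = set(y)
    out, idx = [], []
    for i, v in enumerate(x):
        if v not in remove:
            out.append(v)
            idx.append(i)
    return out, idx

out, idx = setdiff1d([1, 2, 3, 4, 5, 6], [1, 3, 5])  # ([2, 4, 6], [1, 3, 5])
```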
T - data type for out output
U - data type for idx output
T - data type for ListDiff output and operands
x - 1-D. Values to keep.
y - 1-D. Values to remove.
public <T extends TType,U extends TNumber> SetDiff1d<T,U> setDiff1d(Operand<T> x, Operand<T> y, Class<U> outIdx)
Computes the difference between two lists of numbers or strings.
Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]
For example, given this input:
x = [1, 2, 3, 4, 5, 6] y = [1, 3, 5]
This operation would return:
out ==> [2, 4, 6] idx ==> [1, 3, 5]
T - data type for out output
U - data type for idx output
T - data type for ListDiff output and operands
U - data type for ListDiff output and operands
x - 1-D. Values to keep.
y - 1-D. Values to remove.
outIdx - The value of the outIdx attribute
public SetSize setSize(Operand<TInt64> setIndices, Operand<? extends TType> setValues, Operand<TInt64> setShape, SetSize.Options... options)
Number of unique elements along the last dimension of input set.
Input set is a SparseTensor represented by set_indices, set_values, and set_shape. The last dimension contains values in a set; duplicates are allowed but ignored.
If validate_indices is True, this op validates the order and range of set indices.
setIndices - 2D Tensor, indices of a SparseTensor.
setValues - 1D Tensor, values of a SparseTensor.
setShape - 1D Tensor, shape of a SparseTensor.
options - carries optional attribute values
public Shape<TInt32> shape(Operand<? extends TType> input)
Returns the shape of a tensor.
This operation returns a 1-D integer tensor representing the shape of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] shape(t) ==> [2, 2, 3]
U - data type for output output
input - The input value
public <U extends TNumber> Shape<U> shape(Operand<? extends TType> input, Class<U> outType)
Returns the shape of a tensor.
This operation returns a 1-D integer tensor representing the shape of input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] shape(t) ==> [2, 2, 3]
U - data type for output output
U - data type for Shape output and operands
input - The input value
outType - The value of the outType attribute
public ShapeN<TInt32> shapeN(Iterable<Operand<? extends TType>> input)
Returns the shape of tensors.
This operation returns N 1-D integer tensors representing the shape of input[i]s.
U - data type for output output
input - The input value
public <U extends TNumber> ShapeN<U> shapeN(Iterable<Operand<? extends TType>> input, Class<U> outType)
Returns the shape of tensors.
This operation returns N 1-D integer tensors representing the shape of input[i]s.
U - data type for output output
U - data type for ShapeN output and operands
input - The input value
outType - The value of the outType attribute
public Size<TInt32> size(Operand<? extends TType> input)
Returns the size of a tensor.
This operation returns an integer representing the number of elements in input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] size(t) ==> 12
U - data type for output output
input - The input value
public <U extends TNumber> Size<U> size(Operand<? extends TType> input, Class<U> outType)
Returns the size of a tensor.
This operation returns an integer representing the number of elements in input.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] size(t) ==> 12
U - data type for output output
U - data type for Size output and operands
input - The input value
outType - The value of the outType attribute
public Skipgram skipgram(String filename, Long batchSize, Skipgram.Options... options)
filename
- The corpus's text file name.batchSize
- The size of the produced batch.options
- carries optional attribute valuespublic <T extends TType,U extends TNumber> Slice<T> slice(Operand<T> input, Operand<U> begin, Operand<U> sizeOutput)
Requirements: 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)
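The size rule above, including the -1 convention, can be illustrated with a small plain-Java sketch (not part of this API; the class and method names are hypothetical):

```java
// Illustrative sketch (not the TensorFlow implementation): resolves the
// effective size of each slice dimension, where size[i] == -1 selects all
// remaining elements, i.e. size[i] = dims[i] - begin[i], and the
// requirement 0 <= begin[i] <= begin[i] + size[i] <= Di is enforced.
public class SliceSizes {
    public static int[] resolve(int[] dims, int[] begin, int[] size) {
        int[] out = new int[dims.length];
        for (int i = 0; i < dims.length; i++) {
            out[i] = (size[i] == -1) ? dims[i] - begin[i] : size[i];
            if (begin[i] < 0 || begin[i] + out[i] > dims[i]) {
                throw new IllegalArgumentException(
                    "requirement 0 <= begin[i] <= begin[i] + size[i] <= Di violated at i=" + i);
            }
        }
        return out;
    }
}
```

For instance, on a (3, 4) input, begin = [1, 0] with size = [-1, 2] resolves to a (2, 2) slice.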
T
- data type for output
outputT
- data type for Slice
output and operandsU
- data type for Slice
output and operandsinput
- The input valuebegin
- begin[i] specifies the offset into the 'i'th dimension of
'input' to slice from.sizeOutput
- size[i] specifies the number of elements of the 'i'th dimension
of 'input' to slice. If size[i] is -1, all remaining elements in dimension
i are included in the slice (i.e. this is equivalent to setting
size[i] = input.dim_size(i) - begin[i]).public <T extends TType> Snapshot<T> snapshot(Operand<T> input)
T
- data type for output
outputT
- data type for Snapshot
output and operandsinput
- The input valuepublic <T extends TType> SpaceToBatchNd<T> spaceToBatchNd(Operand<T> input, Operand<? extends TNumber> blockShape, Operand<? extends TNumber> paddings)
[1, ..., M]
of the input into a
grid of blocks of shape block_shape
, and interleaves these blocks with the
"batch" dimension (0) such that in the output, the spatial dimensions
[1, ..., M]
correspond to the position within the grid, and the batch
dimension combines both the position within a spatial block and the original
batch position. Prior to division into blocks, the spatial dimensions of the
input are optionally zero padded according to paddings
. See below for a
precise description.
This operation is equivalent to the following steps:
Zero-pad the start and end of dimensions [1, ..., M]
of the
input according to paddings
to produce padded
of shape padded_shape
.
Reshape padded
to reshaped_padded
of shape:
[batch] + [padded_shape[1] / block_shape[0], block_shape[0], ..., padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape
Permute dimensions of reshaped_padded
to produce
permuted_reshaped_padded
of shape:
block_shape + [batch] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape
Reshape permuted_reshaped_padded
to flatten block_shape
into the batch
dimension, producing an output tensor of shape:
[batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape
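The shape bookkeeping in the steps above can be sketched in plain Java (illustrative only, not the TensorFlow implementation; the class name is hypothetical):

```java
// Illustrative sketch: computes the output shape of SpaceToBatchND from the
// input shape, block_shape, and paddings, following the pad/reshape/permute
// steps described above:
//   out[0]      = batch * prod(block_shape)
//   out[i + 1]  = (input_shape[i + 1] + pad_start_i + pad_end_i) / block_shape[i]
//   trailing    = remaining_shape, unchanged
public class SpaceToBatchShape {
    public static int[] outputShape(int[] inputShape, int[] blockShape, int[][] paddings) {
        int m = blockShape.length;
        int[] out = new int[inputShape.length];
        int blockProd = 1;
        for (int b : blockShape) blockProd *= b;
        out[0] = inputShape[0] * blockProd;                 // batch * prod(block_shape)
        for (int i = 0; i < m; i++) {
            int padded = inputShape[i + 1] + paddings[i][0] + paddings[i][1];
            if (padded % blockShape[i] != 0)
                throw new IllegalArgumentException("block_shape[" + i + "] must divide the padded size");
            out[i + 1] = padded / blockShape[i];
        }
        for (int i = m + 1; i < inputShape.length; i++) out[i] = inputShape[i]; // remaining_shape
        return out;
    }
}
```

Running this against examples (1) and (4) below reproduces the shapes [4, 1, 1, 1] and [8, 1, 3, 1].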
Some examples:
(1) For the following input of shape [1, 2, 2, 1]
, block_shape = [2, 2]
, and
paddings = [[0, 0], [0, 0]]
:
x = [[[[1], [2]], [[3], [4]]]]
The output tensor has shape [4, 1, 1, 1]
and value:
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
(2) For the following input of shape [1, 2, 2, 3]
, block_shape = [2, 2]
, and
paddings = [[0, 0], [0, 0]]
:
x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]
The output tensor has shape [4, 1, 1, 3]
and value:
[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
(3) For the following input of shape [1, 4, 4, 1]
, block_shape = [2, 2]
, and
paddings = [[0, 0], [0, 0]]
:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape [4, 2, 2, 1]
and value:
x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]
(4) For the following input of shape [2, 2, 4, 1]
, block_shape = [2, 2]
, and
paddings = [[0, 0], [2, 0]]
:
x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]
The output tensor has shape [8, 1, 3, 1]
and value:
x = [[[[0], [1], [3]]], [[[0], [9], [11]]], [[[0], [2], [4]]], [[[0], [10], [12]]], [[[0], [5], [7]]], [[[0], [13], [15]]], [[[0], [6], [8]]], [[[0], [14], [16]]]]
Among others, this operation is useful for reducing atrous convolution into regular convolution.
T
- data type for output
outputT
- data type for SpaceToBatchND
output and operandsinput
- N-D with shape input_shape = [batch] + spatial_shape + remaining_shape
,
where spatial_shape has M
dimensions.blockShape
- 1-D with shape [M]
, all values must be >= 1.paddings
- 2-D with shape [M, 2]
, all values must be >= 0.
paddings[i] = [pad_start, pad_end]
specifies the padding for input dimension
i + 1
, which corresponds to spatial dimension i
. It is required that
block_shape[i]
divides input_shape[i + 1] + pad_start + pad_end
.public <T extends TType> Split<T> split(Operand<TInt32> axis, Operand<T> value, Long numSplit)
num_split
tensors along one dimension.T
- data type for output
outputT
- data type for Split
output and operandsaxis
- 0-D. The dimension along which to split. Must be in the range
[-rank(value), rank(value))
.value
- The tensor to split.numSplit
- The number of ways to split. Must evenly divide
value.shape[split_dim]
.public <T extends TType> SplitV<T> splitV(Operand<T> value, Operand<? extends TNumber> sizeSplits, Operand<TInt32> axis, Long numSplit)
num_split
tensors along one dimension.T
- data type for output
outputT
- data type for SplitV
output and operandsvalue
- The tensor to split.sizeSplits
- list containing the sizes of each output tensor along the split
dimension. Must sum to the dimension of value along split_dim.
Can contain one -1 indicating that dimension is to be inferred.axis
- 0-D. The dimension along which to split. Must be in the range
[-rank(value), rank(value))
.numSplit
- The value of the numSplit attributepublic <T extends TType> Squeeze<T> squeeze(Operand<T> input, Squeeze.Options... options)
input
, this operation returns a tensor of the same type with
all dimensions of size 1 removed. If you don't want to remove all size 1
dimensions, you can remove specific size 1 dimensions by specifying
axis
.
For example:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] shape(squeeze(t)) ==> [2, 3]
Or, to remove specific size 1 dimensions:
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1] shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
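The two examples above can be reproduced with a small plain-Java sketch of the shape rule (not part of this API; negative axes are not handled in this sketch):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the TensorFlow implementation): computes the shape
// produced by squeeze. With no axes given, every size-1 dimension is dropped;
// otherwise only the listed size-1 dimensions are dropped.
public class SqueezeShape {
    public static List<Integer> squeeze(int[] dims, int... axes) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < dims.length; i++) {
            boolean listed = axes.length == 0;            // no axes: consider every dim
            for (int a : axes) if (a == i) listed = true;
            if (dims[i] == 1 && listed) continue;         // drop this size-1 dimension
            out.add(dims[i]);
        }
        return out;
    }
}
```

squeeze([1, 2, 1, 3, 1, 1]) yields [2, 3], and squeeze([1, 2, 1, 3, 1, 1], 2, 4) yields [1, 2, 3, 1], matching the examples.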
T
- data type for output
outputT
- data type for Squeeze
output and operandsinput
- The input
to squeeze.options
- carries optional attribute valuespublic <T extends TType> Stack<T> stack(Iterable<Operand<T>> values, Stack.Options... options)
N
rank-R
tensors into one rank-(R+1)
tensor.
Packs the N
tensors in values
into a tensor with rank one higher than each
tensor in values
, by packing them along the axis
dimension.
Given a list of tensors of shape (A, B, C)
;
if axis == 0
then the output
tensor will have the shape (N, A, B, C)
.
if axis == 1
then the output
tensor will have the shape (A, N, B, C)
.
Etc.
For example:
# 'x' is [1, 4] # 'y' is [2, 5] # 'z' is [3, 6] pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim. pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
This is the opposite of unpack
.
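The axis rule above ((N, A, B, C) for axis == 0, (A, N, B, C) for axis == 1, and so on) amounts to inserting N at position axis, which can be sketched in plain Java (illustrative only; the class name is hypothetical):

```java
// Illustrative sketch (not the TensorFlow implementation): the result shape
// of stacking n tensors of identical shape along `axis` is the input shape
// with n inserted at position axis.
public class StackShape {
    public static int[] packed(int n, int[] shape, int axis) {
        int[] out = new int[shape.length + 1];
        for (int i = 0, j = 0; i < out.length; i++) {
            out[i] = (i == axis) ? n : shape[j++];
        }
        return out;
    }
}
```

For example, packing two (3, 4, 5) tensors with axis = 1 yields shape (3, 2, 4, 5).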
T
- data type for output
outputT
- data type for Pack
output and operandsvalues
- Must be of same shape and type.options
- carries optional attribute valuespublic Stage stage(Iterable<Operand<?>> values, Stage.Options... options)
values
- a list of tensors
dtypes A list of data types that inserted values should adhere to.options
- carries optional attribute valuespublic StageClear stageClear(List<Class<? extends TType>> dtypes, StageClear.Options... options)
dtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic StagePeek stagePeek(Operand<TInt32> index, List<Class<? extends TType>> dtypes, StagePeek.Options... options)
index
- The index valuedtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic StageSize stageSize(List<Class<? extends TType>> dtypes, StageSize.Options... options)
dtypes
- The value of the dtypes attributeoptions
- carries optional attribute valuespublic StatefulCase statefulCase(Operand<TInt32> branchIndex, Iterable<Operand<?>> input, List<Class<? extends TType>> Tout, List<ConcreteFunction> branches, Case.Options... options)
An n-way switch statement, implementing the following: switch (branch_index) { case 0: output = branches[0](input); break; case 1: output = branches[1](input); break; ... default: output = branches[nbranches-1](input); break; }
branchIndex
- The branch selector, an int32 Tensor.input
- A list of input tensors passed to the branch function.Tout
- A list of output types.branches
- A list of functions each of which takes 'inputs' and returns a list of tensors, whose types are the same as what every other branch returns.
options
- carries optional attribute valuespublic StatefulIf statefulIf(Operand<? extends TType> cond, Iterable<Operand<?>> input, List<Class<? extends TType>> Tout, ConcreteFunction thenBranch, ConcreteFunction elseBranch, If.Options... options)
cond
- A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True.
input
- A list of input tensors.Tout
- A list of output types.thenBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns.
elseBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns.
options
- carries optional attribute valuespublic StatefulPartitionedCall statefulPartitionedCall(Iterable<Operand<?>> args, List<Class<? extends TType>> Tout, ConcreteFunction f, PartitionedCall.Options... options)
f(inputs)
, where f
's body is placed and partitioned.args
- A list of input tensors.Tout
- A list of output types.f
- A function that takes 'args', a list of tensors, and returns 'output', another list of tensors. Input and output types are specified by 'Tin' and 'Tout'. The function body of f will be placed and partitioned across devices, setting this op apart from the regular Call op. This op is stateful.
options
- carries optional attribute valuespublic StatefulWhile statefulWhile(Iterable<Operand<?>> input, ConcreteFunction cond, ConcreteFunction body, While.Options... options)
input
- A list of input tensors whose types are T.cond
- A function that takes 'input' and returns a tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-empty means True and empty means False.
body
- A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
options
- carries optional attribute valuespublic StatelessIf statelessIf(Operand<? extends TType> cond, Iterable<Operand<?>> input, List<Class<? extends TType>> Tout, ConcreteFunction thenBranch, ConcreteFunction elseBranch, If.Options... options)
cond
- A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True. This should only be used when the if then/else body functions do not have stateful ops.
input
- A list of input tensors.Tout
- A list of output types.thenBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns.
elseBranch
- A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns.
options
- carries optional attribute valuespublic StatelessPartitionedCall statelessPartitionedCall(Iterable<Operand<?>> args, List<Class<? extends TType>> Tout, ConcreteFunction f, PartitionedCall.Options... options)
f(inputs)
, where f
's body is placed and partitioned.
Asynchronously executes a function, potentially across multiple devices but
within a single process. The kernel places and partitions a given function's
underlying graph, and executes each of the partitioned subgraphs as a function.args
- A list of input tensors.Tout
- A list of output types.f
- A function that takes 'args', a list of tensors, and returns 'output', another list of tensors. Input and output types are specified by 'Tin' and 'Tout'. The function body of f will be placed and partitioned across devices, setting this op apart from the regular Call op.
options
- carries optional attribute valuespublic StatelessWhile statelessWhile(Iterable<Operand<?>> input, ConcreteFunction cond, ConcreteFunction body, While.Options... options)
input
- A list of input tensors whose types are T.cond
- A function that takes 'input' and returns a tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-empty means True and empty means False. This should only be used when the while condition and body functions do not have stateful ops.
body
- A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
options
- carries optional attribute valuespublic <T extends TType> StopGradient<T> stopGradient(Operand<T> input)
When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding the inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.
This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. For example, the softmax function for a vector x can be written as
def softmax(x): numerator = tf.exp(x) denominator = tf.reduce_sum(numerator) return numerator / denominator
This however is susceptible to overflow if the values in x are large. An alternative more stable way is to subtract the maximum of x from each of the values.
def stable_softmax(x): z = x - tf.reduce_max(x) numerator = tf.exp(z) denominator = tf.reduce_sum(numerator) return numerator / denominator
However, when we backprop through the softmax to x, we don't want to backprop
through the tf.reduce_max(x)
(if the max values are not unique then the
gradient could flow to the wrong input) calculation and treat that as a
constant. Therefore, we should write this out as
def stable_softmax(x): z = x - tf.stop_gradient(tf.reduce_max(x)) numerator = tf.exp(z) denominator = tf.reduce_sum(numerator) return numerator / denominator
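The numerical effect of the max-subtraction trick can be checked with a small plain-Java sketch (illustrative only, no TensorFlow involved; the class name is hypothetical). Subtracting max(x) leaves the softmax value mathematically unchanged while keeping exp() in range:

```java
// Illustrative numeric sketch of the stability trick above: compute
// softmax(x) as exp(x - max(x)) / sum(exp(x - max(x))). A naive
// exp(x) / sum(exp(x)) overflows to NaN for large inputs such as 1000.
public class StableSoftmax {
    public static double[] softmax(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);      // the term stop_gradient wraps
        double[] num = new double[x.length];
        double den = 0.0;
        for (int i = 0; i < x.length; i++) { num[i] = Math.exp(x[i] - max); den += num[i]; }
        for (int i = 0; i < x.length; i++) num[i] /= den;
        return num;
    }
}
```

softmax([1000, 1000]) correctly returns [0.5, 0.5], whereas the naive form produces NaN. stopGradient only affects the backward pass; the forward value is identical either way.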
Some other examples include:
T
- data type for output
outputT
- data type for StopGradient
output and operandsinput
- The input valuepublic <T extends TType> StridedSlice<T> stridedSlice(Operand<T> input, org.tensorflow.ndarray.index.Index... indices)
The goal of this op is to produce a new tensor with a subset of the elements from the `n` dimensional `input` tensor. The subset is chosen using a sequence of `m` sparse range specifications encoded into the arguments of this function. Note, in some cases `m` could be equal to `n`, but this need not be the case. Each range specification entry can be one of the following:
- An ellipsis (...) using Indices#ellipsis()
. Ellipses are used to imply zero or more dimensions of
full-dimension selection. For example, stridedSlice(foo, Indices.ellipsis())
is the identity slice.
- A new axis using Indices#newAxis()
. This is used to insert a new shape=1 dimension.
For example, stridedSlice(foo, Indices.newAxis())
where foo
is shape (3, 4)
produces a (1, 3, 4)
tensor.
- A range begin:end:stride
using Indices#slice(Long, Long, long)
Index.slice()} or Indices#all()
. This is used to specify
how much to choose from a given dimension. stride
can be any integer but 0. begin
is an integer which
represents the index of the first value to select while end
represents the index of the last value to select
(exclusive). Begin and end can be null, in which case the index begins or ends at the beginning or end of the dimension,
respectively (reversed if stride is negative). When both are null, slice()
is the same as all()
.
The number of values selected in each dimension is end - begin
if stride > 0
and begin - end
if stride < 0
. begin
and end
can be negative where -1
is the last element, -2
is the second to last. For example, given a shape (3,)
tensor stridedSlice(foo, Indices.all())
, the
effective begin
and end
are 0
and 3
. Do not assume this is equivalent to
stridedSlice(foo, Indices.slice(0, -1))
which has an effective begin
and end
of 0
and
2
. Another example is stridedSlice(foo, Indices.slice(null, -3, -1))
which reverses the first dimension
of a tensor while dropping the last two (in the original order elements). For example foo = [1,2,3,4];
stridedSlice(foo, Indices.slice(null, -3, -1))
is [4,3]
.
- A single index using Indices#at(long)
. This is used to keep only elements that have a given index. For
example stridedSlice(foo, Indices.at(2))
on a shape (5,6)
tensor produces a shape (6,)
tensor.
The dimension can be kept with size one using Indices#at(long, boolean)
.
These semantics generally follow NumPy's indexing semantics, which can be found here: https://numpy.org/doc/stable/reference/arrays.indexing.html
Requirements: `0 != strides[i] for i in [0, m)` Only one ellipsis.
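The begin/end/stride resolution described above (null meaning the start or end of the dimension, negative indices counting from the end, and the end - begin vs. begin - end count rule) can be sketched in plain Java (illustrative only; not the TensorFlow or ndarray implementation):

```java
// Illustrative sketch: resolves a begin:end:stride spec against a dimension
// of size d and returns the number of selected elements. null begin/end mean
// the start/end of the dimension (reversed when stride < 0); negative
// indices are converted by adding d.
public class SliceSpec {
    public static int count(Integer begin, Integer end, int stride, int d) {
        if (stride == 0) throw new IllegalArgumentException("stride must be non-zero");
        int b = begin == null ? (stride > 0 ? 0 : d - 1) : (begin < 0 ? begin + d : begin);
        int e = end == null ? (stride > 0 ? d : -1) : (end < 0 ? end + d : end);
        int span = stride > 0 ? e - b : b - e;                       // end - begin vs. begin - end
        return Math.max(0, (span + Math.abs(stride) - 1) / Math.abs(stride)); // ceil division
    }
}
```

On a shape (3,) dimension this gives 3 selected elements for the all() case and 2 for slice(0, -1), matching the discussion above.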
T
- data type for output()
outputindices
- The indices to slice. See Indices
.Indices
public <T extends TType,U extends TNumber> StridedSlice<T> stridedSlice(Operand<T> input, Operand<U> begin, Operand<U> end, Operand<U> strides, StridedSlice.Options... options)
input
.
Note, most python users will want to use the Python Tensor.__getitem__
or Variable.__getitem__
rather than this op directly.
The goal of this op is to produce a new tensor with a subset of
the elements from the n
dimensional input
tensor. The subset is chosen using
a sequence of m
sparse range specifications encoded into the arguments
of this function. Note, in some cases
m
could be equal to n
, but this need not be the case. Each
range specification entry can be one of the following:
An ellipsis (...). Ellipses are used to imply zero or more
dimensions of full-dimension selection and are produced using
ellipsis_mask
. For example, foo[...]
is the identity slice.
A new axis. This is used to insert a new shape=1 dimension and is
produced using new_axis_mask
. For example, foo[:, ...]
where
foo
is shape (3, 4)
produces a (1, 3, 4)
tensor.
A range begin:end:stride
. This is used to specify how much to choose from
a given dimension. stride
can be any integer but 0. begin
is an integer
which represents the index of the first value to select while end
represents
the index of the last value to select. The number of values selected in each
dimension is end - begin
if stride > 0
and begin - end
if stride < 0
.
begin
and end
can be negative where -1
is the last element, -2
is
the second to last. begin_mask
controls whether to replace the explicitly
given begin
with an implicit effective value of 0
if stride > 0
and
-1
if stride < 0
. end_mask
is analogous but produces the number
required to create the largest open interval. For example, given a shape
(3,)
tensor foo[:]
, the effective begin
and end
are 0
and 3
. Do
not assume this is equivalent to foo[0:-1]
which has an effective begin
and end
of 0
and 2
. Another example is foo[:-3:-1]
which reverses the
first dimension of a tensor while dropping the last two (in the original
order elements). For example foo = [1,2,3,4]; foo[:-3:-1]
is [4,3]
.
A single index. This is used to keep only elements that have a given
index. For example foo[2, :]
on a shape (5,6)
tensor produces a
shape (6,)
tensor. This is encoded in begin
and end
and
shrink_axis_mask
.
Each conceptual range specification is encoded in the op's argument. This
encoding is best understood by considering a non-trivial example. In
particular,
foo[1, 2:4, None, ..., :-3:-1, :]
will be encoded as
begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0) end = [2, 4, x, x, -3, x] strides = [1, 1, x, x, -1, 1] begin_mask = 1<<4 | 1<<5 = 48 end_mask = 1<<5 = 32 ellipsis_mask = 1<<3 = 8 new_axis_mask = 1<<2 = 4 shrink_axis_mask = 1<<0 = 1
In this case if foo.shape
is (5, 5, 5, 5, 5, 5) the final shape of
the slice becomes (2, 1, 5, 5, 2, 5).
Let us walk step by step through each argument specification.
The first argument in the example slice is turned into begin = 1
and
end = begin + 1 = 2
. To disambiguate from the original spec 2:4
we
also set the appropriate bit in shrink_axis_mask
.
2:4
contributes 2, 4, 1 to begin, end, and stride. All masks have
zero bits contributed.
None is a synonym for tf.newaxis
. This means insert a dimension of size 1
in the final shape. Dummy values are contributed to begin,
end and stride, while the new_axis_mask bit is set.
...
grabs the full ranges from as many dimensions as needed to
fully specify a slice for every dimension of the input shape.
:-3:-1
shows the use of negative indices. A negative index i
associated
with a dimension that has shape s
is converted to a positive index
s + i
. So -1
becomes s-1
(i.e. the last element). This conversion
is done internally so begin, end and strides receive x, -3, and -1.
The appropriate begin_mask bit is set to indicate the start range is the
full range (ignoring the x).
:
indicates that the entire contents of the corresponding dimension
is selected. This is equivalent to ::
or 0::1
. begin, end, and strides
receive 0, 0, and 1, respectively. The appropriate bits in begin_mask
and
end_mask
are also set.
Requirements:
0 != strides[i] for i in [0, m)
ellipsis_mask must be a power of two (only one ellipsis)
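Each mask is a bitfield with one bit per range specification, so the worked example above can be reproduced with plain bit arithmetic (illustrative only; the class and constant names are hypothetical):

```java
// Illustrative sketch: the bit-mask encoding of foo[1, 2:4, None, ..., :-3:-1, :]
// from the worked example above. Bit k refers to the k'th range specification.
public class StridedSliceMasks {
    public static final int BEGIN_MASK       = (1 << 4) | (1 << 5); // specs 4 and 5 use an implicit begin
    public static final int END_MASK         = 1 << 5;              // spec 5 uses an implicit end
    public static final int ELLIPSIS_MASK    = 1 << 3;              // spec 3 is '...'
    public static final int NEW_AXIS_MASK    = 1 << 2;              // spec 2 is None
    public static final int SHRINK_AXIS_MASK = 1 << 0;              // spec 0 is a single index
}
```

Evaluating these constants gives 48, 32, 8, 4, and 1, matching the values in the example. Note ELLIPSIS_MASK has exactly one bit set, satisfying the power-of-two requirement.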
T
- data type for output
outputT
- data type for StridedSlice
output and operandsU
- data type for StridedSlice
output and operandsinput
- The input valuebegin
- begin[k]
specifies the offset into the k
th range specification.
The exact dimension this corresponds to will be determined by context.
Out-of-bounds values will be silently clamped. If the k
th bit of
begin_mask
is set, then begin[k]
is ignored and the full range of the
appropriate dimension is used instead. Negative values cause indexing
to start from the highest element e.g. If foo==[1,2,3]
then foo[-1]==3
.end
- end[i]
is like begin
with the exception that end_mask
is
used to determine full ranges.strides
- strides[i]
specifies the increment in the i
th specification
after extracting a given element. Negative indices will reverse
the original order. Out-of-range values are
clamped to [0,dim[i]) if slice[i]>0
or [-1,dim[i]-1] if slice[i] < 0
options
- carries optional attribute valuespublic <T extends TType> StridedSliceAssign<T> stridedSliceAssign(Operand<T> ref, Operand<T> value, org.tensorflow.ndarray.index.Index... indices)
The values of `value` are assigned to the positions in the variable `ref` that are selected by the slice parameters. The slice parameters `begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`.
NOTE this op currently does not support broadcasting and so `value`'s shape must be exactly the shape produced by the slice of `ref`.
T
- data type for outputRef()
outputref
- the tensor to assign to.value
- the value to assign.indices
- The indices to slice. See Indices
.stridedSlice(Operand, Index...)
public <T extends TType,U extends TNumber> StridedSliceAssign<T> stridedSliceAssign(Operand<T> ref, Operand<U> begin, Operand<U> end, Operand<U> strides, Operand<T> value, StridedSliceAssign.Options... options)
value
to the sliced l-value reference of ref
.
The values of value
are assigned to the positions in the variable
ref
that are selected by the slice parameters. The slice parameters
begin
, end
, strides
, etc. work exactly as in StridedSlice
.
NOTE this op currently does not support broadcasting and so value
's
shape must be exactly the shape produced by the slice of ref
.
T
- data type for output_ref
outputT
- data type for StridedSliceAssign
output and operandsU
- data type for StridedSliceAssign
output and operandsref
- The ref valuebegin
- The begin valueend
- The end valuestrides
- The strides valuevalue
- The value valueoptions
- carries optional attribute valuespublic <U extends TType,T extends TNumber> StridedSliceGrad<U> stridedSliceGrad(Operand<T> shape, Operand<T> begin, Operand<T> end, Operand<T> strides, Operand<U> dy, StridedSliceGrad.Options... options)
StridedSlice
.
Since StridedSlice
cuts out pieces of its input
which is size
shape
, its gradient will have the same shape (which is passed here
as shape
). The gradient will be zero in any element that the slice
does not select.
Arguments are the same as StridedSliceGrad with the exception that
dy
is the input gradient to be propagated and shape
is the
shape of StridedSlice
's input
.
U
- data type for output
outputU
- data type for StridedSliceGrad
output and operandsT
- data type for StridedSliceGrad
output and operandsshape
- The shape valuebegin
- The begin valueend
- The end valuestrides
- The strides valuedy
- The dy valueoptions
- carries optional attribute valuespublic <T extends TType> Sum<T> sum(Operand<T> input, Operand<? extends TNumber> axis, Sum.Options... options)
input
along the dimensions given in axis
. Unless
keep_dims
is true, the rank of the tensor is reduced by 1 for each entry in
axis
. If keep_dims
is true, the reduced dimensions are
retained with length 1.T
- data type for output
outputT
- data type for Sum
output and operandsinput
- The tensor to reduce.axis
- The dimensions to reduce. Must be in the range
[-rank(input), rank(input))
.options
- carries optional attribute valuespublic <T extends TType> SwitchCond<T> switchCond(Operand<T> data, Operand<TBool> pred)
data
to the output port determined by pred
.
If pred
is true, the data
input is forwarded to output_true
. Otherwise,
the data goes to output_false
.
See also RefSwitch
and Merge
.
T
- data type for output_false
outputT
- data type for Switch
output and operandsdata
- The tensor to be forwarded to the appropriate output.pred
- A scalar that specifies which output port will receive data.public <T extends TType> TemporaryVariable<T> temporaryVariable(org.tensorflow.ndarray.Shape shape, Class<T> dtype, TemporaryVariable.Options... options)
It is the caller's responsibility to ensure that 'ref' is eventually passed to a matching 'DestroyTemporaryVariable' op after all other uses have completed.
Outputs a ref to the tensor state so it may be read or modified.
E.g. var = state_ops.temporary_variable([1, 2], types.float) var_name = var.op.name var = state_ops.assign(var, [[4.0, 5.0]]) var = state_ops.assign_add(var, [[6.0, 7.0]]) final = state_ops._destroy_temporary_variable(var, var_name=var_name)
T
- data type for ref
outputT
- data type for TemporaryVariable
output and operandsshape
- The shape of the variable tensor.dtype
- The type of elements in the variable tensor.options
- carries optional attribute valuespublic <T extends TType> TensorArray tensorArray(Operand<TInt32> sizeOutput, Class<T> dtype, TensorArray.Options... options)
T
- data type for TensorArrayV3
output and operandssizeOutput
- The size of the array.dtype
- The type of the elements on the tensor_array.options
- carries optional attribute valuespublic TensorArrayClose tensorArrayClose(Operand<? extends TType> handle)
handle
- The handle to a TensorArray (output of TensorArray or TensorArrayGrad).public <T extends TType> TensorArrayConcat<T> tensorArrayConcat(Operand<? extends TType> handle, Operand<TFloat32> flowIn, Class<T> dtype, TensorArrayConcat.Options... options)
value
.
Takes T
elements of shapes
(n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...)
and concatenates them into a Tensor of shape:
(n0 + n1 + ... + n(T-1) x d0 x d1 x ...)
All elements must have the same shape (except for the first dimension).
T
- data type for value
outputT
- data type for TensorArrayConcatV3
output and operandshandle
- The handle to a TensorArray.flowIn
- A float scalar that enforces proper chaining of operations.dtype
- The type of the elem that is returned.options
- carries optional attribute valuespublic <T extends TType> TensorArrayGather<T> tensorArrayGather(Operand<? extends TType> handle, Operand<TInt32> indices, Operand<TFloat32> flowIn, Class<T> dtype, TensorArrayGather.Options... options)
value
.
All elements selected by indices
must have the same shape.T
- data type for value
outputT
- data type for TensorArrayGatherV3
output and operandshandle
- The handle to a TensorArray.indices
- The locations in the TensorArray from which to read tensor elements.flowIn
- A float scalar that enforces proper chaining of operations.dtype
- The type of the elem that is returned.options
- carries optional attribute valuespublic TensorArrayGrad tensorArrayGrad(Operand<? extends TType> handle, Operand<TFloat32> flowIn, String source)
Locks the size of the original TensorArray by disabling its dynamic size flag.
A note about the input flow_in:
The handle flow_in forces the execution of the gradient lookup to occur only after certain other operations have occurred. For example, when the forward TensorArray is dynamically sized, writes to this TensorArray may resize the object. The gradient TensorArray is statically sized based on the size of the forward TensorArray when this operation executes. Furthermore, the size of the forward TensorArray is frozen by this call. As a result, the flow is used to ensure that the call to generate the gradient TensorArray only happens after all writes are executed.
In the case of dynamically sized TensorArrays, gradient computation should only be performed on read operations that have themselves been chained via flow to occur only after all writes have executed. That way the final size of the forward TensorArray is known when this operation is called.
A note about the source attribute:
TensorArray gradient calls use an accumulator TensorArray object. If multiple gradients are calculated and run in the same session, the multiple gradient nodes may accidentally flow through the same accumulator TensorArray. This double counts and generally breaks the TensorArray gradient flow.
The solution is to identify which gradient call this particular
TensorArray gradient is being called in. This is performed by identifying
a unique string (e.g. "gradients", "gradients_1", ...) from the input
gradient Tensor's name. This string is used as a suffix when creating
the TensorArray gradient object here (the attribute source
).
The attribute source
is added as a suffix to the forward TensorArray's
name when performing the creation / lookup, so that each separate gradient
calculation gets its own TensorArray accumulator.
handle
- The handle to the forward TensorArray.flowIn
- A float scalar that enforces proper chaining of operations.source
- The gradient source string, used to decide which gradient TensorArray
to return.public TensorArrayGradWithShape tensorArrayGradWithShape(Operand<? extends TType> handle, Operand<TFloat32> flowIn, Operand<TInt32> shapeToPrepend, String source)
handle
- The handle to the forward TensorArray.flowIn
- A float scalar that enforces proper chaining of operations.shapeToPrepend
- An int32 vector representing a shape. Elements in the gradient accumulator will
have shape which is this shape_to_prepend value concatenated with shape of the
elements in the TensorArray corresponding to the input handle.source
- The gradient source string, used to decide which gradient TensorArray
to return.public <T extends TType> TensorArrayPack<T> tensorArrayPack(Operand<TString> handle, Operand<TFloat32> flowIn, Class<T> dtype, TensorArrayPack.Options... options)
T - data type for the value output and operands
handle - The handle value
flowIn - The flowIn value
dtype - The value of the dtype attribute
options - carries optional attribute values

public <T extends TType> TensorArrayRead<T> tensorArrayRead(Operand<? extends TType> handle, Operand<TInt32> index, Operand<TFloat32> flowIn, Class<T> dtype)
Reads an element from the TensorArray into the output value.

T - data type for the value output and operands
handle - The handle to a TensorArray.
index - The index value
flowIn - A float scalar that enforces proper chaining of operations.
dtype - The type of the elem that is returned.

public TensorArrayScatter tensorArrayScatter(Operand<? extends TType> handle, Operand<TInt32> indices, Operand<? extends TType> value, Operand<TFloat32> flowIn)
Scatters the data from the input value into specific TensorArray elements. indices must be a vector, and its length must match the first dimension of value.

handle - The handle to a TensorArray.
indices - The locations at which to write the tensor elements.
value - The concatenated tensor to write to the TensorArray.
flowIn - A float scalar that enforces proper chaining of operations.

public TensorArraySize tensorArraySize(Operand<? extends TType> handle, Operand<TFloat32> flowIn)

handle - The handle to a TensorArray (output of TensorArray or TensorArrayGrad).
flowIn - A float scalar that enforces proper chaining of operations.

public TensorArraySplit tensorArraySplit(Operand<? extends TType> handle, Operand<? extends TType> value, Operand<TInt64> lengths, Operand<TFloat32> flowIn)
Assuming that lengths takes on values (n0, n1, ..., n(T-1)) and that value has shape (n0 + n1 + ... + n(T-1)) x d0 x d1 x ..., this splits value into a TensorArray with T tensors. TensorArray index t will be the subtensor of value with starting position (n0 + n1 + ... + n(t-1), 0, 0, ...) and size nt x d0 x d1 x ...
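The splitting rule above can be sketched in plain Python. Here split_by_lengths is a hypothetical helper, not part of this API, and it assumes the value tensor is represented as a nested list whose leading dimension is split:

```python
def split_by_lengths(values, lengths):
    """Split the leading dimension of `values` into chunks of the given lengths."""
    out, start = [], 0
    for n in lengths:
        out.append(values[start:start + n])
        start += n
    return out

# Rows 0-1 become TensorArray index 0, row 2 becomes index 1.
split_by_lengths([[1, 2], [3, 4], [5, 6]], [2, 1])
# -> [[[1, 2], [3, 4]], [[5, 6]]]
```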
handle - The handle to a TensorArray.
value - The concatenated tensor to write to the TensorArray.
lengths - The vector of lengths, i.e. how to split the rows of value into the TensorArray.
flowIn - A float scalar that enforces proper chaining of operations.

public TensorArrayUnpack tensorArrayUnpack(Operand<TString> handle, Operand<? extends TType> value, Operand<TFloat32> flowIn)
handle - The handle value
value - The value value
flowIn - The flowIn value

public TensorArrayWrite tensorArrayWrite(Operand<? extends TType> handle, Operand<TInt32> index, Operand<? extends TType> value, Operand<TFloat32> flowIn)

handle - The handle to a TensorArray.
index - The position to write to inside the TensorArray.
value - The tensor to write to the TensorArray.
flowIn - A float scalar that enforces proper chaining of operations.

public <U extends TType> TensorListConcat<U> tensorListConcat(Operand<? extends TType> inputHandle, Operand<? extends TNumber> elementShape, Operand<TInt64> leadingDims, Class<U> elementDtype)
input_handle: The input list.
element_shape: The shape of the uninitialized elements in the list. If the first dimension is not -1, it is assumed that all list elements have the same leading dim.
leading_dims: The list of leading dims of uninitialized list elements. Used if the leading dim of input_handle.element_shape or the element_shape input arg is not already set.
tensor: The concatenated result.
lengths: Output tensor containing the sizes of the 0th dimension of the tensors in the list, used for computing the gradient.

U - data type for the tensor output and operands
inputHandle - The inputHandle value
elementShape - The elementShape value
leadingDims - The leadingDims value
elementDtype - The value of the elementDtype attribute

public <T extends TType> TensorListConcatLists tensorListConcatLists(Operand<? extends TType> inputA, Operand<? extends TType> inputB, Class<T> elementDtype)
T - data type for TensorListConcatLists output and operands
inputA - The inputA value
inputB - The inputB value
elementDtype - The value of the elementDtype attribute

public <T extends TNumber> TensorListElementShape<T> tensorListElementShape(Operand<? extends TType> inputHandle, Class<T> shapeType)

T - data type for the element_shape output and operands
inputHandle - The inputHandle value
shapeType - The value of the shapeType attribute

public TensorListFromTensor tensorListFromTensor(Operand<? extends TType> tensor, Operand<? extends TNumber> elementShape)
Creates a TensorList which, when stacked, has the value of tensor. Each tensor in the result list corresponds to one row of the input tensor.

tensor: The input tensor.
output_handle: The list.

tensor - The tensor value
elementShape - The elementShape value

public <T extends TType> TensorListGather<T> tensorListGather(Operand<? extends TType> inputHandle, Operand<TInt32> indices, Operand<TInt32> elementShape, Class<T> elementDtype)
Creates a Tensor by indexing into the TensorList. Each row in the produced Tensor corresponds to the element in the TensorList specified by the given index (see tf.gather).

input_handle: The input tensor list.
indices: The indices used to index into the list.
values: The tensor.

T - data type for the values output and operands
inputHandle - The inputHandle value
indices - The indices value
elementShape - The elementShape value
elementDtype - The value of the elementDtype attribute

public <T extends TType> TensorListGetItem<T> tensorListGetItem(Operand<? extends TType> inputHandle, Operand<TInt32> index, Operand<TInt32> elementShape, Class<T> elementDtype)
T - data type for the item output and operands
inputHandle - The inputHandle value
index - The index value
elementShape - The elementShape value
elementDtype - The value of the elementDtype attribute

public TensorListLength tensorListLength(Operand<? extends TType> inputHandle)

inputHandle - The inputHandle value

public <T extends TType> TensorListPopBack<T> tensorListPopBack(Operand<? extends TType> inputHandle, Operand<TInt32> elementShape, Class<T> elementDtype)
input_handle: the input list
tensor: the withdrawn last element of the list
element_dtype: the type of elements in the list
element_shape: the shape of the output tensor

T - data type for the tensor output and operands
inputHandle - The inputHandle value
elementShape - The elementShape value
elementDtype - The value of the elementDtype attribute

public TensorListPushBack tensorListPushBack(Operand<? extends TType> inputHandle, Operand<? extends TType> tensor)
Returns a list which has the passed-in Tensor as its last element and the other elements of the given list in input_handle.

tensor: The tensor to put on the list.
input_handle: The old list.
output_handle: A list with the elements of the old list followed by tensor.
element_dtype: the type of elements in the list.
element_shape: a shape compatible with that of elements in the list.

inputHandle - The inputHandle value
tensor - The tensor value

public TensorListPushBackBatch tensorListPushBackBatch(Operand<? extends TType> inputHandles, Operand<? extends TType> tensor)

inputHandles - The inputHandles value
tensor - The tensor value

public <U extends TType> TensorListReserve tensorListReserve(Operand<? extends TNumber> elementShape, Operand<TInt32> numElements, Class<U> elementDtype)
U - data type for TensorListReserve output and operands
elementShape - The elementShape value
numElements - The numElements value
elementDtype - The value of the elementDtype attribute

public TensorListResize tensorListResize(Operand<? extends TType> inputHandle, Operand<TInt32> sizeOutput)

inputHandle - The inputHandle value
sizeOutput - The sizeOutput value

public TensorListScatter tensorListScatter(Operand<? extends TType> tensor, Operand<TInt32> indices, Operand<? extends TNumber> elementShape, Operand<TInt32> numElements)
Creates a TensorList by indexing into a Tensor. Each member of the TensorList corresponds to one row of the input tensor, specified by the given index (see tf.gather).

tensor: The input tensor.
indices: The indices used to index into the list.
element_shape: The shape of the elements in the list (can be less specified than the shape of the tensor).
num_elements: The size of the output list. Must be large enough to accommodate the largest index in indices. If -1, the list is just large enough to include the largest index in indices.
output_handle: The TensorList.

tensor - The tensor value
indices - The indices value
elementShape - The elementShape value
numElements - The numElements value

public TensorListScatterIntoExistingList tensorListScatterIntoExistingList(Operand<? extends TType> inputHandle, Operand<? extends TType> tensor, Operand<TInt32> indices)

Scatters a tensor at the given indices into an existing list. Each member of the TensorList corresponds to one row of the input tensor, specified by the given index (see tf.gather).

input_handle: The list to scatter into.
tensor: The input tensor.
indices: The indices used to index into the list.
output_handle: The TensorList.

inputHandle - The inputHandle value
tensor - The tensor value
indices - The indices value

public TensorListSetItem tensorListSetItem(Operand<? extends TType> inputHandle, Operand<TInt32> index, Operand<? extends TType> item)
inputHandle - The inputHandle value
index - The index value
item - The item value

public TensorListSplit tensorListSplit(Operand<? extends TType> tensor, Operand<? extends TNumber> elementShape, Operand<TInt64> lengths)

tensor: The input tensor.
element_shape: A shape compatible with that of elements in the tensor.
lengths: Vector of sizes of the 0th dimension of tensors in the list.
output_handle: The list.

tensor - The tensor value
elementShape - The elementShape value
lengths - The lengths value

public <T extends TType> TensorListStack<T> tensorListStack(Operand<? extends TType> inputHandle, Operand<TInt32> elementShape, Class<T> elementDtype, TensorListStack.Options... options)
input_handle: the input list
tensor: the gathered result
num_elements: optional. If not -1, the number of elements in the list.

T - data type for the tensor output and operands
inputHandle - The inputHandle value
elementShape - The elementShape value
elementDtype - The value of the elementDtype attribute
options - carries optional attribute values

public <U extends TType> TensorMapErase tensorMapErase(Operand<? extends TType> inputHandle, Operand<? extends TType> key, Class<U> valueDtype)
U - data type for TensorMapErase output and operands
inputHandle - The inputHandle value
key - The key value
valueDtype - The value of the valueDtype attribute

public TensorMapHasKey tensorMapHasKey(Operand<? extends TType> inputHandle, Operand<? extends TType> key)

inputHandle - The inputHandle value
key - The key value

public TensorMapInsert tensorMapInsert(Operand<? extends TType> inputHandle, Operand<? extends TType> key, Operand<? extends TType> value)

inputHandle - The inputHandle value
key - The key value
value - The value value

public <U extends TType> TensorMapLookup<U> tensorMapLookup(Operand<? extends TType> inputHandle, Operand<? extends TType> key, Class<U> valueDtype)

U - data type for the value output and operands
inputHandle - The inputHandle value
key - The key value
valueDtype - The value of the valueDtype attribute

public TensorMapSize tensorMapSize(Operand<? extends TType> inputHandle)

inputHandle - The inputHandle value

public <T extends TType> TensorMapStackKeys<T> tensorMapStackKeys(Operand<? extends TType> inputHandle, Class<T> keyDtype)

T - data type for the keys output and operands
inputHandle - The inputHandle value
keyDtype - The value of the keyDtype attribute

public <T extends TType> TensorScatterNdAdd<T> tensorScatterNdAdd(Operand<T> tensor, Operand<? extends TNumber> indices, Operand<T> updates)
Adds sparse updates to an existing tensor according to indices. This operation creates a new tensor by adding sparse updates to the passed-in tensor. This operation is very similar to tf.compat.v1.scatter_nd_add, except that the updates are added onto an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape tensor.shape. The last dimension of indices can be at most the rank of tensor.shape:

indices.shape[-1] <= tensor.shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = tensor.shape.rank) or slices (if indices.shape[-1] < tensor.shape.rank) along dimension indices.shape[-1] of tensor.shape. updates is a tensor with shape

indices.shape[:-1] + tensor.shape[indices.shape[-1]:]
The simplest form of tensor_scatter_add is to add individual elements to a tensor by index. For example, say we want to add 4 elements in a rank-1 tensor with 8 elements.
In Python, this scatter add operation would look like this:

indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[1, 12, 1, 11, 10, 1, 1, 13]
We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter add operation would look like this:

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[[[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
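The rank-1 element-wise case above can be sketched in plain Python. Here tensor_scatter_nd_add is a hypothetical stand-in for the real op, covering only 1-D tensors with single-element indices:

```python
def tensor_scatter_nd_add(tensor, indices, updates):
    """Rank-1 sketch: each index is [i]; adds updates[k] at position i."""
    out = list(tensor)  # copy, since the op does not modify its input
    for (i,), u in zip(indices, updates):
        out[i] += u
    return out

# Mirrors the rank-1 example from the docs above.
tensor_scatter_nd_add([1] * 8, [[4], [3], [1], [7]], [9, 10, 11, 12])
# -> [1, 12, 1, 11, 10, 1, 1, 13]
```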
T - data type for the output and operands
tensor - Tensor to copy/update.
indices - Index tensor.
updates - Updates to scatter into output.

public <T extends TType> TensorScatterNdMax<T> tensorScatterNdMax(Operand<T> tensor, Operand<? extends TNumber> indices, Operand<T> updates)
T - data type for the output and operands
tensor - Tensor to update.
indices - Index tensor.
updates - Updates to scatter into output.

public <T extends TType> TensorScatterNdMin<T> tensorScatterNdMin(Operand<T> tensor, Operand<? extends TNumber> indices, Operand<T> updates)

T - data type for the output and operands
tensor - Tensor to update.
indices - Index tensor.
updates - Updates to scatter into output.

public <T extends TType> TensorScatterNdSub<T> tensorScatterNdSub(Operand<T> tensor, Operand<? extends TNumber> indices, Operand<T> updates)
Subtracts sparse updates from an existing tensor according to indices. This operation creates a new tensor by subtracting sparse updates from the passed-in tensor. This operation is very similar to tf.scatter_nd_sub, except that the updates are subtracted from an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of tensor_scatter_sub is to subtract individual elements from a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.
In Python, this scatter subtract operation would look like this:

indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[1, -10, 1, -9, -8, 1, 1, -11]
We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter subtract operation would look like this:

indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
The resulting tensor would look like this:
[[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
T - data type for the output and operands
tensor - Tensor to copy/update.
indices - Index tensor.
updates - Updates to scatter into output.

public <T extends TType> TensorScatterNdUpdate<T> tensorScatterNdUpdate(Operand<T> tensor, Operand<? extends TNumber> indices, Operand<T> updates)
Scatters updates into an existing tensor according to indices. This operation creates a new tensor by applying sparse updates to the passed-in tensor. This operation is very similar to tf.scatter_nd, except that the updates are scattered onto an existing tensor (as opposed to a zero-tensor). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

If indices contains duplicates, then we pick the last update for the index. If an out-of-bound index is found on CPU, an error is returned.

WARNING: There are some GPU specific semantics for this operation.
- If an out-of-bound index is found, the update is ignored.
- The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates.

indices is an integer tensor containing indices into a new tensor of shape shape.
- indices must have at least 2 axes: (num_updates, index_depth).
- The last axis of indices is how deep to index into tensor, so this index depth must be less than the rank of tensor: indices.shape[-1] <= tensor.ndim

If indices.shape[-1] = tensor.rank this Op indexes and updates scalar elements. If indices.shape[-1] < tensor.rank it indexes and updates slices of the input tensor.

Each update has a rank of tensor.rank - indices.shape[-1]. The overall shape of updates is:

indices.shape[:-1] + tensor.shape[indices.shape[-1]:]

For usage examples, see the python tf.tensor_scatter_nd_update function.
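The duplicate-index rule ("pick the last update") can be sketched in plain Python. Here tensor_scatter_nd_update is a hypothetical stand-in for the real op, covering only 1-D tensors with single-element indices:

```python
def tensor_scatter_nd_update(tensor, indices, updates):
    """Rank-1 sketch: each index is [i]; assigns updates[k] at position i."""
    out = list(tensor)  # copy, since the op does not modify its input
    for (i,), u in zip(indices, updates):
        out[i] = u  # later duplicates overwrite earlier ones
    return out

# Index 2 appears twice; the last update (7) wins.
tensor_scatter_nd_update([1] * 8, [[2], [2]], [5, 7])
# -> [1, 1, 7, 1, 1, 1, 1, 1]
```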
T - data type for the output and operands
tensor - Tensor to copy/update.
indices - Index tensor.
updates - Updates to scatter into output.

public <T extends TType,U extends TNumber> TensorStridedSliceUpdate<T> tensorStridedSliceUpdate(Operand<T> input, Operand<U> begin, Operand<U> end, Operand<U> strides, Operand<T> value, TensorStridedSliceUpdate.Options... options)
Assigns value to the sliced l-value reference of input. The values of value are assigned to the positions in the tensor input that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice.

NOTE: this op currently does not support broadcasting, so value's shape must be exactly the shape produced by the slice of input.

T - data type for the output and operands
U - data type for TensorStridedSliceUpdate operands
input - The input value
begin - The begin value
end - The end value
strides - The strides value
value - The value value
options - carries optional attribute values

public <T extends TType> Tile<T> tile(Operand<T> input, Operand<? extends TNumber> multiples)
Constructs a tensor by tiling a given tensor. This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].
a = tf.constant([[1,2,3],[4,5,6]], tf.int32)
b = tf.constant([1,2], tf.int32)
tf.tile(a, b)
# <tf.Tensor: shape=(2, 6), dtype=int32, numpy=
# array([[1, 2, 3, 1, 2, 3],
#        [4, 5, 6, 4, 5, 6]], dtype=int32)>

c = tf.constant([2,1], tf.int32)
tf.tile(a, c)
# <tf.Tensor: shape=(4, 3), dtype=int32, numpy=
# array([[1, 2, 3],
#        [4, 5, 6],
#        [1, 2, 3],
#        [4, 5, 6]], dtype=int32)>

d = tf.constant([2,2], tf.int32)
tf.tile(a, d)
# <tf.Tensor: shape=(4, 6), dtype=int32, numpy=
# array([[1, 2, 3, 1, 2, 3],
#        [4, 5, 6, 4, 5, 6],
#        [1, 2, 3, 1, 2, 3],
#        [4, 5, 6, 4, 5, 6]], dtype=int32)>
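The tiling rule can be sketched in plain Python for the rank-2 case. Here tile is a hypothetical helper mirroring the examples above, not this API:

```python
def tile(a, multiples):
    """Rank-2 sketch: repeat columns multiples[1] times, then rows multiples[0] times."""
    return [row * multiples[1] for row in a] * multiples[0]

tile([[1, 2, 3], [4, 5, 6]], [2, 1])
# -> [[1, 2, 3], [4, 5, 6], [1, 2, 3], [4, 5, 6]]
```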
T - data type for the output and operands
input - 1-D or higher.
multiples - 1-D. Length must be the same as the number of dimensions in input.

public Timestamp timestamp()
Provides the time since epoch, as a float64 of seconds since the Unix epoch.

Note: the timestamp is computed when the op is executed, not when it is added to the graph.
public TopKUnique topKUnique(Operand<TFloat32> input, Long k)
input - The input value
k - The value of the k attribute

public TopKWithUnique topKWithUnique(Operand<TFloat32> input, Long k)

input - The input value
k - The value of the k attribute

public <T extends TType> Unbatch<T> unbatch(Operand<T> batchedTensor, Operand<TInt64> batchIndex, Operand<TInt64> id, Long timeoutMicros, Unbatch.Options... options)
batched_tensor: The possibly transformed output of Batch. The size of the first dimension should remain unchanged by the transformations for the operation to work.
batch_index: The matching batch_index obtained from Batch.
id: The id scalar emitted by Batch.
unbatched_tensor: The Tensor corresponding to this execution.
timeout_micros: Maximum amount of time (in microseconds) to wait to receive the batched input tensor associated with a given invocation of the op.
container: Container to control resource sharing.
shared_name: Instances of Unbatch with the same container and shared_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name.
T - data type for the unbatched_tensor output and operands
batchedTensor - The batchedTensor value
batchIndex - The batchIndex value
id - The id value
timeoutMicros - The value of the timeoutMicros attribute
options - carries optional attribute values

public <T extends TType> UnbatchGrad<T> unbatchGrad(Operand<T> originalInput, Operand<TInt64> batchIndex, Operand<T> grad, Operand<TInt64> id, UnbatchGrad.Options... options)
original_input: The input to the Unbatch operation this is the gradient of.
batch_index: The batch_index given to the Unbatch operation this is the gradient of.
grad: The downstream gradient.
id: The id scalar emitted by Batch.
batched_grad: The return value, either an empty tensor or the batched gradient.
container: Container to control resource sharing.
shared_name: Instances of UnbatchGrad with the same container and shared_name are assumed to possibly belong to the same batch. If left empty, the op name will be used as the shared name.
T - data type for the batched_grad output and operands
originalInput - The originalInput value
batchIndex - The batchIndex value
grad - The grad value
id - The id value
options - carries optional attribute values

public <T extends TType> Unique<T,TInt32> unique(Operand<T> x, Operand<? extends TNumber> axis)
Finds unique elements along an axis of a tensor. This operation returns a tensor y containing the unique elements along the axis of a tensor. The returned unique elements are sorted in the same order as they occur along axis in x.

This operation also returns a tensor idx that is the same size as the number of elements in x along the axis dimension. It contains the index in the unique output y. In other words, for a 1-D tensor x with axis = None:

y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1]

For example:

# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
For a 2-D tensor x with axis = 0:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=0)
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]
For a 2-D tensor x with axis = 1:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=1)
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
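The y/idx relationship can be sketched in plain Python for the 1-D case. Here unique is a hypothetical helper, not this API; it keeps first occurrences in order and records, for each input element, its position in the unique output:

```python
def unique(xs):
    """1-D sketch: y keeps first occurrences in order; idx maps x into y."""
    y, idx = [], []
    for v in xs:
        if v not in y:
            y.append(v)
        idx.append(y.index(v))
    return y, idx

unique([1, 1, 2, 4, 4, 4, 7, 8, 8])
# -> ([1, 2, 4, 7, 8], [0, 0, 1, 2, 2, 2, 3, 4, 4])
```

Note that y[idx[i]] == x[i] holds for every i, matching the identity stated above.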
T - data type for the y output and operands
V - data type for the idx output
x - A Tensor.
axis - A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements of.

public <T extends TType,V extends TNumber> Unique<T,V> unique(Operand<T> x, Operand<? extends TNumber> axis, Class<V> outIdx)
Finds unique elements along an axis of a tensor. This operation returns a tensor y containing the unique elements along the axis of a tensor. The returned unique elements are sorted in the same order as they occur along axis in x.

This operation also returns a tensor idx that is the same size as the number of elements in x along the axis dimension. It contains the index in the unique output y. In other words, for a 1-D tensor x with axis = None:

y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1]

For example:

# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
For a 2-D tensor x with axis = 0:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=0)
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]
For a 2-D tensor x with axis = 1:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=1)
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
T - data type for the y output and operands
V - data type for the idx output and operands
x - A Tensor.
axis - A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements of.
outIdx - The value of the outIdx attribute

public <T extends TType> UniqueWithCounts<T,TInt32> uniqueWithCounts(Operand<T> x, Operand<? extends TNumber> axis)
Finds unique elements along an axis of a tensor. This operation returns a tensor y containing the unique elements along the axis of a tensor. The returned unique elements are sorted in the same order as they occur along axis in x.

This operation also returns a tensor idx and a tensor count that are the same size as the number of elements in x along the axis dimension. The idx contains the index in the unique output y, and the count contains the count in the unique output y. In other words, for a 1-D tensor x with axis = None:

y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1]

For example:

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx, count = UniqueWithCountsV2(x, axis = [0])
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
For a 2-D tensor x with axis = 0:

x = tf.constant([[1, 0, 0],
                 [1, 0, 0],
                 [2, 0, 0]])
y, idx, count = UniqueWithCountsV2(x, axis=[0])
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]
count ==> [2, 1]
For a 2-D tensor x with axis = 1:

x = tf.constant([[1, 0, 0],
                 [1, 0, 0],
                 [2, 0, 0]])
y, idx, count = UniqueWithCountsV2(x, axis=[1])
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
count ==> [1, 2]
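The count output can be sketched in plain Python for the 1-D case. Here unique_with_counts is a hypothetical helper, not this API; it extends the unique sketch with a per-unique-element tally:

```python
def unique_with_counts(xs):
    """1-D sketch: y keeps first occurrences; idx maps x into y; count tallies y."""
    y, idx, count = [], [], []
    for v in xs:
        if v not in y:
            y.append(v)
            count.append(0)
        i = y.index(v)
        idx.append(i)
        count[i] += 1
    return y, idx, count

unique_with_counts([1, 1, 2, 4, 4, 4, 7, 8, 8])
# -> ([1, 2, 4, 7, 8], [0, 0, 1, 2, 2, 2, 3, 4, 4], [2, 1, 3, 1, 2])
```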
T - data type for the y output and operands
V - data type for the idx output
x - A Tensor.
axis - A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements of.

public <T extends TType,V extends TNumber> UniqueWithCounts<T,V> uniqueWithCounts(Operand<T> x, Operand<? extends TNumber> axis, Class<V> outIdx)
Finds unique elements along an axis of a tensor. This operation returns a tensor y containing the unique elements along the axis of a tensor. The returned unique elements are sorted in the same order as they occur along axis in x.

This operation also returns a tensor idx and a tensor count that are the same size as the number of elements in x along the axis dimension. The idx contains the index in the unique output y, and the count contains the count in the unique output y. In other words, for a 1-D tensor x with axis = None:

y[idx[i]] = x[i] for i in [0, 1, ..., rank(x) - 1]

For example:

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
y, idx, count = UniqueWithCountsV2(x, axis = [0])
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
For a 2-D tensor x with axis = 0:

x = tf.constant([[1, 0, 0],
                 [1, 0, 0],
                 [2, 0, 0]])
y, idx, count = UniqueWithCountsV2(x, axis=[0])
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]
count ==> [2, 1]
For a 2-D tensor x with axis = 1:

x = tf.constant([[1, 0, 0],
                 [1, 0, 0],
                 [2, 0, 0]])
y, idx, count = UniqueWithCountsV2(x, axis=[1])
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
count ==> [1, 2]
T - data type for the y output and operands
V - data type for the idx output and operands
x - A Tensor.
axis - A Tensor of type int32 (default: None). The axis of the Tensor to find the unique elements of.
outIdx - The value of the outIdx attribute

public <T extends TNumber> UnravelIndex<T> unravelIndex(Operand<T> indices, Operand<T> dims)
Converts an array of flat indices into a tuple of coordinate arrays. For example:

y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])
# 'dims' represent a hypothetical (3, 3) tensor of indices:
# [[0, 1, *2*],
#  [3, 4, *5*],
#  [6, *7*, 8]]
# For each entry from 'indices', this operation returns
# its coordinates (marked with '*'), such as
# 2 ==> (0, 2)
# 5 ==> (1, 2)
# 7 ==> (2, 1)
y ==> [[0, 1, 2], [2, 2, 1]]

Equivalent to np.unravel_index.
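The row-major unraveling can be sketched in plain Python. Here unravel_index is a hypothetical helper mirroring the example above, not this API:

```python
def unravel_index(indices, dims):
    """For each flat index, compute its row-major coordinates in a dims-shaped array."""
    coords = []
    for flat in indices:
        c = []
        for d in reversed(dims):  # peel off the fastest-varying dimension first
            c.append(flat % d)
            flat //= d
        coords.append(list(reversed(c)))
    # Transpose to match the op's output layout: one row per dimension.
    return [list(col) for col in zip(*coords)]

unravel_index([2, 5, 7], [3, 3])
# -> [[0, 1, 2], [2, 2, 1]]
```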
T - data type for the output and operands
indices - A 0-D or 1-D int Tensor whose elements are indices into the flattened version of an array of dimensions dims.
dims - A 1-D int Tensor. The shape of the array to use for unraveling indices.

public <T extends TType> Unstack<T> unstack(Operand<T> value, Long num, Unstack.Options... options)
Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors. Unpacks num tensors from value by chipping it along the axis dimension. For example, given a tensor of shape (A, B, C, D):

If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split.)

If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.

This is the opposite of pack.
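The chipping rule can be sketched in plain Python for the rank-2 case. Here unstack is a hypothetical helper, not this API; axis 0 yields the rows and axis 1 yields the columns, and in both cases the unpacked dimension disappears:

```python
def unstack(value, axis=0):
    """Rank-2 sketch: chip a nested list along the given axis."""
    if axis == 0:
        return [list(row) for row in value]
    return [list(col) for col in zip(*value)]  # axis == 1

unstack([[1, 2, 3], [4, 5, 6]], axis=1)
# -> [[1, 4], [2, 5], [3, 6]]
```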
T - data type for the output and operands
value - 1-D or higher, with axis dimension size equal to num.
num - The value of the num attribute
options - carries optional attribute values

public Unstage unstage(List<Class<? extends TType>> dtypes, Unstage.Options... options)

dtypes - The value of the dtypes attribute
options - carries optional attribute values

public <T extends TType> VarHandleOp varHandleOp(Class<T> dtype, org.tensorflow.ndarray.Shape shape, VarHandleOp.Options... options)
T - data type for VarHandleOp output and operands
dtype - the type of this variable. Must agree with the dtypes of all ops using this variable.
shape - The (possibly partially specified) shape of this variable.
options - carries optional attribute values

public VarIsInitializedOp varIsInitializedOp(Operand<? extends TType> resource)

resource - the input resource handle.

public <T extends TType> Variable<T> variable(Operand<T> init, Variable.Options... options)

Only supported on Graph sessions, as the Assign op does not work in an EagerSession.

init - The op to use to initialise this variable.
options - carries optional attribute values
T - data type for the ref output and operands
shape - The shape of the variable tensor.
dtype - The type of elements in the variable tensor.
options - carries optional attribute values

public VariableShape<TInt32> variableShape(Operand<? extends TType> input)
Returns the shape of the variable pointed to by resource. This operation returns a 1-D integer tensor representing the shape of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]

input - The input value

public <T extends TNumber> VariableShape<T> variableShape(Operand<? extends TType> input, Class<T> outType)
resource
.
This operation returns a 1-D integer tensor representing the shape of input
.
For example:
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]] shape(t) ==> [2, 2, 3]
T
- data type for output
outputT
- data type for VariableShape
output and operandsinput
- The input valueoutType
- The value of the outType attributepublic Where where(Operand<? extends TType> condition)
This operation returns the coordinates of true elements in condition. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in condition. Indices are output in row-major order.

For example:

# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0], [1, 0]]

# `condition` tensor is [[[True, False]
#                         [True, False]]
#                        [[False, True]
#                         [False, True]]
#                        [[False, False]
#                         [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1], [2, 1, 1]]

# `condition` tensor is [[[1.5, 0.0]
#                         [-0.5, 0.0]]
#                        [[0.0, 0.25]
#                         [0.0, 0.75]]
#                        [[0.0, 0.0]
#                         [0.0, 0.01]]]
# 'input' has 5 nonzero values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1], [2, 1, 1]]

# `condition` tensor is [[[1.5 + 0.0j, 0.0 + 0.0j]
#                         [0.0 + 0.5j, 0.0 + 0.0j]]
#                        [[0.0 + 0.0j, 0.25 + 1.5j]
#                         [0.0 + 0.0j, 0.75 + 0.0j]]
#                        [[0.0 + 0.0j, 0.0 + 0.0j]
#                         [0.0 + 0.0j, 0.01 + 0.0j]]]
# 'input' has 5 nonzero magnitude values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 1], [2, 1, 1]]

Parameters:
condition - The condition value

public While whileOp(Iterable<Operand<?>> input, ConcreteFunction cond, ConcreteFunction body, While.Options... options)
Selects between StatefulWhile and StatelessWhile based on the statefulness of the function arguments.

Parameters:
input - A list of input tensors whose types are T.
cond - A function that takes 'input' and returns a tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-emptiness means True and emptiness means False.
body - A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
options - carries optional attribute values

public <T extends TType> Zeros<T> zeros(Operand<? extends TNumber> dims, Class<T> type)
Parameters:
dims - a 1-D operand that represents the shape of the output tensor
type - the output tensor datatype
Throws:
IllegalArgumentException - if the tensor type or shape cannot be initialized with zeros.

public <T extends TType> ZerosLike<T> zerosLike(Operand<T> x)

Type Parameters:
T - data type for y output
T - data type for ZerosLike output and operands
Parameters:
x - a tensor of type T.

public Ops withSubScope(String childScopeName)
Returns an API that builds operations with the provided name prefix.

See also: Scope.withSubScope(String)
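A minimal sketch of the scoping behavior described above, mirroring the class-level example; it assumes the TensorFlow Java core artifact (org.tensorflow) is on the classpath:

```java
import org.tensorflow.Graph;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Constant;
import org.tensorflow.types.TInt32;

public class ScopeExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      // Ops built through this instance are prefixed with "layer1".
      Ops layer1 = tf.withSubScope("layer1");
      // The resulting op is named "layer1/weights".
      Constant<TInt32> w = layer1.withName("weights").constant(3);
      // Nested sub-scopes extend the hierarchy: "layer1/init/bias".
      layer1.withSubScope("init").withName("bias").constant(0);
    }
  }
}
```

Note that withSubScope returns a new Ops instance; the original tf is unchanged and continues to build ops at the root scope.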
public Ops withInitScope()
Returns an API that builds init operations; liftToInitScope(Operand) will be called for all created operations.

Init operations will be initialized at session creation, will have their inputs (and control inputs) made init ops as well, and are ignored when used as control dependencies. Additionally, this scope ignores any control dependencies.

If an input cannot be made an init op (i.e. a Placeholder), an IllegalStateException will be thrown on op creation.

See also: liftToInitScope(Operand)
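A hedged sketch of how an init scope might be used, assuming the TensorFlow Java core artifact is on the classpath; the behavior comments restate the documentation above rather than anything verified here:

```java
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.op.Ops;

public class InitScopeExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      // All ops built through `init` are registered as init ops.
      Ops init = tf.withInitScope();
      init.constant(0);
      // Per the documentation above, init ops are initialized
      // at session creation.
      try (Session s = new Session(g)) {
        // the graph's init ops have been run by this point
      }
    }
  }
}
```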
public <T extends Operand> T liftToInitScope(T op)
Makes op an init operation, doing the same for all of its inputs (and control inputs).

Init operations will be initialized at session creation, will have their inputs (and control inputs) made init ops as well, and are ignored when used as control dependencies. Additionally, this scope ignores any control dependencies.

If an input cannot be made an init op (i.e. a Placeholder), an IllegalStateException will be thrown on op creation.

Throws:
IllegalStateException - if the op or one of its inputs can't be made an init op.

See also: ExecutionEnvironment.registerInitOp(Operation)
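A minimal sketch of lifting an existing op into the init scope, assuming the TensorFlow Java core artifact is on the classpath:

```java
import org.tensorflow.Graph;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Constant;
import org.tensorflow.types.TInt32;

public class LiftExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Constant<TInt32> c = tf.constant(42);
      // Registers c (and, transitively, its inputs) as init
      // operations; the same operand is returned, typed as T.
      Constant<TInt32> lifted = tf.liftToInitScope(c);
    }
  }
}
```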
public Ops withName(String opName)

See also: Scope.withName(String)

public Ops withDevice(DeviceSpec deviceSpec)

See also: Scope.withDevice(DeviceSpec)

public Ops withControlDependencies(Iterable<Op> controls)

See also: Scope.withControlDependencies(Iterable<Op>)

public Ops withControlDependencies(Op... controls)

See also: Scope.withControlDependencies(Iterable<Op>)

public Ops withControlDependencyOps(Iterable<Operation> controls)

See also: Scope.withControlDependencyOps(Iterable<Operation>)

public Ops withControlDependencyOps(Operation... controls)

See also: Scope.withControlDependencyOps(Iterable<Operation>)
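A minimal sketch of control dependencies, assuming the TensorFlow Java core artifact is on the classpath: ops built through the returned instance will only execute after the listed ops have executed.

```java
import org.tensorflow.Graph;
import org.tensorflow.op.Op;
import org.tensorflow.op.Ops;

public class ControlDepsExample {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);
      Op first = tf.constant(1);
      // Any op built through `gated` runs only after `first` has run.
      Ops gated = tf.withControlDependencies(first);
      gated.constant(2);
    }
  }
}
```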
public static Ops create(ExecutionEnvironment env)

Creates an API for building operations in the provided execution environment.

public static Ops create()

Creates an API for building operations in the default eager execution environment. Invoking this method is equivalent to Ops.create(EagerSession.getDefault()).
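Tying a few of the ops above together, a minimal eager-mode sketch (assuming the TensorFlow Java core artifact is on the classpath):

```java
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Zeros;
import org.tensorflow.types.TFloat32;

public class CreateExample {
  public static void main(String[] args) {
    // Uses the default eager session, equivalent to
    // Ops.create(EagerSession.getDefault()).
    Ops tf = Ops.create();
    // A 2x3 float tensor of zeros, built from a 1-D dims operand.
    Zeros<TFloat32> z = tf.zeros(tf.constant(new int[] {2, 3}), TFloat32.class);
  }
}
```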
Copyright © 2015–2022. All rights reserved.