Package org.tensorflow.framework
Interface CallableOptionsOrBuilder
- All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
- All Known Implementing Classes:
CallableOptions, CallableOptions.Builder
public interface CallableOptionsOrBuilder
extends com.google.protobuf.MessageOrBuilder
-
Method Summary
Modifier and Type / Method / Description
boolean containsFeedDevices(String key) - The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
boolean containsFetchDevices(String key) - map<string, string> fetch_devices = 7;
String getFeed(int index) - Tensors to be fed in the callable.
com.google.protobuf.ByteString getFeedBytes(int index) - Tensors to be fed in the callable.
int getFeedCount() - Tensors to be fed in the callable.
Map<String, String> getFeedDevices() - Deprecated. Use getFeedDevicesMap() instead.
int getFeedDevicesCount() - The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
Map<String, String> getFeedDevicesMap() - The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
String getFeedDevicesOrDefault(String key, String defaultValue) - The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
String getFeedDevicesOrThrow(String key) - The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default.
List<String> getFeedList() - Tensors to be fed in the callable.
String getFetch(int index) - Fetches.
com.google.protobuf.ByteString getFetchBytes(int index) - Fetches.
int getFetchCount() - Fetches.
Map<String, String> getFetchDevices() - Deprecated. Use getFetchDevicesMap() instead.
int getFetchDevicesCount() - map<string, string> fetch_devices = 7;
Map<String, String> getFetchDevicesMap() - map<string, string> fetch_devices = 7;
String getFetchDevicesOrDefault(String key, String defaultValue) - map<string, string> fetch_devices = 7;
String getFetchDevicesOrThrow(String key) - map<string, string> fetch_devices = 7;
List<String> getFetchList() - Fetches.
boolean getFetchSkipSync() - By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced.
RunOptions getRunOptions() - Options that will be applied to each run.
RunOptionsOrBuilder getRunOptionsOrBuilder() - Options that will be applied to each run.
String getTarget(int index) - Target Nodes.
com.google.protobuf.ByteString getTargetBytes(int index) - Target Nodes.
int getTargetCount() - Target Nodes.
List<String> getTargetList() - Target Nodes.
TensorConnection getTensorConnection(int index) - Tensors to be connected in the callable.
int getTensorConnectionCount() - Tensors to be connected in the callable.
List<TensorConnection> getTensorConnectionList() - Tensors to be connected in the callable.
TensorConnectionOrBuilder getTensorConnectionOrBuilder(int index) - Tensors to be connected in the callable.
List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList() - Tensors to be connected in the callable.
boolean hasRunOptions() - Options that will be applied to each run.

Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder:
isInitialized

Methods inherited from interface com.google.protobuf.MessageOrBuilder:
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
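Instances implementing this interface are typically obtained by building a CallableOptions message (CallableOptions.Builder also implements the interface). A minimal sketch, assuming the standard protobuf-generated builder methods addFeed/addFetch/addTarget; the node name "init_op" is purely illustrative:

    import org.tensorflow.framework.CallableOptions;
    import org.tensorflow.framework.CallableOptionsOrBuilder;

    // Build a CallableOptions message and read it back through this interface.
    CallableOptions options = CallableOptions.newBuilder()
        .addFeed("a:0")        // repeated string feed = 1
        .addFetch("x:0")       // repeated string fetch = 2
        .addTarget("init_op")  // repeated string target = 3 (hypothetical node name)
        .build();

    CallableOptionsOrBuilder view = options;  // the built message implements the interface
    System.out.println(view.getFeedCount() + " feed(s), " + view.getFetchCount() + " fetch(es)");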
-
Method Details
-
getFeedList
List<String> getFeedList()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
Returns: A list containing the feed.
-
getFeedCount
int getFeedCount()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
Returns: The count of feed.
-
getFeed
String getFeed(int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
Parameters: index - The index of the element to return.
Returns: The feed at the given index.
-
getFeedBytes
com.google.protobuf.ByteString getFeedBytes(int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
Parameters: index - The index of the value to return.
Returns: The bytes of the feed at the given index.
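Taken together, getFeedCount(), getFeed(int), and getFeedBytes(int) expose the repeated feed field in the usual protobuf style. A small sketch, assuming an options instance built as in the example after the method summary:

    // Iterate the repeated 'feed' field by index.
    for (int i = 0; i < options.getFeedCount(); i++) {
      String feedName = options.getFeed(i);                          // e.g. "a:0"
      com.google.protobuf.ByteString raw = options.getFeedBytes(i);  // same value as UTF-8 bytes
      System.out.println(feedName + " (" + raw.size() + " bytes)");
    }
    // getFeedList() returns the same values as an immutable List<String>.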
-
getFetchList
List<String> getFetchList()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
Returns: A list containing the fetch.
-
getFetchCount
int getFetchCount()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
Returns: The count of fetch.
-
getFetch
String getFetch(int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
Parameters: index - The index of the element to return.
Returns: The fetch at the given index.
-
getFetchBytes
com.google.protobuf.ByteString getFetchBytes(int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
Parameters: index - The index of the value to return.
Returns: The bytes of the fetch at the given index.
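The fetch accessors mirror the feed accessors. For instance, collecting every fetch name at once (again assuming an options instance built as in the earlier sketch):

    // getFetchList() returns all fetch names in declaration order; the order of
    // specified fetches does not change the execution order.
    java.util.List<String> fetches = options.getFetchList();
    for (String fetchName : fetches) {
      System.out.println("will fetch: " + fetchName);
    }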
-
getTargetList
List<String> getTargetList()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
Returns: A list containing the target.
-
getTargetCount
int getTargetCount()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
Returns: The count of target.
-
getTarget
String getTarget(int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
Parameters: index - The index of the element to return.
Returns: The target at the given index.
-
getTargetBytes
com.google.protobuf.ByteString getTargetBytes(int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
Parameters: index - The index of the value to return.
Returns: The bytes of the target at the given index.
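Target nodes are run for their side effects only; their outputs are not returned. A brief sketch using the ByteString accessor (options assumed from the earlier sketch):

    // Inspect the repeated 'target' field; targets are executed but not fetched.
    for (int i = 0; i < options.getTargetCount(); i++) {
      com.google.protobuf.ByteString targetBytes = options.getTargetBytes(i);
      System.out.println("target node: " + targetBytes.toStringUtf8());
    }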
-
hasRunOptions
boolean hasRunOptions()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
Returns: Whether the runOptions field is set.
-
getRunOptions
RunOptions getRunOptions()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
Returns: The runOptions.
-
getRunOptionsOrBuilder
RunOptionsOrBuilder getRunOptionsOrBuilder()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
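Because run_options is a singular message field, hasRunOptions() tells you whether the field was explicitly set; getRunOptions() returns the default RunOptions instance when it is not. A sketch (view as in the example after the method summary; getTraceLevel() is shown only as an illustrative RunOptions accessor):

    if (view.hasRunOptions()) {
      RunOptions runOptions = view.getRunOptions();
      System.out.println("trace level: " + runOptions.getTraceLevel());
    } else {
      // Field unset: getRunOptions() would return RunOptions.getDefaultInstance().
      System.out.println("no per-run options configured");
    }
-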
getTensorConnectionList
List<TensorConnection> getTensorConnectionList()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
-
getTensorConnection
TensorConnection getTensorConnection(int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
-
getTensorConnectionCount
int getTensorConnectionCount()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
-
getTensorConnectionOrBuilderList
List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
-
getTensorConnectionOrBuilder
TensorConnectionOrBuilder getTensorConnectionOrBuilder(int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
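A TensorConnection rewires the graph inside the callable: reads of one tensor are redirected to another. A sketch that iterates the repeated field (the getFromTensor/getToTensor accessors follow the TensorConnection proto's from_tensor/to_tensor fields and are assumed here; view as in the earlier sketch):

    for (int i = 0; i < view.getTensorConnectionCount(); i++) {
      TensorConnection tc = view.getTensorConnection(i);
      // from_tensor / to_tensor are the two endpoints of the injected edge.
      System.out.println(tc.getFromTensor() + " -> " + tc.getToTensor());
    }
-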
getFeedDevicesCount
int getFeedDevicesCount()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor.
For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
-
containsFeedDevices
boolean containsFeedDevices(String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. See getFeedDevicesCount() above for the full description of the feed_devices map.
map<string, string> feed_devices = 6;
-
getFeedDevices
Map<String, String> getFeedDevices()
Deprecated. Use getFeedDevicesMap() instead.
-
getFeedDevicesMap
Map<String, String> getFeedDevicesMap()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. See getFeedDevicesCount() above for the full description of the feed_devices map.
map<string, string> feed_devices = 6;
-
getFeedDevicesOrDefault
String getFeedDevicesOrDefault(String key, String defaultValue)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. See getFeedDevicesCount() above for the full description of the feed_devices map.
map<string, string> feed_devices = 6;
-
getFeedDevicesOrThrow
String getFeedDevicesOrThrow(String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. See getFeedDevicesCount() above for the full description of the feed_devices map.
map<string, string> feed_devices = 6;
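The example from the field comment maps onto the generated Java builder roughly as follows. This is a sketch, assuming the standard protobuf putFeedDevices/putFetchDevices map setters; the device string is the one used in the comment above:

    CallableOptions gpuOptions = CallableOptions.newBuilder()
        .addFeed("a:0")
        .addFeed("b:0")
        .addFetch("x:0")
        .addFetch("y:0")
        // "a:0" will be fed from GPU memory; "b:0" stays on the host.
        .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        // "y:0" will be returned in GPU memory; "x:0" stays on the host.
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        .build();

    // "b:0" is absent from the map, so the supplied default is returned.
    String feedDevice = gpuOptions.getFeedDevicesOrDefault("b:0", "host (default)");
-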
getFetchDevicesCount
int getFetchDevicesCount()
map<string, string> fetch_devices = 7;
-
containsFetchDevices
boolean containsFetchDevices(String key)
map<string, string> fetch_devices = 7;
-
getFetchDevices
Map<String, String> getFetchDevices()
Deprecated. Use getFetchDevicesMap() instead.
-
getFetchDevicesMap
Map<String, String> getFetchDevicesMap()
map<string, string> fetch_devices = 7;
-
getFetchDevicesOrDefault
String getFetchDevicesOrDefault(String key, String defaultValue)
map<string, string> fetch_devices = 7;
-
getFetchDevicesOrThrow
String getFetchDevicesOrThrow(String key)
map<string, string> fetch_devices = 7;
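Reading the fetch_devices map follows the same pattern as feed_devices. A brief sketch (gpuOptions as built in the feed_devices example above):

    if (gpuOptions.containsFetchDevices("y:0")) {
      System.out.println("y:0 is returned on " + gpuOptions.getFetchDevicesOrThrow("y:0"));
    }
    // getFetchDevicesOrDefault() avoids the exception when the key may be absent.
    String dev = gpuOptions.getFetchDevicesOrDefault("x:0", "host (default)");
-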
getFetchSkipSync
boolean getFetchSkipSync()
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit. If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking `Device::Sync()` on the underlying device(s), or by feeding the tensors back to the same Session using `feed_devices` with the same corresponding device name.
bool fetch_skip_sync = 8;
Returns: The fetchSkipSync.
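Setting this flag trades safety for latency: the caller then owns device synchronization before reading fetched GPU tensors. A sketch, assuming the standard protobuf-generated setter setFetchSkipSync:

    CallableOptions asyncFetch = CallableOptions.newBuilder()
        .addFetch("y:0")
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        // Skip the GPU stream sync on fetch; the caller must synchronize the
        // device before reading the fetched tensor's contents.
        .setFetchSkipSync(true)
        .build();

    boolean skipSync = asyncFetch.getFetchSkipSync();  // true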
-