Package org.tensorflow.framework
Interface ConfigProtoOrBuilder
- All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
- All Known Implementing Classes:
ConfigProto, ConfigProto.Builder
public interface ConfigProtoOrBuilder
extends com.google.protobuf.MessageOrBuilder
-
Method Summary
boolean containsDeviceCount(String key)
    Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
boolean getAllowSoftPlacement()
    Whether soft placement is allowed.
ClusterDef getClusterDef()
    Optional list of all workers to use in this session.
ClusterDefOrBuilder getClusterDefOrBuilder()
    Optional list of all workers to use in this session.
Map<String,Integer> getDeviceCount()
    Deprecated. Use getDeviceCountMap() instead.
int getDeviceCountCount()
    Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
Map<String,Integer> getDeviceCountMap()
    Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
int getDeviceCountOrDefault(String key, int defaultValue)
    Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
int getDeviceCountOrThrow(String key)
    Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
String getDeviceFilters(int index)
    When any filters are present sessions will ignore all devices which do not match the filters.
com.google.protobuf.ByteString getDeviceFiltersBytes(int index)
    When any filters are present sessions will ignore all devices which do not match the filters.
int getDeviceFiltersCount()
    When any filters are present sessions will ignore all devices which do not match the filters.
com.google.protobuf.ProtocolStringList getDeviceFiltersList()
    When any filters are present sessions will ignore all devices which do not match the filters.
ConfigProto.Experimental getExperimental()
    .tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder()
    .tensorflow.ConfigProto.Experimental experimental = 16;
GPUOptions getGpuOptions()
    Options that apply to all GPUs.
GPUOptionsOrBuilder getGpuOptionsOrBuilder()
    Options that apply to all GPUs.
GraphOptions getGraphOptions()
    Options that apply to all graphs.
GraphOptionsOrBuilder getGraphOptionsOrBuilder()
    Options that apply to all graphs.
int getInterOpParallelismThreads()
    Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process.
int getIntraOpParallelismThreads()
    The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads.
boolean getIsolateSessionState()
    If true, any resources such as Variables used in the session will not be shared with other sessions.
boolean getLogDevicePlacement()
    Whether device placements should be logged.
long getOperationTimeoutInMs()
    Global timeout for all blocking operations in this session.
int getPlacementPeriod()
    Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up.
GPUOptions getPluggableDeviceOptions()
    Options that apply to pluggable devices.
GPUOptionsOrBuilder getPluggableDeviceOptionsOrBuilder()
    Options that apply to pluggable devices.
RpcOptions.RPCOptions getRpcOptions()
    Options that apply when this session uses the distributed runtime.
RpcOptions.RPCOptionsOrBuilder getRpcOptionsOrBuilder()
    Options that apply when this session uses the distributed runtime.
ThreadPoolOptionProto getSessionInterOpThreadPool(int index)
    This option is experimental - it may be replaced with a different mechanism in the future.
int getSessionInterOpThreadPoolCount()
    This option is experimental - it may be replaced with a different mechanism in the future.
List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList()
    This option is experimental - it may be replaced with a different mechanism in the future.
ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder(int index)
    This option is experimental - it may be replaced with a different mechanism in the future.
List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList()
    This option is experimental - it may be replaced with a different mechanism in the future.
boolean getShareClusterDevicesInSession()
    When true, WorkerSessions are created with device attributes from the full cluster.
boolean getUsePerSessionThreads()
    If true, use a new set of threads for this session rather than the global pool of threads.
boolean hasClusterDef()
    Optional list of all workers to use in this session.
boolean hasExperimental()
    .tensorflow.ConfigProto.Experimental experimental = 16;
boolean hasGpuOptions()
    Options that apply to all GPUs.
boolean hasGraphOptions()
    Options that apply to all graphs.
boolean hasPluggableDeviceOptions()
    Options that apply to pluggable devices.
boolean hasRpcOptions()
    Options that apply when this session uses the distributed runtime.

Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder
isInitialized

Methods inherited from interface com.google.protobuf.MessageOrBuilder
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
-
Method Details
-
getDeviceCountCount

int getDeviceCountCount()

Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.

map<string, int32> device_count = 1;
-
containsDeviceCount

boolean containsDeviceCount(String key)

Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.

map<string, int32> device_count = 1;
-
getDeviceCount

@Deprecated
Map<String,Integer> getDeviceCount()

Deprecated. Use getDeviceCountMap() instead.
-
getDeviceCountMap

Map<String,Integer> getDeviceCountMap()

Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.

map<string, int32> device_count = 1;
-
getDeviceCountOrDefault

int getDeviceCountOrDefault(String key, int defaultValue)

Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.

map<string, int32> device_count = 1;
-
getDeviceCountOrThrow

int getDeviceCountOrThrow(String key)

Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.

map<string, int32> device_count = 1;
-
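Example - a minimal sketch of building and reading the device_count map. The builder methods shown (putDeviceCount, etc.) are the standard protobuf-generated counterparts of the accessors above; the specific caps are illustrative:

import org.tensorflow.framework.ConfigProto;

public class DeviceCountExample {
    public static void main(String[] args) {
        // Cap this process at zero GPUs (CPU-only) and four CPU devices.
        ConfigProto config = ConfigProto.newBuilder()
                .putDeviceCount("GPU", 0)
                .putDeviceCount("CPU", 4)
                .build();

        // Read back through the ConfigProtoOrBuilder accessors;
        // getDeviceCountOrDefault falls back when the key is absent.
        int gpuCap = config.getDeviceCountOrDefault("GPU", -1); // 0
        boolean hasTpu = config.containsDeviceCount("TPU");     // false
        System.out.println("GPU cap: " + gpuCap + ", TPU entry: " + hasTpu);
    }
}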
getIntraOpParallelismThreads
int getIntraOpParallelismThreads()

The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number.

If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool.

There are notable exceptions to the default behavior described above:
1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL.
2. When connecting to a server, such as a remote `tf.train.Server` instance, this option is ignored altogether.

int32 intra_op_parallelism_threads = 2;

Returns:
    The intraOpParallelismThreads.
-
getInterOpParallelismThreads
int getInterOpParallelismThreads()

Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in the caller's thread.

Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.

int32 inter_op_parallelism_threads = 5;

Returns:
    The interOpParallelismThreads.
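Example - a short sketch combining the two parallelism knobs on the generated builder (the specific thread counts are illustrative):

import org.tensorflow.framework.ConfigProto;

public class ParallelismExample {
    public static void main(String[] args) {
        // 0 lets the system choose; a negative inter-op value runs ops
        // on the caller's thread instead of a pool.
        ConfigProto config = ConfigProto.newBuilder()
                .setIntraOpParallelismThreads(2)  // per-op kernel parallelism
                .setInterOpParallelismThreads(4)  // concurrently executing ops
                .build();
        System.out.println(config.getInterOpParallelismThreads()); // 4
    }
}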
-
getUsePerSessionThreads
boolean getUsePerSessionThreads()

If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions.

If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool.

This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.

bool use_per_session_threads = 9;

Returns:
    The usePerSessionThreads.
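Example - the deprecation note above suggests an equivalent configuration; a sketch of both forms (the thread count of 8 is illustrative):

import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.ThreadPoolOptionProto;

public class PerSessionThreadsExample {
    public static void main(String[] args) {
        // Deprecated form: a private thread pool for this session.
        ConfigProto legacy = ConfigProto.newBuilder()
                .setUsePerSessionThreads(true)
                .build();

        // Equivalent form per the note above: one session_inter_op_thread_pool
        // entry whose num_threads equals inter_op_parallelism_threads.
        ConfigProto modern = ConfigProto.newBuilder()
                .setInterOpParallelismThreads(8)
                .addSessionInterOpThreadPool(
                        ThreadPoolOptionProto.newBuilder().setNumThreads(8))
                .build();

        System.out.println(legacy.getUsePerSessionThreads());         // true
        System.out.println(modern.getSessionInterOpThreadPoolCount()); // 1
    }
}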
-
getSessionInterOpThreadPoolList
List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList()

This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use.

The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.

repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
-
getSessionInterOpThreadPool
ThreadPoolOptionProto getSessionInterOpThreadPool(int index)

This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use.

The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.

repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
-
getSessionInterOpThreadPoolCount
int getSessionInterOpThreadPoolCount()

This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use.

The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.

repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
-
getSessionInterOpThreadPoolOrBuilderList
List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList()

This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use.

The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.

repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
-
getSessionInterOpThreadPoolOrBuilder
ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder(int index)

This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use.

The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.

repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
-
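Example - a sketch of the large-pool/small-pool setup described above, with a RunOptions message selecting the small pool by index (the pool sizes and the global_name value are illustrative):

import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.RunOptions;
import org.tensorflow.framework.ThreadPoolOptionProto;

public class SessionThreadPoolExample {
    public static void main(String[] args) {
        ConfigProto config = ConfigProto.newBuilder()
                // Pool 0: large pool for regular compute. global_name is set,
                // as the field comment recommends, so equally configured
                // sessions can share a single large pool.
                .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
                        .setNumThreads(16)
                        .setGlobalName("shared_compute_pool"))
                // Pool 1: small pool for periodic, low priority work.
                .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
                        .setNumThreads(1))
                .build();

        // A Run call selects the small pool through RunOptions
        // (field inter_op_thread_pool).
        RunOptions lowPriority = RunOptions.newBuilder()
                .setInterOpThreadPool(1)
                .build();

        System.out.println(config.getSessionInterOpThreadPoolCount()); // 2
        System.out.println(lowPriority.getInterOpThreadPool());        // 1
    }
}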
getPlacementPeriod
int getPlacementPeriod()

Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).

int32 placement_period = 3;

Returns:
    The placementPeriod.
-
getDeviceFiltersList
com.google.protobuf.ProtocolStringList getDeviceFiltersList()

When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.

repeated string device_filters = 4;

Returns:
    A list containing the deviceFilters.
-
getDeviceFiltersCount
int getDeviceFiltersCount()

When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.

repeated string device_filters = 4;

Returns:
    The count of deviceFilters.
-
getDeviceFilters
String getDeviceFilters(int index)

When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.

repeated string device_filters = 4;

Parameters:
    index - The index of the element to return.
Returns:
    The deviceFilters at the given index.
-
getDeviceFiltersBytes
com.google.protobuf.ByteString getDeviceFiltersBytes(int index)

When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.

repeated string device_filters = 4;

Parameters:
    index - The index of the value to return.
Returns:
    The bytes of the deviceFilters at the given index.
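Example - a sketch using the partially specified filters quoted above:

import org.tensorflow.framework.ConfigProto;

public class DeviceFiltersExample {
    public static void main(String[] args) {
        // Only parameter-server devices and worker replica 3 remain
        // visible; the session ignores every other device.
        ConfigProto config = ConfigProto.newBuilder()
                .addDeviceFilters("/job:ps")
                .addDeviceFilters("/job:worker/replica:3")
                .build();

        System.out.println(config.getDeviceFiltersCount()); // 2
        System.out.println(config.getDeviceFilters(0));     // /job:ps
    }
}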
-
hasGpuOptions
boolean hasGpuOptions()

Options that apply to all GPUs.

.tensorflow.GPUOptions gpu_options = 6;

Returns:
    Whether the gpuOptions field is set.
-
getGpuOptions

GPUOptions getGpuOptions()

Options that apply to all GPUs.

.tensorflow.GPUOptions gpu_options = 6;

Returns:
    The gpuOptions.
-
getGpuOptionsOrBuilder

GPUOptionsOrBuilder getGpuOptionsOrBuilder()

Options that apply to all GPUs.

.tensorflow.GPUOptions gpu_options = 6;
-
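Example - a sketch attaching a GPUOptions message; allow_growth is one GPUOptions field, chosen here purely for illustration:

import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;

public class GpuOptionsExample {
    public static void main(String[] args) {
        // Grow GPU memory on demand rather than reserving it all up front.
        ConfigProto config = ConfigProto.newBuilder()
                .setGpuOptions(GPUOptions.newBuilder().setAllowGrowth(true))
                .build();

        System.out.println(config.hasGpuOptions());                  // true
        System.out.println(config.getGpuOptions().getAllowGrowth()); // true
    }
}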
hasPluggableDeviceOptions
boolean hasPluggableDeviceOptions()

Options that apply to pluggable devices.

.tensorflow.GPUOptions pluggable_device_options = 18;

Returns:
    Whether the pluggableDeviceOptions field is set.
-
getPluggableDeviceOptions

GPUOptions getPluggableDeviceOptions()

Options that apply to pluggable devices.

.tensorflow.GPUOptions pluggable_device_options = 18;

Returns:
    The pluggableDeviceOptions.
-
getPluggableDeviceOptionsOrBuilder

GPUOptionsOrBuilder getPluggableDeviceOptionsOrBuilder()

Options that apply to pluggable devices.

.tensorflow.GPUOptions pluggable_device_options = 18;
-
getAllowSoftPlacement
boolean getAllowSoftPlacement()

Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if
1. there's no GPU implementation for the op, or
2. no GPU devices are known or registered, or
3. it needs to be co-located with reftype input(s) which are from CPU.

bool allow_soft_placement = 7;

Returns:
    The allowSoftPlacement.
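Example - a sketch enabling the CPU fallback described above, paired with placement logging (see getLogDevicePlacement below) so the fallbacks are visible:

import org.tensorflow.framework.ConfigProto;

public class PlacementExample {
    public static void main(String[] args) {
        ConfigProto config = ConfigProto.newBuilder()
                .setAllowSoftPlacement(true)  // fall back to CPU when needed
                .setLogDevicePlacement(true)  // log where each op landed
                .build();
        System.out.println(config.getAllowSoftPlacement()); // true
    }
}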
-
getLogDevicePlacement
boolean getLogDevicePlacement()

Whether device placements should be logged.

bool log_device_placement = 8;

Returns:
    The logDevicePlacement.
-
hasGraphOptions
boolean hasGraphOptions()

Options that apply to all graphs.

.tensorflow.GraphOptions graph_options = 10;

Returns:
    Whether the graphOptions field is set.
-
getGraphOptions

GraphOptions getGraphOptions()

Options that apply to all graphs.

.tensorflow.GraphOptions graph_options = 10;

Returns:
    The graphOptions.
-
getGraphOptionsOrBuilder

GraphOptionsOrBuilder getGraphOptionsOrBuilder()

Options that apply to all graphs.

.tensorflow.GraphOptions graph_options = 10;
-
getOperationTimeoutInMs
long getOperationTimeoutInMs()

Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.

int64 operation_timeout_in_ms = 11;

Returns:
    The operationTimeoutInMs.
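Example - a sketch setting a 60-second global deadline (the duration is illustrative):

import org.tensorflow.framework.ConfigProto;

public class TimeoutExample {
    public static void main(String[] args) {
        ConfigProto config = ConfigProto.newBuilder()
                .setOperationTimeoutInMs(60_000L)
                .build();
        System.out.println(config.getOperationTimeoutInMs()); // 60000
    }
}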
-
hasRpcOptions
boolean hasRpcOptions()

Options that apply when this session uses the distributed runtime.

.tensorflow.RPCOptions rpc_options = 13;

Returns:
    Whether the rpcOptions field is set.
-
getRpcOptions

RpcOptions.RPCOptions getRpcOptions()

Options that apply when this session uses the distributed runtime.

.tensorflow.RPCOptions rpc_options = 13;

Returns:
    The rpcOptions.
-
getRpcOptionsOrBuilder

RpcOptions.RPCOptionsOrBuilder getRpcOptionsOrBuilder()

Options that apply when this session uses the distributed runtime.

.tensorflow.RPCOptions rpc_options = 13;
-
hasClusterDef
boolean hasClusterDef()

Optional list of all workers to use in this session.

.tensorflow.ClusterDef cluster_def = 14;

Returns:
    Whether the clusterDef field is set.
-
getClusterDef

ClusterDef getClusterDef()

Optional list of all workers to use in this session.

.tensorflow.ClusterDef cluster_def = 14;

Returns:
    The clusterDef.
-
getClusterDefOrBuilder

ClusterDefOrBuilder getClusterDefOrBuilder()

Optional list of all workers to use in this session.

.tensorflow.ClusterDef cluster_def = 14;
-
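Example - a sketch of a two-job cluster. The host:port strings are placeholders, and note that in some builds the generated ClusterDef and JobDef classes live in org.tensorflow.distruntime rather than this package:

import org.tensorflow.distruntime.ClusterDef;
import org.tensorflow.distruntime.JobDef;
import org.tensorflow.framework.ConfigProto;

public class ClusterDefExample {
    public static void main(String[] args) {
        // JobDef.tasks is a map<int32, string> from task index to address.
        ClusterDef cluster = ClusterDef.newBuilder()
                .addJob(JobDef.newBuilder()
                        .setName("worker")
                        .putTasks(0, "worker0.example.com:2222")
                        .putTasks(1, "worker1.example.com:2222"))
                .addJob(JobDef.newBuilder()
                        .setName("ps")
                        .putTasks(0, "ps0.example.com:2222"))
                .build();

        ConfigProto config = ConfigProto.newBuilder()
                .setClusterDef(cluster)
                .build();
        System.out.println(config.hasClusterDef()); // true
    }
}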
getIsolateSessionState
boolean getIsolateSessionState()

If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.

bool isolate_session_state = 15;

Returns:
    The isolateSessionState.
-
hasExperimental
boolean hasExperimental()

.tensorflow.ConfigProto.Experimental experimental = 16;

Returns:
    Whether the experimental field is set.
-
getExperimental

ConfigProto.Experimental getExperimental()

.tensorflow.ConfigProto.Experimental experimental = 16;

Returns:
    The experimental.
-
getExperimentalOrBuilder

ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder()

.tensorflow.ConfigProto.Experimental experimental = 16;
-