Interface ConfigProto.ExperimentalOrBuilder

All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
All Known Implementing Classes:
ConfigProto.Experimental, ConfigProto.Experimental.Builder
Enclosing class:
ConfigProto

public static interface ConfigProto.ExperimentalOrBuilder extends com.google.protobuf.MessageOrBuilder
  • Method Details

    • getCollectiveGroupLeader

      String getCollectiveGroupLeader()
       Task name for group resolution.
       
      string collective_group_leader = 1;
      Returns:
      The collectiveGroupLeader.
    • getCollectiveGroupLeaderBytes

      com.google.protobuf.ByteString getCollectiveGroupLeaderBytes()
       Task name for group resolution.
       
      string collective_group_leader = 1;
      Returns:
      The bytes for collectiveGroupLeader.
    • getExecutorType

      String getExecutorType()
       Which executor to use. The default executor is used if this is
       an empty string or "DEFAULT".
       
      string executor_type = 3;
      Returns:
      The executorType.
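The executor-selection rule above can be sketched as a small pure-Java helper; `resolveExecutorType` is a hypothetical name for illustration, not part of the generated API:

```java
// Hypothetical helper mirroring the documented executor_type rule:
// an empty string or "DEFAULT" selects the default executor.
public class ExecutorTypeDemo {
    static String resolveExecutorType(String executorType) {
        if (executorType.isEmpty() || "DEFAULT".equals(executorType)) {
            return "DEFAULT";
        }
        return executorType; // a named, registered executor
    }

    public static void main(String[] args) {
        System.out.println(resolveExecutorType(""));        // prints "DEFAULT"
        System.out.println(resolveExecutorType("DEFAULT")); // prints "DEFAULT"
    }
}
```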
    • getExecutorTypeBytes

      com.google.protobuf.ByteString getExecutorTypeBytes()
       Which executor to use. The default executor is used if this is
       an empty string or "DEFAULT".
       
      string executor_type = 3;
      Returns:
      The bytes for executorType.
    • getRecvBufMaxChunk

      int getRecvBufMaxChunk()
        Guidance for the formatting of large RecvBuf fields for transfer.
       Any positive value sets the max chunk size.  0 defaults to 4096.
       Any negative value indicates no max, i.e. one chunk only.
       
      int32 recv_buf_max_chunk = 4;
      Returns:
      The recvBufMaxChunk.
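The three documented cases for `recv_buf_max_chunk` can be sketched as a pure-Java function; `effectiveMaxChunk` is a hypothetical helper, not part of the runtime:

```java
// Hypothetical sketch of the documented recv_buf_max_chunk rule:
// positive -> that value is the max chunk size, 0 -> 4096 bytes,
// negative -> no maximum, i.e. the whole buffer in one chunk.
public class RecvBufChunkDemo {
    static int effectiveMaxChunk(int recvBufMaxChunk, int totalBytes) {
        if (recvBufMaxChunk > 0) {
            return recvBufMaxChunk;  // explicit maximum
        }
        if (recvBufMaxChunk == 0) {
            return 4096;             // documented default
        }
        return totalBytes;           // negative: single chunk
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxChunk(0, 1 << 20));  // prints 4096
        System.out.println(effectiveMaxChunk(-1, 1 << 20)); // prints 1048576
    }
}
```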
    • getUseNumaAffinity

      boolean getUseNumaAffinity()
       If true, and supported by the platform, the runtime will attempt to
       use NUMA affinity where applicable.  One consequence will be the
       existence of as many CPU devices as there are available NUMA nodes.
       
      bool use_numa_affinity = 5;
      Returns:
      The useNumaAffinity.
    • getCollectiveDeterministicSequentialExecution

      boolean getCollectiveDeterministicSequentialExecution()
       If true, make collective op execution order sequential and deterministic
       for potentially concurrent collective instances.
       
      bool collective_deterministic_sequential_execution = 6;
      Returns:
      The collectiveDeterministicSequentialExecution.
    • getCollectiveNccl

      boolean getCollectiveNccl()
       If true, use NCCL for CollectiveOps.  This feature is highly
       experimental.
       
      bool collective_nccl = 7;
      Returns:
      The collectiveNccl.
    • getShareSessionStateInClusterspecPropagation

      boolean getShareSessionStateInClusterspecPropagation()
       In the following, session state means the value of a variable, elements
       in a hash table, or any other resource, accessible by worker sessions
       held by a TF server.
      
       When ClusterSpec propagation is enabled, the value of
       isolate_session_state is ignored when deciding whether to share session
       states in a TF server (for backwards compatibility reasons).
       - If share_session_state_in_clusterspec_propagation is true, the session
       states are shared.
       - If share_session_state_in_clusterspec_propagation is false, session
       states are isolated.
      
       When clusterspec propagation is not used, the value of
       share_session_state_in_clusterspec_propagation is ignored when deciding
       whether to share session states in a TF server.
       - If isolate_session_state is true, session states are isolated.
       - If isolate_session_state is false, session states are shared.
      
       TODO(b/129330037): Add a single API that consistently treats
       isolate_session_state and ClusterSpec propagation.
       
      bool share_session_state_in_clusterspec_propagation = 8;
      Returns:
      The shareSessionStateInClusterspecPropagation.
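The decision table above can be condensed into one pure-Java predicate; `sessionStateShared` is a hypothetical illustration of the documented logic, not runtime code:

```java
// Hypothetical sketch of the decision table described above: with
// ClusterSpec propagation, share_session_state_in_clusterspec_propagation
// decides and isolate_session_state is ignored; without it, the reverse.
public class SessionStateSharingDemo {
    static boolean sessionStateShared(boolean clusterSpecPropagation,
                                      boolean shareInClusterSpecPropagation,
                                      boolean isolateSessionState) {
        if (clusterSpecPropagation) {
            return shareInClusterSpecPropagation;
        }
        return !isolateSessionState;
    }

    public static void main(String[] args) {
        // isolate_session_state is ignored under ClusterSpec propagation:
        System.out.println(sessionStateShared(true, true, true));  // prints true
        System.out.println(sessionStateShared(false, true, true)); // prints false
    }
}
```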
    • getDisableThreadSpinning

      boolean getDisableThreadSpinning()
       If using a direct session, disable spinning while waiting for work in
       the thread pool. This may result in higher latency for completing ops,
       but may lower CPU usage in cases where spinning would otherwise be
       heavy.
       
      bool disable_thread_spinning = 9;
      Returns:
      The disableThreadSpinning.
    • getShareClusterDevicesInSession

      boolean getShareClusterDevicesInSession()
       This was promoted to a non-experimental API. Please use
       ConfigProto.share_cluster_devices_in_session instead.
       
      bool share_cluster_devices_in_session = 10;
      Returns:
      The shareClusterDevicesInSession.
    • hasSessionMetadata

      boolean hasSessionMetadata()
       Metadata about the session.
      
       If set, this can be used by the runtime and the Ops for debugging,
       monitoring, etc.
      
       NOTE: This is currently used and propagated only by the direct session
       and EagerContext.
       
      .tensorflow.SessionMetadata session_metadata = 11;
      Returns:
      Whether the sessionMetadata field is set.
    • getSessionMetadata

      SessionMetadata getSessionMetadata()
       Metadata about the session.
      
       If set, this can be used by the runtime and the Ops for debugging,
       monitoring, etc.
      
       NOTE: This is currently used and propagated only by the direct session
       and EagerContext.
       
      .tensorflow.SessionMetadata session_metadata = 11;
      Returns:
      The sessionMetadata.
    • getSessionMetadataOrBuilder

      SessionMetadataOrBuilder getSessionMetadataOrBuilder()
       Metadata about the session.
      
       If set, this can be used by the runtime and the Ops for debugging,
       monitoring, etc.
      
       NOTE: This is currently used and propagated only by the direct session
       and EagerContext.
       
      .tensorflow.SessionMetadata session_metadata = 11;
    • getOptimizeForStaticGraph

      boolean getOptimizeForStaticGraph()
       If true, the session may treat the graph as being static for optimization
       purposes.
      
       If this option is set to true when a session is created, the full
       GraphDef must be passed in a single call to Session::Create(), and
       Session::Extend() may not be supported.
       
      bool optimize_for_static_graph = 12;
      Returns:
      The optimizeForStaticGraph.
    • getEnableMlirBridge

      boolean getEnableMlirBridge()
       Whether to enable the MLIR-based TF->XLA bridge. This field takes
       effect only when set to true; the default value (false) is ignored.
       Use mlir_bridge_rollout for finer control.
      
       If this option is set to true when a session is created, MLIR is used to
       perform the set of graph transformations to put the graph in a form that
       can be executed with delegation of some computations to an accelerator.
       This builds on the model of XLA where a subset of the graph is
       encapsulated and attached to a "compile" operation, whose result is fed
       to an "execute" operation. The kernel for these operations is responsible
        for lowering the encapsulated graph to a particular device.
       
      bool enable_mlir_bridge = 13;
      Returns:
      The enableMlirBridge.
    • getMlirBridgeRolloutValue

      int getMlirBridgeRolloutValue()
       Whether to enable the MLIR-based TF->XLA bridge.
       
      .tensorflow.ConfigProto.Experimental.MlirBridgeRollout mlir_bridge_rollout = 17;
      Returns:
      The enum numeric value on the wire for mlirBridgeRollout.
    • getMlirBridgeRollout

      ConfigProto.Experimental.MlirBridgeRollout getMlirBridgeRollout()
       Whether to enable the MLIR-based TF->XLA bridge.
       
      .tensorflow.ConfigProto.Experimental.MlirBridgeRollout mlir_bridge_rollout = 17;
      Returns:
      The mlirBridgeRollout.
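As with any protobuf enum field, a raw wire-number getter (`getMlirBridgeRolloutValue()`) is generated alongside the enum getter. A minimal pure-Java sketch of why both exist; the mini-enum and its constant names here are illustrative assumptions, not copied from the real `MlirBridgeRollout`:

```java
// Hypothetical mini-enum showing the *Value()/enum getter pair that
// protobuf generates for fields like mlir_bridge_rollout.
public class RolloutDemo {
    enum Rollout {
        UNSPECIFIED(0), ENABLED(1), DISABLED(2);
        final int number;
        Rollout(int number) { this.number = number; }

        // Returns null for wire values this binary does not know about,
        // which is why the raw numeric getter is also generated.
        static Rollout forNumber(int n) {
            for (Rollout r : values()) {
                if (r.number == n) return r;
            }
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(Rollout.forNumber(1));  // prints ENABLED
        System.out.println(Rollout.forNumber(99)); // prints null
    }
}
```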
    • getEnableMlirGraphOptimization

      boolean getEnableMlirGraphOptimization()
       Whether to enable the MLIR-based Graph optimizations.
      
        This will become part of the standard TensorFlow graph optimization
        pipeline; currently it is used only for gradual migration and for
        testing new passes that replace existing Grappler optimizations.
       
      bool enable_mlir_graph_optimization = 16;
      Returns:
      The enableMlirGraphOptimization.
    • getDisableOutputPartitionGraphs

      boolean getDisableOutputPartitionGraphs()
       If true, the session will not store an additional copy of the graph for
       each subgraph.
      
       If this option is set to true when a session is created, the
       `RunOptions.output_partition_graphs` options must not be set.
       
      bool disable_output_partition_graphs = 14;
      Returns:
      The disableOutputPartitionGraphs.
    • getXlaFusionAutotunerThresh

      long getXlaFusionAutotunerThresh()
       Minimum number of batches run through the XLA graph before XLA fusion
       autotuner is enabled. Default value of zero disables the autotuner.
      
       The XLA fusion autotuner can improve performance by executing a heuristic
       search on the compiler parameters.
       
      int64 xla_fusion_autotuner_thresh = 15;
      Returns:
      The xlaFusionAutotunerThresh.
    • getUseTfrt

      boolean getUseTfrt()
       Whether runtime execution uses TFRT.
       
      bool use_tfrt = 18;
      Returns:
      The useTfrt.
    • getEnableMultiHost

      boolean getEnableMultiHost()
       If true, use Pathways with TFRT API for multi host support.
       
      bool enable_multi_host = 27;
      Returns:
      The enableMultiHost.
    • getTfrtUseIfrt

      boolean getTfrtUseIfrt()
        If true, use IFRT as the backend for TFRT. This is only used when
       `use_tfrt` is true.
       
      bool tfrt_use_ifrt = 32;
      Returns:
      The tfrtUseIfrt.
    • getBackendServerPort

      int getBackendServerPort()
       Port for the Pathways server. Ignored if enable_multi_host=false.
       
      int32 backend_server_port = 28;
      Returns:
      The backendServerPort.
    • getTargetTpu

      boolean getTargetTpu()
       If true, TFRT will use TPU specific compiler passes and perform TPU
       specific initialization.
       
      bool target_tpu = 29;
      Returns:
      The targetTpu.
    • getTargetGpu

      boolean getTargetGpu()
       If true, TFRT will use GPU specific compiler passes and perform GPU
       specific initialization.
       
      bool target_gpu = 30;
      Returns:
      The targetGpu.
    • getStreamMergeThreshold

      int getStreamMergeThreshold()
        The threshold for merging small streams in TFRT. Streams with cost
        smaller than the threshold will be merged. Setting it to 1
        disables all merging.
       
      int32 stream_merge_threshold = 31;
      Returns:
      The streamMergeThreshold.
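The merge rule can be sketched as a one-line predicate; `shouldMerge` is a hypothetical helper, and the assumption that stream costs are at least 1 (so a threshold of 1 disables merging) follows from the documented behavior:

```java
// Hypothetical sketch of the documented merge rule: a stream merges
// when its cost is strictly below stream_merge_threshold. Assuming
// stream costs are at least 1, a threshold of 1 disables merging.
public class StreamMergeDemo {
    static boolean shouldMerge(int streamCost, int streamMergeThreshold) {
        return streamCost < streamMergeThreshold;
    }

    public static void main(String[] args) {
        System.out.println(shouldMerge(3, 10)); // prints true
        System.out.println(shouldMerge(3, 1));  // prints false
    }
}
```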
    • getDisableFunctionalOpsLowering

      boolean getDisableFunctionalOpsLowering()
       Whether functional control flow op lowering should be disabled. This is
       useful when executing within a portable runtime where control flow op
       kernels may not be loaded due to selective registration.
       
      bool disable_functional_ops_lowering = 21;
      Returns:
      The disableFunctionalOpsLowering.
    • getXlaPreferSingleGraphCluster

      boolean getXlaPreferSingleGraphCluster()
       Provides a hint to XLA auto clustering to prefer forming a single large
        cluster that encompasses most of the graph.
       
      bool xla_prefer_single_graph_cluster = 22;
      Returns:
      The xlaPreferSingleGraphCluster.
    • hasCoordinationConfig

      boolean hasCoordinationConfig()
       Distributed coordination service configurations.
       
      .tensorflow.CoordinationServiceConfig coordination_config = 23;
      Returns:
      Whether the coordinationConfig field is set.
    • getCoordinationConfig

      CoordinationServiceConfig getCoordinationConfig()
       Distributed coordination service configurations.
       
      .tensorflow.CoordinationServiceConfig coordination_config = 23;
      Returns:
      The coordinationConfig.
    • getCoordinationConfigOrBuilder

      CoordinationServiceConfigOrBuilder getCoordinationConfigOrBuilder()
       Distributed coordination service configurations.
       
      .tensorflow.CoordinationServiceConfig coordination_config = 23;
    • getDisableOptimizeForStaticGraph

      boolean getDisableOptimizeForStaticGraph()
       If true, the session will treat the graph as being non-static for
       optimization purposes.
      
       If this option is set to true when a session is created, the full
       GraphDef will be retained to enable calls to Session::Extend().
       Calling Extend() without setting this flag will result in errors.
      
        This option is meant to replace `optimize_for_static_graph` with the
        opposite (negated) meaning.
       
      bool disable_optimize_for_static_graph = 24;
      Returns:
      The disableOptimizeForStaticGraph.
    • getDisableEagerExecutorStreamingEnqueue

      boolean getDisableEagerExecutorStreamingEnqueue()
       Whether eager remote execution will stream all the function calls or
       allow them to happen in parallel. When true, streaming execution is
       disabled, and parallel execution is allowed.
       
      bool disable_eager_executor_streaming_enqueue = 26;
      Returns:
      The disableEagerExecutorStreamingEnqueue.