Package org.tensorflow.framework
Interface RewriterConfigOrBuilder
- All Superinterfaces:
com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
- All Known Implementing Classes:
RewriterConfig, RewriterConfig.Builder
public interface RewriterConfigOrBuilder
extends com.google.protobuf.MessageOrBuilder
-
Method Summary
- RewriterConfig.Toggle getLayoutOptimizer(), int getLayoutOptimizerValue(): Optimize tensor layouts (default is ON).
- RewriterConfig.Toggle getConstantFolding(), int getConstantFoldingValue(): Fold constants (default is ON).
- RewriterConfig.Toggle getShapeOptimization(), int getShapeOptimizationValue(): Shape optimizations (default is ON).
- RewriterConfig.Toggle getRemapping(), int getRemappingValue(): Remapping (default is ON).
- RewriterConfig.Toggle getCommonSubgraphElimination(), int getCommonSubgraphEliminationValue(): Common subgraph elimination (default is ON).
- RewriterConfig.Toggle getArithmeticOptimization(), int getArithmeticOptimizationValue(): Arithmetic optimizations (default is ON).
- RewriterConfig.Toggle getDependencyOptimization(), int getDependencyOptimizationValue(): Control dependency optimizations (default is ON).
- RewriterConfig.Toggle getLoopOptimization(), int getLoopOptimizationValue(): Loop optimizations (default is ON).
- RewriterConfig.Toggle getFunctionOptimization(), int getFunctionOptimizationValue(): Function optimizations (default is ON).
- RewriterConfig.Toggle getDebugStripper(), int getDebugStripperValue(): Strips debug-related nodes from the graph (off by default).
- boolean getDisableModelPruning(): If true, don't remove unnecessary ops from the graph.
- RewriterConfig.Toggle getScopedAllocatorOptimization(), int getScopedAllocatorOptimizationValue(): Try to allocate some independent Op outputs contiguously (off by default).
- RewriterConfig.Toggle getPinToHostOptimization(), int getPinToHostOptimizationValue(): Force small ops onto the CPU (default is OFF).
- RewriterConfig.Toggle getImplementationSelector(), int getImplementationSelectorValue(): Enable the swap of kernel implementations based on the device placement (default is ON).
- RewriterConfig.Toggle getAutoMixedPrecision(), int getAutoMixedPrecisionValue(): Optimize data types for CUDA/oneDNN (default is OFF).
- RewriterConfig.Toggle getAutoMixedPrecisionMkl(), int getAutoMixedPrecisionMklValue(): Optimize data types for oneDNN (default is OFF; deprecated).
- RewriterConfig.Toggle getAutoMixedPrecisionOnednnBfloat16(), int getAutoMixedPrecisionOnednnBfloat16Value(): Optimize data types for oneDNN (default is OFF).
- RewriterConfig.Toggle getAutoMixedPrecisionCpu(), int getAutoMixedPrecisionCpuValue(): Emulate a model using data type float16 on CPU (default is OFF).
- boolean getDisableMetaOptimizer(): Disable the entire meta optimizer (off by default).
- boolean getDisableTfgOptimizer(): Disable the TFG optimizer (off by default).
- RewriterConfig.Toggle getUsePluginOptimizers(), int getUsePluginOptimizersValue(): Optimizers registered by plugin (default is ON).
- RewriterConfig.Toggle getExperimentalConditionalCodeMotion(), int getExperimentalConditionalCodeMotionValue(): Conditional code motion (default is ON).
- RewriterConfig.NumIterationsType getMetaOptimizerIterations(), int getMetaOptimizerIterationsValue(): Controls how many times the optimizers run in the meta optimizer (default is once).
- int getMinGraphNodes(): The minimum number of nodes in a graph to optimize.
- boolean getExperimentalDisableCompressedTensorOptimization(): Disable optimizations that assume compressed tensors.
- boolean getExperimentalDisableFoldingQuantizationEmulation(): Disable folding quantization emulation ops such as FakeQuantWithMinMax* and QuantizeAndDequantize*.
- boolean getFailOnOptimizerErrors(): If true, any optimization pass failing will cause the MetaOptimizer to stop with an error.
- RewriterConfig.MemOptType getMemoryOptimization(), int getMemoryOptimizationValue(): Configures memory optimization passes through the meta-optimizer.
- String getMemoryOptimizerTargetNodeNameScope(), com.google.protobuf.ByteString getMemoryOptimizerTargetNodeNameScopeBytes(): A node name scope for node names which are valid outputs of recomputations.
- long getMetaOptimizerTimeoutMs(): Maximum number of milliseconds to spend optimizing a single graph before timing out.
- RewriterConfig.CpuLayout getCpuLayoutConversion(), int getCpuLayoutConversionValue(): CPU conversion settings between NHWC and NCHW.
- boolean hasAutoParallel(), AutoParallelOptions getAutoParallel(), AutoParallelOptionsOrBuilder getAutoParallelOrBuilder(): Configures AutoParallel optimization passes either through the meta-optimizer or when manually specified through the optimizers field.
- boolean hasScopedAllocatorOpts(), ScopedAllocatorOptions getScopedAllocatorOpts(), ScopedAllocatorOptionsOrBuilder getScopedAllocatorOptsOrBuilder(): Scoped allocator options.
- List<String> getOptimizersList(), int getOptimizersCount(), String getOptimizers(int index), com.google.protobuf.ByteString getOptimizersBytes(int index): If non-empty, an alternative way to specify a list of optimizations to turn on and their order (replacing the meta-optimizer).
- List<RewriterConfig.CustomGraphOptimizer> getCustomOptimizersList(), RewriterConfig.CustomGraphOptimizer getCustomOptimizers(int index), int getCustomOptimizersCount(), List<? extends RewriterConfig.CustomGraphOptimizerOrBuilder> getCustomOptimizersOrBuilderList(), RewriterConfig.CustomGraphOptimizerOrBuilder getCustomOptimizersOrBuilder(int index): List of CustomGraphOptimizers to apply.
- boolean hasInterOptimizerVerifierConfig(), VerifierConfig getInterOptimizerVerifierConfig(), VerifierConfigOrBuilder getInterOptimizerVerifierConfigOrBuilder(): VerifierConfig specifying the verifiers to be run after every optimizer.
- boolean hasPostOptimizationVerifierConfig(), VerifierConfig getPostOptimizationVerifierConfig(), VerifierConfigOrBuilder getPostOptimizationVerifierConfigOrBuilder(): VerifierConfig specifying the verifiers to be run at the end, after all optimizers have run.

Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder
isInitialized

Methods inherited from interface com.google.protobuf.MessageOrBuilder
findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
-
Method Details
-
getCpuLayoutConversionValue
int getCpuLayoutConversionValue()
CPU conversion settings between NHWC and NCHW.
.tensorflow.RewriterConfig.CpuLayout cpu_layout_conversion = 50;
- Returns: The enum numeric value on the wire for cpuLayoutConversion.
-
getCpuLayoutConversion
RewriterConfig.CpuLayout getCpuLayoutConversion()
CPU conversion settings between NHWC and NCHW.
.tensorflow.RewriterConfig.CpuLayout cpu_layout_conversion = 50;
- Returns: The cpuLayoutConversion.
-
getLayoutOptimizerValue
int getLayoutOptimizerValue()
Optimize tensor layouts (default is ON), e.g. this will try to use NCHW layout on GPU, which is faster.
.tensorflow.RewriterConfig.Toggle layout_optimizer = 1;
- Returns: The enum numeric value on the wire for layoutOptimizer.
-
getLayoutOptimizer
RewriterConfig.Toggle getLayoutOptimizer()
Optimize tensor layouts (default is ON), e.g. this will try to use NCHW layout on GPU, which is faster.
.tensorflow.RewriterConfig.Toggle layout_optimizer = 1;
- Returns: The layoutOptimizer.
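The Toggle-valued accessors come in pairs: an enum getter and a raw wire-value getter, both visible through this interface on RewriterConfig and RewriterConfig.Builder alike. A minimal sketch of that pattern, assuming the org.tensorflow.framework proto bindings and protobuf-java are on the classpath:

```java
import org.tensorflow.framework.RewriterConfig;
import org.tensorflow.framework.RewriterConfigOrBuilder;

public class LayoutToggleExample {
    public static void main(String[] args) {
        // Build a config that turns the layout optimizer off.
        RewriterConfig config = RewriterConfig.newBuilder()
                .setLayoutOptimizer(RewriterConfig.Toggle.OFF)
                .build();

        // Read it back through the OrBuilder view: enum getter and wire value.
        RewriterConfigOrBuilder view = config;
        System.out.println(view.getLayoutOptimizer()); // OFF
        System.out.println(
                view.getLayoutOptimizerValue() == RewriterConfig.Toggle.OFF.getNumber()); // true
    }
}
```

The same two-getter pattern applies to every Toggle, MemOptType, NumIterationsType, and CpuLayout field below.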
-
getConstantFoldingValue
int getConstantFoldingValue()
Fold constants (default is ON). Statically infer the value of tensors when possible, and materialize the result using constants.
.tensorflow.RewriterConfig.Toggle constant_folding = 3;
- Returns: The enum numeric value on the wire for constantFolding.
-
getConstantFolding
RewriterConfig.Toggle getConstantFolding()
Fold constants (default is ON). Statically infer the value of tensors when possible, and materialize the result using constants.
.tensorflow.RewriterConfig.Toggle constant_folding = 3;
- Returns: The constantFolding.
-
getShapeOptimizationValue
int getShapeOptimizationValue()
Shape optimizations (default is ON). Simplify computations made on shapes.
.tensorflow.RewriterConfig.Toggle shape_optimization = 13;
- Returns: The enum numeric value on the wire for shapeOptimization.
-
getShapeOptimization
RewriterConfig.Toggle getShapeOptimization()
Shape optimizations (default is ON). Simplify computations made on shapes.
.tensorflow.RewriterConfig.Toggle shape_optimization = 13;
- Returns: The shapeOptimization.
-
getRemappingValue
int getRemappingValue()
Remapping (default is ON). Remap subgraphs onto more efficient implementations.
.tensorflow.RewriterConfig.Toggle remapping = 14;
- Returns: The enum numeric value on the wire for remapping.
-
getRemapping
RewriterConfig.Toggle getRemapping()
Remapping (default is ON). Remap subgraphs onto more efficient implementations.
.tensorflow.RewriterConfig.Toggle remapping = 14;
- Returns: The remapping.
-
getCommonSubgraphEliminationValue
int getCommonSubgraphEliminationValue()
Common subgraph elimination (default is ON), e.g. simplify arithmetic ops; merge ops with the same value (like constants).
.tensorflow.RewriterConfig.Toggle common_subgraph_elimination = 24;
- Returns: The enum numeric value on the wire for commonSubgraphElimination.
-
getCommonSubgraphElimination
RewriterConfig.Toggle getCommonSubgraphElimination()
Common subgraph elimination (default is ON), e.g. simplify arithmetic ops; merge ops with the same value (like constants).
.tensorflow.RewriterConfig.Toggle common_subgraph_elimination = 24;
- Returns: The commonSubgraphElimination.
-
getArithmeticOptimizationValue
int getArithmeticOptimizationValue()
Arithmetic optimizations (default is ON), e.g. simplify arithmetic ops; merge ops with the same value (like constants).
.tensorflow.RewriterConfig.Toggle arithmetic_optimization = 7;
- Returns: The enum numeric value on the wire for arithmeticOptimization.
-
getArithmeticOptimization
RewriterConfig.Toggle getArithmeticOptimization()
Arithmetic optimizations (default is ON), e.g. simplify arithmetic ops; merge ops with the same value (like constants).
.tensorflow.RewriterConfig.Toggle arithmetic_optimization = 7;
- Returns: The arithmeticOptimization.
-
getDependencyOptimizationValue
int getDependencyOptimizationValue()
Control dependency optimizations (default is ON). Remove redundant control dependencies, which may enable other optimizations.
.tensorflow.RewriterConfig.Toggle dependency_optimization = 8;
- Returns: The enum numeric value on the wire for dependencyOptimization.
-
getDependencyOptimization
RewriterConfig.Toggle getDependencyOptimization()
Control dependency optimizations (default is ON). Remove redundant control dependencies, which may enable other optimizations.
.tensorflow.RewriterConfig.Toggle dependency_optimization = 8;
- Returns: The dependencyOptimization.
-
getLoopOptimizationValue
int getLoopOptimizationValue()
Loop optimizations (default is ON).
.tensorflow.RewriterConfig.Toggle loop_optimization = 9;
- Returns: The enum numeric value on the wire for loopOptimization.
-
getLoopOptimization
RewriterConfig.Toggle getLoopOptimization()
Loop optimizations (default is ON).
.tensorflow.RewriterConfig.Toggle loop_optimization = 9;
- Returns: The loopOptimization.
-
getFunctionOptimizationValue
int getFunctionOptimizationValue()
Function optimizations (default is ON).
.tensorflow.RewriterConfig.Toggle function_optimization = 10;
- Returns: The enum numeric value on the wire for functionOptimization.
-
getFunctionOptimization
RewriterConfig.Toggle getFunctionOptimization()
Function optimizations (default is ON).
.tensorflow.RewriterConfig.Toggle function_optimization = 10;
- Returns: The functionOptimization.
-
getDebugStripperValue
int getDebugStripperValue()
Strips debug-related nodes from the graph (off by default).
.tensorflow.RewriterConfig.Toggle debug_stripper = 11;
- Returns: The enum numeric value on the wire for debugStripper.
-
getDebugStripper
RewriterConfig.Toggle getDebugStripper()
Strips debug-related nodes from the graph (off by default).
.tensorflow.RewriterConfig.Toggle debug_stripper = 11;
- Returns: The debugStripper.
-
getDisableModelPruning
boolean getDisableModelPruning()
If true, don't remove unnecessary ops from the graph.
bool disable_model_pruning = 2;
- Returns: The disableModelPruning.
-
getScopedAllocatorOptimizationValue
int getScopedAllocatorOptimizationValue()
Try to allocate some independent Op outputs contiguously in order to merge or eliminate downstream Ops (off by default).
.tensorflow.RewriterConfig.Toggle scoped_allocator_optimization = 15;
- Returns: The enum numeric value on the wire for scopedAllocatorOptimization.
-
getScopedAllocatorOptimization
RewriterConfig.Toggle getScopedAllocatorOptimization()
Try to allocate some independent Op outputs contiguously in order to merge or eliminate downstream Ops (off by default).
.tensorflow.RewriterConfig.Toggle scoped_allocator_optimization = 15;
- Returns: The scopedAllocatorOptimization.
-
getPinToHostOptimizationValue
int getPinToHostOptimizationValue()
Force small ops onto the CPU (default is OFF).
.tensorflow.RewriterConfig.Toggle pin_to_host_optimization = 18;
- Returns: The enum numeric value on the wire for pinToHostOptimization.
-
getPinToHostOptimization
RewriterConfig.Toggle getPinToHostOptimization()
Force small ops onto the CPU (default is OFF).
.tensorflow.RewriterConfig.Toggle pin_to_host_optimization = 18;
- Returns: The pinToHostOptimization.
-
getImplementationSelectorValue
int getImplementationSelectorValue()
Enable the swap of kernel implementations based on the device placement (default is ON).
.tensorflow.RewriterConfig.Toggle implementation_selector = 22;
- Returns: The enum numeric value on the wire for implementationSelector.
-
getImplementationSelector
RewriterConfig.Toggle getImplementationSelector()
Enable the swap of kernel implementations based on the device placement (default is ON).
.tensorflow.RewriterConfig.Toggle implementation_selector = 22;
- Returns: The implementationSelector.
-
getAutoMixedPrecisionValue
int getAutoMixedPrecisionValue()
Optimize data types for CUDA/oneDNN (default is OFF). This will try to use float16 on GPU/CPU, which is faster. Note that this can change the numerical stability of the graph and may require the use of loss scaling to maintain model convergence.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision = 23;
- Returns: The enum numeric value on the wire for autoMixedPrecision.
-
getAutoMixedPrecision
RewriterConfig.Toggle getAutoMixedPrecision()
Optimize data types for CUDA/oneDNN (default is OFF). This will try to use float16 on GPU/CPU, which is faster. Note that this can change the numerical stability of the graph and may require the use of loss scaling to maintain model convergence.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision = 23;
- Returns: The autoMixedPrecision.
-
getAutoMixedPrecisionMklValue
int getAutoMixedPrecisionMklValue()
Optimize data types for oneDNN (default is OFF). This will try to use bfloat16 on CPUs, which is faster. Note that this can change the numerical stability of the graph. Note: this is deprecated; it is replaced by auto_mixed_precision_onednn_bfloat16.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_mkl = 25;
- Returns: The enum numeric value on the wire for autoMixedPrecisionMkl.
-
getAutoMixedPrecisionMkl
RewriterConfig.Toggle getAutoMixedPrecisionMkl()
Optimize data types for oneDNN (default is OFF). This will try to use bfloat16 on CPUs, which is faster. Note that this can change the numerical stability of the graph. Note: this is deprecated; it is replaced by auto_mixed_precision_onednn_bfloat16.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_mkl = 25;
- Returns: The autoMixedPrecisionMkl.
-
getAutoMixedPrecisionOnednnBfloat16Value
int getAutoMixedPrecisionOnednnBfloat16Value()
Optimize data types for oneDNN (default is OFF). This will try to use bfloat16 on CPUs, which is faster. Note that this can change the numerical stability of the graph. Note: this is equivalent to the deprecated option auto_mixed_precision_mkl.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_onednn_bfloat16 = 31;
- Returns: The enum numeric value on the wire for autoMixedPrecisionOnednnBfloat16.
-
getAutoMixedPrecisionOnednnBfloat16
RewriterConfig.Toggle getAutoMixedPrecisionOnednnBfloat16()
Optimize data types for oneDNN (default is OFF). This will try to use bfloat16 on CPUs, which is faster. Note that this can change the numerical stability of the graph. Note: this is equivalent to the deprecated option auto_mixed_precision_mkl.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_onednn_bfloat16 = 31;
- Returns: The autoMixedPrecisionOnednnBfloat16.
-
getAutoMixedPrecisionCpuValue
int getAutoMixedPrecisionCpuValue()
Emulate a model using data type float16 on CPU (default is OFF). This will try to emulate the float16 inputs and outputs of an operator on CPU to have better correlation with float16 on GPU; however, the computation in the operator is based on float32. Note that this can change the numerical stability of the graph.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_cpu = 29;
- Returns: The enum numeric value on the wire for autoMixedPrecisionCpu.
-
getAutoMixedPrecisionCpu
RewriterConfig.Toggle getAutoMixedPrecisionCpu()
Emulate a model using data type float16 on CPU (default is OFF). This will try to emulate the float16 inputs and outputs of an operator on CPU to have better correlation with float16 on GPU; however, the computation in the operator is based on float32. Note that this can change the numerical stability of the graph.
.tensorflow.RewriterConfig.Toggle auto_mixed_precision_cpu = 29;
- Returns: The autoMixedPrecisionCpu.
-
getDisableMetaOptimizer
boolean getDisableMetaOptimizer()
Disable the entire meta optimizer (off by default).
bool disable_meta_optimizer = 19;
- Returns: The disableMetaOptimizer.
-
getDisableTfgOptimizer
boolean getDisableTfgOptimizer()
Disable the TFG optimizer (off by default).
bool disable_tfg_optimizer = 32;
- Returns: The disableTfgOptimizer.
-
getUsePluginOptimizersValue
int getUsePluginOptimizersValue()
Optimizers registered by plugin (default is ON).
.tensorflow.RewriterConfig.Toggle use_plugin_optimizers = 28;
- Returns: The enum numeric value on the wire for usePluginOptimizers.
-
getUsePluginOptimizers
RewriterConfig.Toggle getUsePluginOptimizers()
Optimizers registered by plugin (default is ON).
.tensorflow.RewriterConfig.Toggle use_plugin_optimizers = 28;
- Returns: The usePluginOptimizers.
-
getExperimentalConditionalCodeMotionValue
int getExperimentalConditionalCodeMotionValue()
Conditional code motion (default is ON).
.tensorflow.RewriterConfig.Toggle experimental_conditional_code_motion = 30;
- Returns: The enum numeric value on the wire for experimentalConditionalCodeMotion.
-
getExperimentalConditionalCodeMotion
RewriterConfig.Toggle getExperimentalConditionalCodeMotion()
Conditional code motion (default is ON).
.tensorflow.RewriterConfig.Toggle experimental_conditional_code_motion = 30;
- Returns: The experimentalConditionalCodeMotion.
-
getMetaOptimizerIterationsValue
int getMetaOptimizerIterationsValue()
Controls how many times the optimizers run in the meta optimizer (default is once).
.tensorflow.RewriterConfig.NumIterationsType meta_optimizer_iterations = 12;
- Returns: The enum numeric value on the wire for metaOptimizerIterations.
-
getMetaOptimizerIterations
RewriterConfig.NumIterationsType getMetaOptimizerIterations()
Controls how many times the optimizers run in the meta optimizer (default is once).
.tensorflow.RewriterConfig.NumIterationsType meta_optimizer_iterations = 12;
- Returns: The metaOptimizerIterations.
-
getMinGraphNodes
int getMinGraphNodes()
The minimum number of nodes in a graph to optimize. For smaller graphs, optimization is skipped. 0 means the system picks an appropriate number. < 0 means do not skip optimization.
int32 min_graph_nodes = 17;
- Returns: The minGraphNodes.
-
getExperimentalDisableCompressedTensorOptimization
boolean getExperimentalDisableCompressedTensorOptimization()
Disable optimizations that assume compressed tensors. Note that this flag is experimental and may be removed in the future.
bool experimental_disable_compressed_tensor_optimization = 26;
- Returns: The experimentalDisableCompressedTensorOptimization.
-
getExperimentalDisableFoldingQuantizationEmulation
boolean getExperimentalDisableFoldingQuantizationEmulation()
Disable folding quantization emulation ops such as FakeQuantWithMinMax* and QuantizeAndDequantize*. Some compilers (e.g. the TF-to-tflite converter) have to extract quantization configs (e.g. min/max range, number of bits, and per-channel) from the quantization emulation ops. Note that this flag is experimental and may be removed in the future. See b/174138564 for more details.
bool experimental_disable_folding_quantization_emulation = 27;
- Returns: The experimentalDisableFoldingQuantizationEmulation.
-
getMemoryOptimizationValue
int getMemoryOptimizationValue()
Configures memory optimization passes through the meta-optimizer. Has no effect on manually requested memory optimization passes in the optimizers field.
.tensorflow.RewriterConfig.MemOptType memory_optimization = 4;
- Returns: The enum numeric value on the wire for memoryOptimization.
-
getMemoryOptimization
RewriterConfig.MemOptType getMemoryOptimization()
Configures memory optimization passes through the meta-optimizer. Has no effect on manually requested memory optimization passes in the optimizers field.
.tensorflow.RewriterConfig.MemOptType memory_optimization = 4;
- Returns: The memoryOptimization.
-
getMemoryOptimizerTargetNodeNameScope
String getMemoryOptimizerTargetNodeNameScope()
A node name scope for node names which are valid outputs of recomputations. Inputs to nodes that match this scope may be recomputed (subject either to manual annotation of those input nodes or to manual annotation and heuristics depending on memory_optimization), but the nodes themselves will not be recomputed. This matches any sub-scopes as well, meaning the scope can appear not just as a top-level scope. For example, if the value is "gradients/" (the default), it will match the node names "gradients/foo" and "foo/gradients/bar", but not "foo_gradients/".
string memory_optimizer_target_node_name_scope = 6;
- Returns: The memoryOptimizerTargetNodeNameScope.
-
getMemoryOptimizerTargetNodeNameScopeBytes
com.google.protobuf.ByteString getMemoryOptimizerTargetNodeNameScopeBytes()
A node name scope for node names which are valid outputs of recomputations. Inputs to nodes that match this scope may be recomputed (subject either to manual annotation of those input nodes or to manual annotation and heuristics depending on memory_optimization), but the nodes themselves will not be recomputed. This matches any sub-scopes as well, meaning the scope can appear not just as a top-level scope. For example, if the value is "gradients/" (the default), it will match the node names "gradients/foo" and "foo/gradients/bar", but not "foo_gradients/".
string memory_optimizer_target_node_name_scope = 6;
- Returns: The bytes for memoryOptimizerTargetNodeNameScope.
-
getMetaOptimizerTimeoutMs
long getMetaOptimizerTimeoutMs()
Maximum number of milliseconds to spend optimizing a single graph before timing out. If less than or equal to 0 (the default value), the optimizer will never time out.
int64 meta_optimizer_timeout_ms = 20;
- Returns: The metaOptimizerTimeoutMs.
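The meta-optimizer controls (iteration count, minimum graph size, timeout) are typically set together on one builder chain. A sketch, assuming the org.tensorflow.framework proto bindings are on the classpath; the chosen values are illustrative only:

```java
import org.tensorflow.framework.RewriterConfig;

public class MetaOptimizerTuning {
    public static void main(String[] args) {
        RewriterConfig config = RewriterConfig.newBuilder()
                .setMetaOptimizerIterations(RewriterConfig.NumIterationsType.TWO)
                .setMinGraphNodes(-1)               // < 0 means: never skip optimization
                .setMetaOptimizerTimeoutMs(60_000)  // give up on a graph after one minute
                .build();

        System.out.println(config.getMetaOptimizerTimeoutMs()); // 60000
        System.out.println(config.getMinGraphNodes());          // -1
    }
}
```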
-
hasAutoParallel
boolean hasAutoParallel()
Configures AutoParallel optimization passes either through the meta-optimizer or when manually specified through the optimizers field.
.tensorflow.AutoParallelOptions auto_parallel = 5;
- Returns: Whether the autoParallel field is set.
-
getAutoParallel
AutoParallelOptions getAutoParallel()
Configures AutoParallel optimization passes either through the meta-optimizer or when manually specified through the optimizers field.
.tensorflow.AutoParallelOptions auto_parallel = 5;
- Returns: The autoParallel.
-
getAutoParallelOrBuilder
AutoParallelOptionsOrBuilder getAutoParallelOrBuilder()
Configures AutoParallel optimization passes either through the meta-optimizer or when manually specified through the optimizers field.
.tensorflow.AutoParallelOptions auto_parallel = 5;
-
getFailOnOptimizerErrors
boolean getFailOnOptimizerErrors()
If true, any optimization pass failing will cause the MetaOptimizer to stop with an error. By default, or when set to false, failing passes are skipped silently.
bool fail_on_optimizer_errors = 21;
- Returns: The failOnOptimizerErrors.
-
hasScopedAllocatorOpts
boolean hasScopedAllocatorOpts()
.tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
- Returns: Whether the scopedAllocatorOpts field is set.
-
getScopedAllocatorOpts
ScopedAllocatorOptions getScopedAllocatorOpts()
.tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
- Returns: The scopedAllocatorOpts.
-
getScopedAllocatorOptsOrBuilder
ScopedAllocatorOptionsOrBuilder getScopedAllocatorOptsOrBuilder()
.tensorflow.ScopedAllocatorOptions scoped_allocator_opts = 16;
-
getOptimizersList
List<String> getOptimizersList()
If non-empty, will use this as an alternative way to specify a list of optimizations to turn on and the order of the optimizations (replacing the meta-optimizer). Of the RewriterConfig options, only the AutoParallel configuration options (the auto_parallel field) apply to manually requested optimization passes ("autoparallel"). Memory optimization passes ("memory") invoked here are not configurable (in contrast to memory optimization passes through the meta-optimizer) and act only on manual op annotations. Custom optimizers (see custom_optimizers) that are not part of this schedule will be run after, in the order that they were specified.
repeated string optimizers = 100;
- Returns: A list containing the optimizers.
-
getOptimizersCount
int getOptimizersCount()
If non-empty, will use this as an alternative way to specify a list of optimizations to turn on and the order of the optimizations (replacing the meta-optimizer).
repeated string optimizers = 100;
- Returns: The count of optimizers.
-
getOptimizers
String getOptimizers(int index)
If non-empty, will use this as an alternative way to specify a list of optimizations to turn on and the order of the optimizations (replacing the meta-optimizer).
repeated string optimizers = 100;
- Parameters: index - The index of the element to return.
- Returns: The optimizers at the given index.
-
getOptimizersBytes
com.google.protobuf.ByteString getOptimizersBytes(int index)
If non-empty, will use this as an alternative way to specify a list of optimizations to turn on and the order of the optimizations (replacing the meta-optimizer).
repeated string optimizers = 100;
- Parameters: index - The index of the value to return.
- Returns: The bytes of the optimizers at the given index.
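The repeated optimizers field follows the standard protobuf repeated-string accessor pattern: a List getter, a count, and indexed getters. A sketch, assuming the org.tensorflow.framework proto bindings on the classpath; "memory" and "autoparallel" are the pass names named in the field description above, and "constfold" is the Grappler constant-folding pass:

```java
import org.tensorflow.framework.RewriterConfig;

public class ManualPassSchedule {
    public static void main(String[] args) {
        // Replace the meta-optimizer's schedule with an explicit, ordered pass list.
        RewriterConfig config = RewriterConfig.newBuilder()
                .addOptimizers("constfold")
                .addOptimizers("memory")
                .addOptimizers("autoparallel")
                .build();

        System.out.println(config.getOptimizersCount()); // 3
        System.out.println(config.getOptimizers(0));     // constfold
    }
}
```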
-
getCustomOptimizersList
List<RewriterConfig.CustomGraphOptimizer> getCustomOptimizersList()
List of CustomGraphOptimizers to apply.
repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
-
getCustomOptimizers
RewriterConfig.CustomGraphOptimizer getCustomOptimizers(int index)
List of CustomGraphOptimizers to apply.
repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
-
getCustomOptimizersCount
int getCustomOptimizersCount()
List of CustomGraphOptimizers to apply.
repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
-
getCustomOptimizersOrBuilderList
List<? extends RewriterConfig.CustomGraphOptimizerOrBuilder> getCustomOptimizersOrBuilderList()
List of CustomGraphOptimizers to apply.
repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
-
getCustomOptimizersOrBuilder
RewriterConfig.CustomGraphOptimizerOrBuilder getCustomOptimizersOrBuilder(int index)
List of CustomGraphOptimizers to apply.
repeated .tensorflow.RewriterConfig.CustomGraphOptimizer custom_optimizers = 200;
-
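Repeated message fields like custom_optimizers add indexed and OrBuilder accessors on top of the repeated-field pattern. A sketch, with the same classpath assumption; "MyGraphOptimizer" is a hypothetical registered optimizer name used purely for illustration:

```java
import org.tensorflow.framework.RewriterConfig;

public class CustomOptimizerConfig {
    public static void main(String[] args) {
        RewriterConfig config = RewriterConfig.newBuilder()
                .addCustomOptimizers(
                        RewriterConfig.CustomGraphOptimizer.newBuilder()
                                .setName("MyGraphOptimizer")) // hypothetical plugin name
                .build();

        System.out.println(config.getCustomOptimizersCount());       // 1
        System.out.println(config.getCustomOptimizers(0).getName()); // MyGraphOptimizer
    }
}
```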
hasInterOptimizerVerifierConfig
boolean hasInterOptimizerVerifierConfig()
VerifierConfig specifying the verifiers to be run after every optimizer.
.tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
- Returns: Whether the interOptimizerVerifierConfig field is set.
-
getInterOptimizerVerifierConfig
VerifierConfig getInterOptimizerVerifierConfig()
VerifierConfig specifying the verifiers to be run after every optimizer.
.tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
- Returns: The interOptimizerVerifierConfig.
-
getInterOptimizerVerifierConfigOrBuilder
VerifierConfigOrBuilder getInterOptimizerVerifierConfigOrBuilder()
VerifierConfig specifying the verifiers to be run after every optimizer.
.tensorflow.VerifierConfig inter_optimizer_verifier_config = 300;
-
hasPostOptimizationVerifierConfig
boolean hasPostOptimizationVerifierConfig()
VerifierConfig specifying the verifiers to be run at the end, after all optimizers have run.
.tensorflow.VerifierConfig post_optimization_verifier_config = 301;
- Returns: Whether the postOptimizationVerifierConfig field is set.
-
getPostOptimizationVerifierConfig
VerifierConfig getPostOptimizationVerifierConfig()
VerifierConfig specifying the verifiers to be run at the end, after all optimizers have run.
.tensorflow.VerifierConfig post_optimization_verifier_config = 301;
- Returns: The postOptimizationVerifierConfig.
-
getPostOptimizationVerifierConfigOrBuilder
VerifierConfigOrBuilder getPostOptimizationVerifierConfigOrBuilder()
VerifierConfig specifying the verifiers to be run at the end, after all optimizers have run.
.tensorflow.VerifierConfig post_optimization_verifier_config = 301;
-