Uses of Class
org.tensorflow.framework.GPUOptions.Experimental.Builder
Packages that use GPUOptions.Experimental.Builder
org.tensorflow.framework
Uses of GPUOptions.Experimental.Builder in org.tensorflow.framework
Methods in org.tensorflow.framework that return GPUOptions.Experimental.Builder

GPUOptions.Experimental.Builder.addAllVirtualDevices(Iterable<? extends GPUOptions.Experimental.VirtualDevices> values)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions.Experimental.Builder.addVirtualDevices(int index, GPUOptions.Experimental.VirtualDevices value)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.addVirtualDevices(int index, GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.addVirtualDevices(GPUOptions.Experimental.VirtualDevices value)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.addVirtualDevices(GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.clear()
GPUOptions.Experimental.Builder.clearCollectiveRingOrder()
    If non-empty, defines a good GPU ring order on a single worker based on device interconnect.
GPUOptions.Experimental.Builder.clearDisallowRetryOnAllocationFailure()
    By default, BFCAllocator may sleep when it runs out of memory, in the hopes that another thread will free up memory in the meantime.
GPUOptions.Experimental.Builder.clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
GPUOptions.Experimental.Builder.clearGpuHostMemDisallowGrowth()
    If true, then the host allocator allocates its max memory all upfront and never grows.
GPUOptions.Experimental.Builder.clearGpuHostMemLimitInMb()
    Memory limit for the "GPU host allocator", aka pinned memory allocator.
GPUOptions.Experimental.Builder.clearGpuSystemMemorySizeInMb()
    Memory limit for the GPU system.
GPUOptions.Experimental.Builder.clearInternalFragmentationFraction()
    BFC Allocator can return an allocated chunk of memory up to 2x the requested size.
GPUOptions.Experimental.Builder.clearKernelTrackerMaxBytes()
    If kernel_tracker_max_bytes = n > 0, then a tracking event is inserted after every series of kernels allocating a sum of memory >= n.
GPUOptions.Experimental.Builder.clearKernelTrackerMaxInterval()
    Parameters for GPUKernelTracker.
GPUOptions.Experimental.Builder.clearKernelTrackerMaxPending()
    If kernel_tracker_max_pending > 0 then no more than this many tracking events can be outstanding at a time.
GPUOptions.Experimental.Builder.clearNodeId()
    node_id for use when creating a PjRt GPU client with remote devices, which enumerates jobs*tasks from a ServerDef.
GPUOptions.Experimental.Builder.clearNumDevToDevCopyStreams()
    If > 1, the number of device-to-device copy streams to create for each GPUDevice.
GPUOptions.Experimental.Builder.clearNumVirtualDevicesPerGpu()
    The number of virtual devices to create on each visible GPU.
GPUOptions.Experimental.Builder.clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
GPUOptions.Experimental.Builder.clearPopulatePjrtGpuClientCreationInfo()
    If true, save the information needed to create a PjRt GPU client for creating a client with remote devices.
GPUOptions.Experimental.Builder.clearStreamMergeOptions()
    .tensorflow.GPUOptions.Experimental.StreamMergeOptions stream_merge_options = 19;
GPUOptions.Experimental.Builder.clearTimestampedAllocator()
    If true, then extra work is done by GPUDevice and GPUBFCAllocator to keep track of when GPU memory is freed and when kernels actually complete, so that we can know when a nominally free memory chunk is really not subject to pending use.
GPUOptions.Experimental.Builder.clearUseCudaMallocAsync()
    When true, use the CUDA cudaMallocAsync API instead of the TF GPU allocator.
GPUOptions.Experimental.Builder.clearUseUnifiedMemory()
    If true, uses CUDA unified memory for memory allocations.
GPUOptions.Experimental.Builder.clearVirtualDevices()
    The multi virtual device settings.
GPUOptions.Experimental.Builder.clone()
GPUOptions.Builder.getExperimentalBuilder()
    Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
GPUOptions.Experimental.Builder.mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
GPUOptions.Experimental.Builder.mergeFrom(com.google.protobuf.Message other)
GPUOptions.Experimental.Builder.mergeFrom(GPUOptions.Experimental other)
GPUOptions.Experimental.Builder.mergeStreamMergeOptions(GPUOptions.Experimental.StreamMergeOptions value)
    .tensorflow.GPUOptions.Experimental.StreamMergeOptions stream_merge_options = 19;
GPUOptions.Experimental.Builder.mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Experimental.newBuilder()
GPUOptions.Experimental.newBuilder(GPUOptions.Experimental prototype)
GPUOptions.Experimental.newBuilderForType()
protected GPUOptions.Experimental.newBuilderForType(com.google.protobuf.GeneratedMessageV3.BuilderParent parent)
GPUOptions.Experimental.Builder.removeVirtualDevices(int index)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.setCollectiveRingOrder(String value)
    If non-empty, defines a good GPU ring order on a single worker based on device interconnect.
GPUOptions.Experimental.Builder.setCollectiveRingOrderBytes(com.google.protobuf.ByteString value)
    If non-empty, defines a good GPU ring order on a single worker based on device interconnect.
GPUOptions.Experimental.Builder.setDisallowRetryOnAllocationFailure(boolean value)
    By default, BFCAllocator may sleep when it runs out of memory, in the hopes that another thread will free up memory in the meantime.
GPUOptions.Experimental.Builder.setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions.Experimental.Builder.setGpuHostMemDisallowGrowth(boolean value)
    If true, then the host allocator allocates its max memory all upfront and never grows.
GPUOptions.Experimental.Builder.setGpuHostMemLimitInMb(float value)
    Memory limit for the "GPU host allocator", aka pinned memory allocator.
GPUOptions.Experimental.Builder.setGpuSystemMemorySizeInMb(int value)
    Memory limit for the GPU system.
GPUOptions.Experimental.Builder.setInternalFragmentationFraction(double value)
    BFC Allocator can return an allocated chunk of memory up to 2x the requested size.
GPUOptions.Experimental.Builder.setKernelTrackerMaxBytes(int value)
    If kernel_tracker_max_bytes = n > 0, then a tracking event is inserted after every series of kernels allocating a sum of memory >= n.
GPUOptions.Experimental.Builder.setKernelTrackerMaxInterval(int value)
    Parameters for GPUKernelTracker.
GPUOptions.Experimental.Builder.setKernelTrackerMaxPending(int value)
    If kernel_tracker_max_pending > 0 then no more than this many tracking events can be outstanding at a time.
GPUOptions.Experimental.Builder.setNodeId(int value)
    node_id for use when creating a PjRt GPU client with remote devices, which enumerates jobs*tasks from a ServerDef.
GPUOptions.Experimental.Builder.setNumDevToDevCopyStreams(int value)
    If > 1, the number of device-to-device copy streams to create for each GPUDevice.
GPUOptions.Experimental.Builder.setNumVirtualDevicesPerGpu(int value)
    The number of virtual devices to create on each visible GPU.
GPUOptions.Experimental.Builder.setPopulatePjrtGpuClientCreationInfo(boolean value)
    If true, save the information needed to create a PjRt GPU client for creating a client with remote devices.
GPUOptions.Experimental.Builder.setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
GPUOptions.Experimental.Builder.setStreamMergeOptions(GPUOptions.Experimental.StreamMergeOptions value)
    .tensorflow.GPUOptions.Experimental.StreamMergeOptions stream_merge_options = 19;
GPUOptions.Experimental.Builder.setStreamMergeOptions(GPUOptions.Experimental.StreamMergeOptions.Builder builderForValue)
    .tensorflow.GPUOptions.Experimental.StreamMergeOptions stream_merge_options = 19;
GPUOptions.Experimental.Builder.setTimestampedAllocator(boolean value)
    If true, then extra work is done by GPUDevice and GPUBFCAllocator to keep track of when GPU memory is freed and when kernels actually complete, so that we can know when a nominally free memory chunk is really not subject to pending use.
GPUOptions.Experimental.Builder.setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Experimental.Builder.setUseCudaMallocAsync(boolean value)
    When true, use the CUDA cudaMallocAsync API instead of the TF GPU allocator.
GPUOptions.Experimental.Builder.setUseUnifiedMemory(boolean value)
    If true, uses CUDA unified memory for memory allocations.
GPUOptions.Experimental.Builder.setVirtualDevices(int index, GPUOptions.Experimental.VirtualDevices value)
    The multi virtual device settings.
GPUOptions.Experimental.Builder.setVirtualDevices(int index, GPUOptions.Experimental.VirtualDevices.Builder builderForValue)
    The multi virtual device settings.
GPUOptions.Experimental.toBuilder()

Methods in org.tensorflow.framework with parameters of type GPUOptions.Experimental.Builder

GPUOptions.Builder.setExperimental(GPUOptions.Experimental.Builder builderForValue)
    Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
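All of these setters return the builder itself, so they chain in the usual protobuf style. The snippet below is a minimal sketch, assuming the org.tensorflow.framework protobuf bindings are on the classpath; the field values are arbitrary illustration values, not recommendations.

    import org.tensorflow.framework.GPUOptions;

    public class GpuOptionsBuilderExample {
      public static void main(String[] args) {
        // Configure the experimental GPU options through the nested builder.
        // Every setter listed above returns GPUOptions.Experimental.Builder, so the calls chain.
        GPUOptions.Experimental.Builder experimental =
            GPUOptions.Experimental.newBuilder()
                .setUseUnifiedMemory(true)        // use CUDA unified memory for allocations
                .setGpuHostMemLimitInMb(1024.0f)  // pinned "GPU host allocator" limit (illustrative value)
                .setNumDevToDevCopyStreams(2)     // copy streams per GPUDevice (illustrative value)
                .addVirtualDevices(GPUOptions.Experimental.VirtualDevices.newBuilder());

        // Attach the experimental block to GPUOptions via setExperimental(Builder),
        // the method listed in the "parameters of type" table above.
        GPUOptions gpuOptions = GPUOptions.newBuilder()
            .setExperimental(experimental)
            .build();

        System.out.println(gpuOptions);
      }
    }

Equivalently, GPUOptions.Builder.getExperimentalBuilder() returns the nested builder already attached to the parent GPUOptions.Builder, which avoids the explicit setExperimental call.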