Uses of Class
tensorflow.serving.Inference.MultiInferenceRequest
Packages that use Inference.MultiInferenceRequest
- tensorflow.serving
Uses of Inference.MultiInferenceRequest in tensorflow.serving
Methods in tensorflow.serving that return Inference.MultiInferenceRequest
- Inference.MultiInferenceRequest.Builder.build()
- Inference.MultiInferenceRequest.Builder.buildPartial()
- Inference.MultiInferenceRequest.getDefaultInstance()
- Inference.MultiInferenceRequest.Builder.getDefaultInstanceForType()
- Inference.MultiInferenceRequest.getDefaultInstanceForType()
- PredictionLogOuterClass.MultiInferenceLog.Builder.getRequest()
  .tensorflow.serving.MultiInferenceRequest request = 1;
- PredictionLogOuterClass.MultiInferenceLog.getRequest()
  .tensorflow.serving.MultiInferenceRequest request = 1;
- PredictionLogOuterClass.MultiInferenceLogOrBuilder.getRequest()
  .tensorflow.serving.MultiInferenceRequest request = 1;
- Inference.MultiInferenceRequest.parseDelimitedFrom(InputStream input)
- Inference.MultiInferenceRequest.parseDelimitedFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- Inference.MultiInferenceRequest.parseFrom(byte[] data)
- Inference.MultiInferenceRequest.parseFrom(byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- Inference.MultiInferenceRequest.parseFrom(com.google.protobuf.ByteString data)
- Inference.MultiInferenceRequest.parseFrom(com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- Inference.MultiInferenceRequest.parseFrom(com.google.protobuf.CodedInputStream input)
- Inference.MultiInferenceRequest.parseFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- Inference.MultiInferenceRequest.parseFrom(InputStream input)
- Inference.MultiInferenceRequest.parseFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
- Inference.MultiInferenceRequest.parseFrom(ByteBuffer data)
- Inference.MultiInferenceRequest.parseFrom(ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)

Methods in tensorflow.serving that return types with arguments of type Inference.MultiInferenceRequest
- static io.grpc.MethodDescriptor<Inference.MultiInferenceRequest, Inference.MultiInferenceResponse> PredictionServiceGrpc.getMultiInferenceMethod()
- com.google.protobuf.Parser<Inference.MultiInferenceRequest> Inference.MultiInferenceRequest.getParserForType()
- static com.google.protobuf.Parser<Inference.MultiInferenceRequest> Inference.MultiInferenceRequest.parser()

Methods in tensorflow.serving with parameters of type Inference.MultiInferenceRequest
- Inference.MultiInferenceRequest.Builder.mergeFrom(Inference.MultiInferenceRequest other)
- PredictionLogOuterClass.MultiInferenceLog.Builder.mergeRequest(Inference.MultiInferenceRequest value)
  .tensorflow.serving.MultiInferenceRequest request = 1;
- default void PredictionServiceGrpc.AsyncService.multiInference(Inference.MultiInferenceRequest request, io.grpc.stub.StreamObserver<Inference.MultiInferenceResponse> responseObserver)
  MultiInference API for multi-headed models.
- PredictionServiceGrpc.PredictionServiceBlockingStub.multiInference(Inference.MultiInferenceRequest request)
  MultiInference API for multi-headed models.
- PredictionServiceGrpc.PredictionServiceBlockingV2Stub.multiInference(Inference.MultiInferenceRequest request)
  MultiInference API for multi-headed models.
- com.google.common.util.concurrent.ListenableFuture<Inference.MultiInferenceResponse> PredictionServiceGrpc.PredictionServiceFutureStub.multiInference(Inference.MultiInferenceRequest request)
  MultiInference API for multi-headed models.
- void PredictionServiceGrpc.PredictionServiceStub.multiInference(Inference.MultiInferenceRequest request, io.grpc.stub.StreamObserver<Inference.MultiInferenceResponse> responseObserver)
  MultiInference API for multi-headed models.
- Inference.MultiInferenceRequest.newBuilder(Inference.MultiInferenceRequest prototype)
- PredictionLogOuterClass.MultiInferenceLog.Builder.setRequest(Inference.MultiInferenceRequest value)
  .tensorflow.serving.MultiInferenceRequest request = 1;
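The methods listed above combine in the usual generated-protobuf and gRPC pattern: build a request with newBuilder()/build(), optionally round-trip it through bytes with parseFrom, and pass it to multiInference on a stub. The sketch below illustrates that flow; the host, port, and empty request payload are assumptions for illustration (a real request would populate its inference tasks and input), and it requires the tensorflow-serving client protos, grpc-java, and protobuf-java on the classpath.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import tensorflow.serving.Inference;
import tensorflow.serving.PredictionServiceGrpc;

public class MultiInferenceExample {
  public static void main(String[] args) throws Exception {
    // Build a request with the generated builder (newBuilder()/build() above).
    // Left empty here for illustration; populate tasks and input in real use.
    Inference.MultiInferenceRequest request =
        Inference.MultiInferenceRequest.newBuilder().build();

    // Round-trip through bytes using the parseFrom(byte[]) overload above.
    byte[] wire = request.toByteArray();
    Inference.MultiInferenceRequest parsed =
        Inference.MultiInferenceRequest.parseFrom(wire);

    // Assumed TensorFlow Serving gRPC endpoint; adjust for your deployment.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 8500)
        .usePlaintext()
        .build();
    try {
      // Call the MultiInference RPC through the blocking stub listed above.
      PredictionServiceGrpc.PredictionServiceBlockingStub stub =
          PredictionServiceGrpc.newBlockingStub(channel);
      Inference.MultiInferenceResponse response = stub.multiInference(parsed);
      System.out.println(response);
    } finally {
      channel.shutdownNow();
    }
  }
}
```

The future and async stubs shown in the table follow the same shape: PredictionServiceFutureStub returns a ListenableFuture of the response, and PredictionServiceStub delivers it to a StreamObserver instead of blocking.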