Package tensorflow.serving
package tensorflow.serving
Classes and descriptions:
- A single class.
- Protobuf type tensorflow.serving.ClassificationRequest (a construction sketch follows the class listing)
- Protobuf type tensorflow.serving.ClassificationResponse
- Contains one result per input example, in the same order as the input in ClassificationRequest.
- List of classes for a single item (tensorflow.Example).
- Config proto for FileSystemStoragePathSource.
- A servable name and base path to look for versions of the servable.
- A policy that dictates which version(s) of a servable should be served.
- Serve all versions found on disk.
- Serve the latest versions (i.e. the ones with the highest version numbers), among those found on disk.
- FileSystemStoragePathSource.FileSystemStoragePathSourceConfig.ServableVersionPolicy.PolicyChoiceCase
- FileSystemStoragePathSource.FileSystemStoragePathSourceConfig.ServableVersionPolicy.Specific (and its Builder): serve a specific version (or set of versions).
- Protobuf type tensorflow.serving.GetModelMetadataRequest
- Protobuf type tensorflow.serving.GetModelMetadataResponse
- Message returned for the "signature_def" field.
- GetModelStatusRequest contains a ModelSpec indicating the model for which to get status.
- Response for ModelStatusRequest on successful run.
- Version number, state, and status for a single version of a model.
- States that map to the ManagerState enum in tensorflow_serving/core/servable_state.h.
- Inference result; matches the type of request or is an error.
- Inference request such as classification, regression, etc.
- Inference request containing one or more requests.
- Inference response containing one or more responses.
- Specifies one or more fully independent input Examples.
- Specifies one or more independent input Examples, with a common context Example.
- Protobuf type tensorflow.serving.Input
- Protobuf type tensorflow.serving.LogCollectorConfig
- Metadata logged along with the request logs.
- Configuration for logging query/responses.
- Protobuf type tensorflow.serving.SamplingConfig
- Attributes of requests that can be optionally sampled.
- Metadata for an inference request such as the model name and version.
- Protobuf type tensorflow.serving.Metric
- Protobuf type tensorflow.serving.ReloadConfigRequest
- Protobuf type tensorflow.serving.ReloadConfigResponse
- Common configuration for loading a model being served.
- Static list of models to be loaded for serving.
- ModelServer config.
- The type of model.
- ModelService provides methods to query and update the state of the server, e.g. which models/versions are being served.
- A stub to allow clients to do limited synchronous rpc calls to service ModelService.
- A stub to allow clients to do synchronous rpc calls to service ModelService.
- A stub to allow clients to do ListenableFuture-style rpc calls to service ModelService.
- Base class for the server implementation of the service ModelService.
- A stub to allow clients to do asynchronous rpc calls to service ModelService.
- Configuration for monitoring.
- Configuration for Prometheus monitoring.
- Configuration for a servable platform, e.g. tensorflow or other ML systems.
- Protobuf type tensorflow.serving.PlatformConfigMap
- PredictRequest specifies which TensorFlow model to run, as well as how inputs are mapped to tensors and how outputs are filtered before returning to the user (see the Predict call sketch after the listing).
- Options for PredictRequest.
- Deterministic mode for the request.
- Response for PredictRequest on successful run.
- Options only used for streaming requests that control how inputs/outputs are handled in the stream.
- Protobuf enum tensorflow.serving.PredictStreamedOptions.RequestState
- Protobuf type tensorflow.serving.ClassifyLog
- Protobuf type tensorflow.serving.MultiInferenceLog
- Logged model inference request.
- Protobuf type tensorflow.serving.PredictLog
- Protobuf type tensorflow.serving.PredictStreamedLog
- Protobuf type tensorflow.serving.RegressLog
- Protobuf type tensorflow.serving.SessionRunLog
- PredictionService provides access to machine-learned models loaded by model_servers.
- A stub to allow clients to do limited synchronous rpc calls to service PredictionService.
- A stub to allow clients to do synchronous rpc calls to service PredictionService.
- A stub to allow clients to do ListenableFuture-style rpc calls to service PredictionService.
- Base class for the server implementation of the service PredictionService.
- A stub to allow clients to do asynchronous rpc calls to service PredictionService.
- Regression result for a single item (tensorflow.Example).
- Protobuf type tensorflow.serving.RegressionRequest
- Protobuf type tensorflow.serving.RegressionResponse
- Contains one result per input example, in the same order as the input in RegressionRequest.
- SessionService defines a service with which a client can interact to execute TensorFlow model inference.
- A stub to allow clients to do limited synchronous rpc calls to service SessionService.
- A stub to allow clients to do synchronous rpc calls to service SessionService.
- A stub to allow clients to do ListenableFuture-style rpc calls to service SessionService.
- Base class for the server implementation of the service SessionService.
- A stub to allow clients to do asynchronous rpc calls to service SessionService (the stub flavors are illustrated after the listing).
- Protobuf type tensorflow.serving.SessionRunRequest
- Protobuf type tensorflow.serving.SessionRunResponse
- Configuration for a secure gRPC channel.
- Status that corresponds to Status in third_party/tensorflow/core/lib/core/status.h.
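The classification path listed above pairs a ModelSpec with an Input whose ExampleList carries independent tensorflow.Example protos. Below is a minimal construction sketch; it assumes the generated classes follow the default protobuf Java naming for this package (outer classes such as Classification, InputOuterClass, and Model, plus the org.tensorflow.example protos), which may differ depending on how the protos were compiled. The model name, signature name, and feature name are placeholders.

import org.tensorflow.example.Example;
import org.tensorflow.example.Feature;
import org.tensorflow.example.Features;
import org.tensorflow.example.FloatList;
import tensorflow.serving.Classification;
import tensorflow.serving.InputOuterClass;
import tensorflow.serving.Model;

public class ClassificationRequestSketch {
  public static Classification.ClassificationRequest buildRequest() {
    // A single tf.Example with one float feature ("x" is a hypothetical feature name).
    Example example = Example.newBuilder()
        .setFeatures(Features.newBuilder()
            .putFeature("x", Feature.newBuilder()
                .setFloatList(FloatList.newBuilder().addValue(1.0f))
                .build()))
        .build();

    // Input specifying one or more fully independent input Examples (ExampleList).
    InputOuterClass.Input input = InputOuterClass.Input.newBuilder()
        .setExampleList(InputOuterClass.ExampleList.newBuilder().addExamples(example))
        .build();

    // Metadata for the inference request: model name and (optionally) signature/version.
    Model.ModelSpec modelSpec = Model.ModelSpec.newBuilder()
        .setName("my_classifier")             // hypothetical model name
        .setSignatureName("serving_default")  // common default signature name
        .build();

    return Classification.ClassificationRequest.newBuilder()
        .setModelSpec(modelSpec)
        .setInput(input)
        .build();
  }
}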
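PredictRequest, listed above, names a model and maps input names to TensorProtos; the PredictionService blocking stub returns a PredictResponse containing the filtered outputs. The sketch below assumes the standard grpc-java generated PredictionServiceGrpc stubs, a Predict outer class for the request/response messages, and the org.tensorflow.framework tensor protos (class locations may differ by build); the host, port, model name, and tensor names are placeholders.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import tensorflow.serving.Model;
import tensorflow.serving.Predict;
import tensorflow.serving.PredictionServiceGrpc;

public class PredictSketch {
  public static void main(String[] args) {
    // Plaintext channel to a locally running model server (host and port are placeholders).
    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8500)
        .usePlaintext()
        .build();

    // "Limited synchronous" stub: one blocking call per rpc.
    PredictionServiceGrpc.PredictionServiceBlockingStub stub =
        PredictionServiceGrpc.newBlockingStub(channel);

    // A 1x3 float tensor for the hypothetical input name "inputs".
    TensorProto inputTensor = TensorProto.newBuilder()
        .setDtype(DataType.DT_FLOAT)
        .setTensorShape(TensorShapeProto.newBuilder()
            .addDim(TensorShapeProto.Dim.newBuilder().setSize(1))
            .addDim(TensorShapeProto.Dim.newBuilder().setSize(3)))
        .addFloatVal(1.0f).addFloatVal(2.0f).addFloatVal(3.0f)
        .build();

    Predict.PredictRequest request = Predict.PredictRequest.newBuilder()
        .setModelSpec(Model.ModelSpec.newBuilder()
            .setName("my_model")                  // hypothetical model name
            .setSignatureName("serving_default"))
        .putInputs("inputs", inputTensor)
        .build();

    // Blocking Predict call; outputs come back as a name -> TensorProto map.
    Predict.PredictResponse response = stub.predict(request);
    response.getOutputsMap().forEach((name, tensor) ->
        System.out.println(name + ": " + tensor.getFloatValList()));

    channel.shutdownNow();
  }
}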
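The limited-synchronous, synchronous, ListenableFuture-style, and asynchronous stubs listed for ModelService, PredictionService, and SessionService all come from the corresponding generated *Grpc class. A sketch of the factory pattern using SessionServiceGrpc, assuming the standard grpc-java codegen (the address is a placeholder):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import tensorflow.serving.SessionServiceGrpc;

public class StubFlavorsSketch {
  public static void main(String[] args) {
    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8500)
        .usePlaintext()
        .build();

    // Synchronous (blocking) calls.
    SessionServiceGrpc.SessionServiceBlockingStub blocking =
        SessionServiceGrpc.newBlockingStub(channel);

    // ListenableFuture-style calls.
    SessionServiceGrpc.SessionServiceFutureStub future =
        SessionServiceGrpc.newFutureStub(channel);

    // Asynchronous calls driven by StreamObserver callbacks.
    SessionServiceGrpc.SessionServiceStub async =
        SessionServiceGrpc.newStub(channel);

    // Each stub exposes the same rpcs (e.g. SessionRun) in the call style of its
    // flavor; requests are built from the generated protobuf classes in this package.
    channel.shutdownNow();
  }
}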