Class TrainedModelDeploymentNodesStats
java.lang.Object
co.elastic.clients.elasticsearch.ml.TrainedModelDeploymentNodesStats
- All Implemented Interfaces:
JsonpSerializable
@JsonpDeserializable
public class TrainedModelDeploymentNodesStats
extends Object
implements JsonpSerializable
Nested Class Summary
Nested Classes
static class TrainedModelDeploymentNodesStats.Builder
    Builder for TrainedModelDeploymentNodesStats.
Field Summary
Fields
static final JsonpDeserializer<TrainedModelDeploymentNodesStats> _DESERIALIZER
    Json deserializer for TrainedModelDeploymentNodesStats
-
Method Summary
final Double averageInferenceTimeMs()
    The average time for each inference call to complete on this node.
final Double averageInferenceTimeMsExcludingCacheHits()
    The average time for each inference call to complete on this node, excluding cache hits.
final Double averageInferenceTimeMsLastMinute()
    API name: average_inference_time_ms_last_minute
final Integer errorCount()
    The number of errors when evaluating the trained model.
final Long inferenceCacheHitCount()
    API name: inference_cache_hit_count
final Long inferenceCacheHitCountLastMinute()
    API name: inference_cache_hit_count_last_minute
final Long inferenceCount()
    The total number of inference calls made against this node for this model.
final Long lastAccess()
    The epoch timestamp of the last inference call for the model on this node.
final DiscoveryNodeContent node()
    Information pertaining to the node.
final Integer numberOfAllocations()
    The number of allocations assigned to this node.
final Integer numberOfPendingRequests()
    The number of inference requests queued to be processed.
static TrainedModelDeploymentNodesStats of(Function<TrainedModelDeploymentNodesStats.Builder, ObjectBuilder<TrainedModelDeploymentNodesStats>> fn)
final long peakThroughputPerMinute()
    Required - API name: peak_throughput_per_minute
final Integer rejectedExecutionCount()
    The number of inference requests that were not processed because the queue was full.
routingState()
    Required - The current routing state and reason for the current routing state for this allocation.
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
    Serialize this object to JSON.
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static void setupTrainedModelDeploymentNodesStatsDeserializer(ObjectDeserializer<TrainedModelDeploymentNodesStats.Builder> op)
final Long startTime()
    The epoch timestamp when the allocation started.
final Integer threadsPerAllocation()
    The number of threads used by each allocation during inference.
final int throughputLastMinute()
    Required - API name: throughput_last_minute
final Integer timeoutCount()
    The number of inference requests that timed out before being processed.
String toString()
-
Field Details
-
_DESERIALIZER
public static final JsonpDeserializer<TrainedModelDeploymentNodesStats> _DESERIALIZER
Json deserializer for TrainedModelDeploymentNodesStats
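As a sketch of how the _DESERIALIZER field can be used to parse a JSON payload directly (this assumes the elasticsearch-java client with its Jackson-backed JsonpMapper on the classpath; the sample JSON and the shape of the routing_state object are illustrative, not taken from this page):

```java
import java.io.StringReader;

import co.elastic.clients.elasticsearch.ml.TrainedModelDeploymentNodesStats;
import co.elastic.clients.json.JsonpMapper;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;

public class DeserializeExample {
    public static void main(String[] args) {
        // Minimal JSON carrying the fields this page marks as Required;
        // the inner shape of routing_state is an assumption.
        String json = "{\"peak_throughput_per_minute\": 120,"
            + " \"throughput_last_minute\": 40,"
            + " \"routing_state\": {\"routing_state\": \"started\"}}";

        JsonpMapper mapper = new JacksonJsonpMapper();
        try (jakarta.json.stream.JsonParser parser =
                mapper.jsonProvider().createParser(new StringReader(json))) {
            TrainedModelDeploymentNodesStats stats =
                TrainedModelDeploymentNodesStats._DESERIALIZER.deserialize(parser, mapper);
            // Getters documented below expose the parsed values.
            System.out.println(stats.peakThroughputPerMinute());
        }
    }
}
```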
-
-
Method Details
-
of
public static TrainedModelDeploymentNodesStats of(Function<TrainedModelDeploymentNodesStats.Builder, ObjectBuilder<TrainedModelDeploymentNodesStats>> fn)
-
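The of shortcut follows the client's usual builder-lambda pattern. A minimal sketch, assuming the Builder exposes setters mirroring the getters on this page (the setter names, and the nested routingState builder shape, are assumptions):

```java
import co.elastic.clients.elasticsearch.ml.TrainedModelDeploymentNodesStats;

public class OfExample {
    public static void main(String[] args) {
        TrainedModelDeploymentNodesStats stats = TrainedModelDeploymentNodesStats.of(b -> b
            .peakThroughputPerMinute(120L)           // Required per this page
            .throughputLastMinute(40)                // Required per this page
            // routing_state is also Required; its builder shape is assumed here.
            .routingState(r -> r.routingState(co.elastic.clients.elasticsearch.ml.RoutingState.Started))
            .inferenceCount(1000L)
            .averageInferenceTimeMs(12.5)
        );
        System.out.println(stats.inferenceCount());
    }
}
```

Omitting a Required property would make the builder throw at build time, so the lambda must set all three required fields.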
averageInferenceTimeMs
public final Double averageInferenceTimeMs()
The average time for each inference call to complete on this node.
API name: average_inference_time_ms
-
averageInferenceTimeMsLastMinute
public final Double averageInferenceTimeMsLastMinute()
API name: average_inference_time_ms_last_minute
-
averageInferenceTimeMsExcludingCacheHits
public final Double averageInferenceTimeMsExcludingCacheHits()
The average time for each inference call to complete on this node, excluding cache hits.
API name: average_inference_time_ms_excluding_cache_hits
-
errorCount
public final Integer errorCount()
The number of errors when evaluating the trained model.
API name: error_count
-
inferenceCount
public final Long inferenceCount()
The total number of inference calls made against this node for this model.
API name: inference_count
-
inferenceCacheHitCount
public final Long inferenceCacheHitCount()
API name: inference_cache_hit_count
-
inferenceCacheHitCountLastMinute
public final Long inferenceCacheHitCountLastMinute()
API name: inference_cache_hit_count_last_minute
-
lastAccess
public final Long lastAccess()
The epoch timestamp of the last inference call for the model on this node.
API name: last_access
-
node
public final DiscoveryNodeContent node()
Information pertaining to the node.
API name: node
-
numberOfAllocations
public final Integer numberOfAllocations()
The number of allocations assigned to this node.
API name: number_of_allocations
-
numberOfPendingRequests
public final Integer numberOfPendingRequests()
The number of inference requests queued to be processed.
API name: number_of_pending_requests
-
peakThroughputPerMinute
public final long peakThroughputPerMinute()
Required - API name: peak_throughput_per_minute
-
rejectedExecutionCount
public final Integer rejectedExecutionCount()
The number of inference requests that were not processed because the queue was full.
API name: rejected_execution_count
-
routingState
Required - The current routing state and reason for the current routing state for this allocation.
API name: routing_state
-
startTime
public final Long startTime()
The epoch timestamp when the allocation started.
API name: start_time
-
threadsPerAllocation
public final Integer threadsPerAllocation()
The number of threads used by each allocation during inference.
API name: threads_per_allocation
-
throughputLastMinute
public final int throughputLastMinute()
Required - API name: throughput_last_minute
-
timeoutCount
public final Integer timeoutCount()
The number of inference requests that timed out before being processed.
API name: timeout_count
-
serialize
public void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
Serialize this object to JSON.
Specified by: serialize in interface JsonpSerializable
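Because the class implements JsonpSerializable, an instance can be written back to JSON through any jakarta.json JsonGenerator. A sketch assuming the Jackson-backed mapper from the elasticsearch-java client (the helper method name is illustrative):

```java
import java.io.StringWriter;

import co.elastic.clients.elasticsearch.ml.TrainedModelDeploymentNodesStats;
import co.elastic.clients.json.JsonpMapper;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;

public class SerializeExample {
    // Hypothetical helper: renders a stats object to a JSON string.
    static String toJson(TrainedModelDeploymentNodesStats stats) {
        JsonpMapper mapper = new JacksonJsonpMapper();
        StringWriter writer = new StringWriter();
        try (jakarta.json.stream.JsonGenerator generator =
                mapper.jsonProvider().createGenerator(writer)) {
            stats.serialize(generator, mapper);
        }
        // Keys in the output use the snake_case API names listed on this page,
        // e.g. "inference_count" and "peak_throughput_per_minute".
        return writer.toString();
    }
}
```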
-
serializeInternal
-
toString
-
setupTrainedModelDeploymentNodesStatsDeserializer
protected static void setupTrainedModelDeploymentNodesStatsDeserializer(ObjectDeserializer<TrainedModelDeploymentNodesStats.Builder> op)
-