Class ElserServiceSettings
java.lang.Object
co.elastic.clients.elasticsearch.inference.ElserServiceSettings
- All Implemented Interfaces:
JsonpSerializable
Nested Class Summary
Nested Classes
static class ElserServiceSettings.Builder
Builder for ElserServiceSettings.
Field Summary
Fields
static final JsonpDeserializer<ElserServiceSettings> _DESERIALIZER
Json deserializer for ElserServiceSettings
-
Method Summary
final AdaptiveAllocations adaptiveAllocations()
Adaptive allocations configuration details.
final int numAllocations()
Required - The total number of allocations this model is assigned across machine learning nodes.
final int numThreads()
Required - The number of threads used by each model allocation during inference.
static ElserServiceSettings of(Function<ElserServiceSettings.Builder, ObjectBuilder<ElserServiceSettings>> fn)
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
Serialize this object to JSON.
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static void setupElserServiceSettingsDeserializer(ObjectDeserializer<ElserServiceSettings.Builder> op)
String toString()
-
Field Details
-
_DESERIALIZER
public static final JsonpDeserializer<ElserServiceSettings> _DESERIALIZER
Json deserializer for ElserServiceSettings
-
Method Details
-
of
public static ElserServiceSettings of(Function<ElserServiceSettings.Builder, ObjectBuilder<ElserServiceSettings>> fn)
-
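The of(...) entry point follows the client's functional-builder idiom: the caller is handed a fresh Builder, configures it in a lambda, and of builds the result. A minimal self-contained sketch of that idiom, using simplified stand-in classes (Settings, Builder, ObjectBuilder) rather than the real client types:

```java
import java.util.function.Function;

// Simplified stand-in for the client's ObjectBuilder interface.
interface ObjectBuilder<T> { T build(); }

class Settings {
    final int numAllocations;
    final int numThreads;

    Settings(int numAllocations, int numThreads) {
        this.numAllocations = numAllocations;
        this.numThreads = numThreads;
    }

    static class Builder implements ObjectBuilder<Settings> {
        private int numAllocations;
        private int numThreads;
        Builder numAllocations(int v) { this.numAllocations = v; return this; }
        Builder numThreads(int v) { this.numThreads = v; return this; }
        public Settings build() { return new Settings(numAllocations, numThreads); }
    }

    // Mirrors static of(Function<Builder, ObjectBuilder<Settings>> fn):
    // apply the caller's lambda to a fresh Builder, then build the result.
    static Settings of(Function<Builder, ObjectBuilder<Settings>> fn) {
        return fn.apply(new Builder()).build();
    }
}
```

With the real client the call has the same shape, e.g. ElserServiceSettings.of(b -> b.numAllocations(1).numThreads(2)).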
adaptiveAllocations
public final AdaptiveAllocations adaptiveAllocations()
Adaptive allocations configuration details. If enabled is true, the number of allocations of the model is set based on the current load the process gets. When the load is high, a new model allocation is automatically created, respecting the value of max_number_of_allocations if it's set. When the load is low, a model allocation is automatically removed, respecting the value of min_number_of_allocations if it's set. If enabled is true, do not set the number of allocations manually.
API name: adaptive_allocations
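In serialized form these settings live under the adaptive_allocations key. An illustrative shape only, assuming the enabled, min_number_of_allocations, and max_number_of_allocations names described above:

```json
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
```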
-
numAllocations
public final int numAllocations()
Required - The total number of allocations this model is assigned across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations is enabled, do not set this value because it's automatically set.
API name: num_allocations
-
numThreads
public final int numThreads()
Required - The number of threads used by each model allocation during inference. Increasing this value generally increases the speed per inference request. The inference process is a compute-bound process; threads_per_allocations must not exceed the number of available allocated processors per node. The value must be a power of 2. The maximum value is 32.
Info: If you want to optimize your ELSER endpoint for ingest, set the number of threads to 1. If you want to optimize your ELSER endpoint for search, set the number of threads to greater than 1.
API name: num_threads
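The power-of-2 and maximum-of-32 constraints above can be checked up front. A hypothetical helper (not part of the client) that encodes exactly those documented rules:

```java
// Hypothetical validation helper; encodes the documented num_threads rules:
// at least 1, at most 32, and a power of 2.
class NumThreadsCheck {
    static boolean isValidNumThreads(int n) {
        // For n >= 1, (n & (n - 1)) == 0 holds exactly when n is a power of 2.
        return n >= 1 && n <= 32 && (n & (n - 1)) == 0;
    }
}
```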
-
serialize
public void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
Serialize this object to JSON.
Specified by: serialize in interface JsonpSerializable
-
serializeInternal
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
-
toString
public String toString()
-
setupElserServiceSettingsDeserializer
protected static void setupElserServiceSettingsDeserializer(ObjectDeserializer<ElserServiceSettings.Builder> op)
-