Class MistralServiceSettings
java.lang.Object
co.elastic.clients.elasticsearch.inference.MistralServiceSettings
- All Implemented Interfaces:
JsonpSerializable
@JsonpDeserializable
public class MistralServiceSettings
extends Object
implements JsonpSerializable
Nested Class Summary
Nested Classes:
static class MistralServiceSettings.Builder
Builder for MistralServiceSettings.
Field Summary
Fields
static final JsonpDeserializer<MistralServiceSettings> _DESERIALIZER
Json deserializer for MistralServiceSettings
-
Method Summary
static MistralServiceSettings of(Function<MistralServiceSettings.Builder, ObjectBuilder<MistralServiceSettings>> fn)
final String apiKey()
    Required - A valid API key of your Mistral account.
final Integer maxInputTokens()
    The maximum number of tokens per input before chunking occurs.
final String model()
    Required - The name of the model to use for the inference task.
final RateLimitSetting rateLimit()
    This setting helps to minimize the number of rate limit errors returned from the Mistral API.
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
    Serialize this object to JSON.
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static void setupMistralServiceSettingsDeserializer(ObjectDeserializer<MistralServiceSettings.Builder> op)
String toString()
-
Field Details
-
_DESERIALIZER
public static final JsonpDeserializer<MistralServiceSettings> _DESERIALIZER
Json deserializer for MistralServiceSettings
-
-
Method Details
-
of
public static MistralServiceSettings of(Function<MistralServiceSettings.Builder, ObjectBuilder<MistralServiceSettings>> fn)
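The of(...) factory follows the client's builder-lambda pattern: the caller configures a fresh Builder inside a lambda, and the factory calls build() on the result. The sketch below illustrates that shape with simplified stand-in types (Settings, a local ObjectBuilder interface) rather than the real co.elastic.clients classes, so it runs without the Elasticsearch client on the classpath.

```java
import java.util.function.Function;

// Simplified stand-in for co.elastic.clients.util.ObjectBuilder --
// illustrative only, not the real client interface.
interface ObjectBuilder<T> {
    T build();
}

// Stand-in for MistralServiceSettings with its two required fields.
class Settings {
    final String apiKey;
    final String model;

    Settings(String apiKey, String model) {
        this.apiKey = apiKey;
        this.model = model;
    }

    // Mirrors the of(Function<Builder, ObjectBuilder<T>>) factory shape:
    // hand the caller a fresh Builder, let the lambda configure it,
    // then build the immutable settings object.
    static Settings of(Function<Builder, ObjectBuilder<Settings>> fn) {
        return fn.apply(new Builder()).build();
    }

    static class Builder implements ObjectBuilder<Settings> {
        private String apiKey;
        private String model;

        Builder apiKey(String v) { this.apiKey = v; return this; }
        Builder model(String v)  { this.model = v;  return this; }

        @Override
        public Settings build() { return new Settings(apiKey, model); }
    }
}

public class OfPatternDemo {
    public static void main(String[] args) {
        // Same call shape as MistralServiceSettings.of(s -> s.apiKey(...).model(...))
        Settings s = Settings.of(b -> b.apiKey("secret").model("mistral-embed"));
        System.out.println(s.model); // prints mistral-embed
    }
}
```

With the real client, the lambda receives a MistralServiceSettings.Builder and the same chained setters (apiKey, model, maxInputTokens, rateLimit) apply.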
apiKey
Required - A valid API key of your Mistral account. You can find your Mistral API keys or you can create a new one on the API Keys page.
IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.
API name:
api_key
-
maxInputTokens
The maximum number of tokens per input before chunking occurs.
API name:
max_input_tokens
-
model
Required - The name of the model to use for the inference task. Refer to the Mistral models documentation for the list of available text embedding models.
API name:
model
-
rateLimit
This setting helps to minimize the number of rate limit errors returned from the Mistral API. By default, the mistral service sets the number of requests allowed per minute to 240.
API name:
rate_limit
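Taken together, the settings on this page map onto the service_settings object of the Elasticsearch create-inference request. A sketch of such a request body, assuming the standard PUT _inference/{task_type}/{id} endpoint and the requests_per_minute rate-limit key (both from the Elasticsearch inference docs, not this page; the model name and token limit are example values):

```json
PUT _inference/text_embedding/mistral-embeddings
{
  "service": "mistral",
  "service_settings": {
    "api_key": "<MISTRAL_API_KEY>",
    "model": "mistral-embed",
    "max_input_tokens": 512,
    "rate_limit": {
      "requests_per_minute": 240
    }
  }
}
```

MistralServiceSettings is the Java-client representation of that service_settings object.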
-
serialize
Serialize this object to JSON.
Specified by: serialize in interface JsonpSerializable
-
serializeInternal
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
-
toString
public String toString()
Overrides: toString in class Object
-
setupMistralServiceSettingsDeserializer
protected static void setupMistralServiceSettingsDeserializer(ObjectDeserializer<MistralServiceSettings.Builder> op)
-