Class MistralServiceSettings.Builder
java.lang.Object
co.elastic.clients.util.ObjectBuilderBase
co.elastic.clients.util.WithJsonObjectBuilderBase<MistralServiceSettings.Builder>
co.elastic.clients.elasticsearch.inference.MistralServiceSettings.Builder
- All Implemented Interfaces:
- WithJson<MistralServiceSettings.Builder>, ObjectBuilder<MistralServiceSettings>
- Enclosing class:
- MistralServiceSettings
public static class MistralServiceSettings.Builder
extends WithJsonObjectBuilderBase<MistralServiceSettings.Builder>
implements ObjectBuilder<MistralServiceSettings>
Builder for MistralServiceSettings.
Constructor Summary
Constructors
- Builder()

Method Summary
- final MistralServiceSettings.Builder apiKey(String value)
  Required - A valid API key of your Mistral account.
- MistralServiceSettings build()
  Builds a MistralServiceSettings.
- final MistralServiceSettings.Builder maxInputTokens(Integer value)
  The maximum number of tokens per input before chunking occurs.
- final MistralServiceSettings.Builder model(String value)
  Required - The name of the model to use for the inference task.
- final MistralServiceSettings.Builder rateLimit(RateLimitSetting value)
  This setting helps to minimize the number of rate limit errors returned from the Mistral API.
- final MistralServiceSettings.Builder rateLimit(Function<RateLimitSetting.Builder, ObjectBuilder<RateLimitSetting>> fn)
  This setting helps to minimize the number of rate limit errors returned from the Mistral API.
- protected MistralServiceSettings.Builder self()

Methods inherited from class co.elastic.clients.util.WithJsonObjectBuilderBase
- withJson

Methods inherited from class co.elastic.clients.util.ObjectBuilderBase
- _checkSingleUse, _listAdd, _listAddAll, _mapPut, _mapPutAll
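The inherited withJson method can populate the builder from a JSON representation of the settings. A minimal sketch, assuming the elasticsearch-java client is on the classpath; the key, model name, and the fromJson helper are illustrative, not part of the library:

```java
import java.io.StringReader;

import co.elastic.clients.elasticsearch.inference.MistralServiceSettings;

public class WithJsonSketch {
    // Hypothetical helper: populates the builder from a JSON string
    // via the inherited withJson(Reader) method, then builds.
    static MistralServiceSettings fromJson(String json) {
        return new MistralServiceSettings.Builder()
                .withJson(new StringReader(json))
                .build();
    }

    public static void main(String[] args) {
        MistralServiceSettings settings =
                fromJson("{\"api_key\": \"placeholder-key\", \"model\": \"mistral-embed\"}");
        System.out.println(settings.model());
    }
}
```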
- 
Constructor Details

- Builder
  public Builder()
 
- 
- 
Method Details

- apiKey
  public final MistralServiceSettings.Builder apiKey(String value)
  Required - A valid API key of your Mistral account. You can find your Mistral API keys or you can create a new one on the API Keys page.
  IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.
  API name: api_key
- 
- maxInputTokens
  public final MistralServiceSettings.Builder maxInputTokens(Integer value)
  The maximum number of tokens per input before chunking occurs.
  API name: max_input_tokens
- 
- model
  public final MistralServiceSettings.Builder model(String value)
  Required - The name of the model to use for the inference task. Refer to the Mistral models documentation for the list of available models.
  API name: model
- 
- rateLimit
  public final MistralServiceSettings.Builder rateLimit(RateLimitSetting value)
  This setting helps to minimize the number of rate limit errors returned from the Mistral API. By default, the mistral service sets the number of requests allowed per minute to 240.
  API name: rate_limit
- 
- rateLimit
  public final MistralServiceSettings.Builder rateLimit(Function<RateLimitSetting.Builder, ObjectBuilder<RateLimitSetting>> fn)
  This setting helps to minimize the number of rate limit errors returned from the Mistral API. By default, the mistral service sets the number of requests allowed per minute to 240.
  API name: rate_limit
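The two rateLimit overloads are equivalent; the Function variant just builds the RateLimitSetting inline. A sketch, assuming the elasticsearch-java client is on the classpath; the key, model name, and limit value are placeholders:

```java
import co.elastic.clients.elasticsearch.inference.MistralServiceSettings;
import co.elastic.clients.elasticsearch.inference.RateLimitSetting;

public class RateLimitSketch {
    // Overload 1: pass a pre-built RateLimitSetting.
    static MistralServiceSettings withPrebuiltLimit() {
        RateLimitSetting limit = new RateLimitSetting.Builder()
                .requestsPerMinute(100)
                .build();
        return new MistralServiceSettings.Builder()
                .apiKey("placeholder-key")
                .model("mistral-embed")
                .rateLimit(limit)
                .build();
    }

    // Overload 2: build the RateLimitSetting inline via the Function overload,
    // which hands a fresh RateLimitSetting.Builder to the lambda.
    static MistralServiceSettings withInlineLimit() {
        return new MistralServiceSettings.Builder()
                .apiKey("placeholder-key")
                .model("mistral-embed")
                .rateLimit(r -> r.requestsPerMinute(100))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(withPrebuiltLimit().rateLimit().requestsPerMinute());
        System.out.println(withInlineLimit().rateLimit().requestsPerMinute());
    }
}
```

The Function overload is the more idiomatic choice in this client when the nested object is not reused elsewhere.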
- 
- self
  protected MistralServiceSettings.Builder self()
  Specified by:
  self in class WithJsonObjectBuilderBase<MistralServiceSettings.Builder>
 
- 
- build
  public MistralServiceSettings build()
  Builds a MistralServiceSettings.
  Specified by:
  build in interface ObjectBuilder<MistralServiceSettings>
  Throws:
  NullPointerException - if some of the required fields are null.
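A minimal construction sketch, assuming the elasticsearch-java client is on the classpath; the key and model name are placeholders. Both required fields are set here, since build() throws when a required field is null:

```java
import co.elastic.clients.elasticsearch.inference.MistralServiceSettings;

public class BuildSketch {
    static MistralServiceSettings minimalSettings() {
        return new MistralServiceSettings.Builder()
                .apiKey("placeholder-key")   // required
                .model("mistral-embed")      // required
                .maxInputTokens(512)         // optional
                .build();
    }

    public static void main(String[] args) {
        MistralServiceSettings settings = minimalSettings();
        System.out.println(settings.model());
    }
}
```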
 
 