Class OpenAIServiceSettings

java.lang.Object
  co.elastic.clients.elasticsearch.inference.OpenAIServiceSettings

All Implemented Interfaces:
  JsonpSerializable

Nested Class Summary

Nested Classes:
  static class OpenAIServiceSettings.Builder
    Builder for OpenAIServiceSettings.
Field Summary

Fields:
  static final JsonpDeserializer<OpenAIServiceSettings> _DESERIALIZER
    Json deserializer for OpenAIServiceSettings
Method Summary

Methods:
  final String apiKey()
    Required - A valid API key of your OpenAI account.
  final Integer dimensions()
    The number of dimensions the resulting output embeddings should have.
  final String modelId()
    Required - The name of the model to use for the inference task.
  static OpenAIServiceSettings of(Function<OpenAIServiceSettings.Builder, ObjectBuilder<OpenAIServiceSettings>> fn)
  final String organizationId()
    The unique identifier for your organization.
  final RateLimitSetting rateLimit()
    This setting helps to minimize the number of rate limit errors returned from OpenAI.
  void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
    Serialize this object to JSON.
  protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
  protected static void setupOpenAIServiceSettingsDeserializer(ObjectDeserializer<OpenAIServiceSettings.Builder> op)
  String toString()
  final String url()
    The URL endpoint to use for the requests.
Field Details

_DESERIALIZER
  static final JsonpDeserializer<OpenAIServiceSettings> _DESERIALIZER
  Json deserializer for OpenAIServiceSettings
Method Details

of
  public static OpenAIServiceSettings of(Function<OpenAIServiceSettings.Builder, ObjectBuilder<OpenAIServiceSettings>> fn)
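  As with other variant classes in the Elasticsearch Java client, an instance is typically constructed through this lambda-builder form. A minimal sketch (the API key, model name, and dimensions value are placeholder assumptions, not values from this page; compiling it requires the elasticsearch-java dependency):

  ```java
  import co.elastic.clients.elasticsearch.inference.OpenAIServiceSettings;

  public class OpenAISettingsExample {
      public static void main(String[] args) {
          // Sketch only: placeholder key and model name.
          OpenAIServiceSettings settings = OpenAIServiceSettings.of(b -> b
              .apiKey("sk-placeholder")            // api_key (required)
              .modelId("text-embedding-3-small")   // model_id (required)
              .dimensions(512)                     // optional; text-embedding-3 and later
          );
          System.out.println(settings);
      }
  }
  ```

  The builder setters mirror the getters listed below; only apiKey and modelId are required.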
apiKey
  public final String apiKey()
  Required - A valid API key of your OpenAI account. You can find your OpenAI API keys in your OpenAI account under the API keys section.
  IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.
  API name: api_key
dimensions
  public final Integer dimensions()
  The number of dimensions the resulting output embeddings should have. It is supported only in text-embedding-3 and later models. If it is not set, the OpenAI-defined default for the model is used.
  API name: dimensions
modelId
  public final String modelId()
  Required - The name of the model to use for the inference task. Refer to the OpenAI documentation for the list of available text embedding models.
  API name: model_id
organizationId
  public final String organizationId()
  The unique identifier for your organization. You can find the Organization ID in your OpenAI account under Settings > Organizations.
  API name: organization_id
rateLimit
  public final RateLimitSetting rateLimit()
  This setting helps to minimize the number of rate limit errors returned from OpenAI. The openai service sets a default number of requests allowed per minute depending on the task type. For text_embedding, it is set to 3000. For completion, it is set to 500.
  API name: rate_limit
url
  public final String url()
  The URL endpoint to use for the requests. It can be changed for testing purposes.
  API name: url
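  Taken together, the API names above describe the service_settings object that this class serializes to when an OpenAI inference endpoint is created. A hedged sketch of that JSON (every value here is an illustrative placeholder, not a value from this page):

  ```json
  {
    "service": "openai",
    "service_settings": {
      "api_key": "sk-placeholder",
      "model_id": "text-embedding-3-small",
      "dimensions": 512,
      "organization_id": "org-placeholder",
      "url": "https://api.openai.com/v1/embeddings",
      "rate_limit": {
        "requests_per_minute": 3000
      }
    }
  }
  ```

  Only api_key and model_id are required; the remaining fields fall back to the service defaults described above when omitted.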
serialize
  public void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
  Serialize this object to JSON.
  Specified by: serialize in interface JsonpSerializable
serializeInternal
  protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
toString
  public String toString()
setupOpenAIServiceSettingsDeserializer
  protected static void setupOpenAIServiceSettingsDeserializer(ObjectDeserializer<OpenAIServiceSettings.Builder> op)