Class AzureAiStudioServiceSettings
java.lang.Object
co.elastic.clients.elasticsearch.inference.AzureAiStudioServiceSettings
- All Implemented Interfaces:
- JsonpSerializable
@JsonpDeserializable
public class AzureAiStudioServiceSettings
extends Object
implements JsonpSerializable
Nested Class Summary

Nested Classes:
- static class AzureAiStudioServiceSettings.Builder
  Builder for AzureAiStudioServiceSettings.
Field Summary

Fields:
- static final JsonpDeserializer<AzureAiStudioServiceSettings> _DESERIALIZER
  Json deserializer for AzureAiStudioServiceSettings
Method Summary

Methods:
- final String apiKey()
  Required - A valid API key of your Azure AI Studio model deployment.
- final String endpointType()
  Required - The type of endpoint that is available for deployment through Azure AI Studio: token or realtime.
- static AzureAiStudioServiceSettings of(…)
- final String provider()
  Required - The model provider for your deployment.
- final RateLimitSetting rateLimit()
  This setting helps to minimize the number of rate limit errors returned from Azure AI Studio.
- void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
  Serialize this object to JSON.
- protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
- protected static void setupAzureAiStudioServiceSettingsDeserializer(ObjectDeserializer<AzureAiStudioServiceSettings.Builder> op)
- final String target()
  Required - The target URL of your Azure AI Studio model deployment.
- String toString()
Field Details

_DESERIALIZER
static final JsonpDeserializer<AzureAiStudioServiceSettings> _DESERIALIZER
Json deserializer for AzureAiStudioServiceSettings
Method Details

of
static AzureAiStudioServiceSettings of(…)
apiKey
Required - A valid API key of your Azure AI Studio model deployment. This key can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

IMPORTANT: You need to provide the API key only once, during inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

API name: api_key
- 
endpointType
Required - The type of endpoint that is available for deployment through Azure AI Studio: token or realtime. The token endpoint type is for "pay as you go" endpoints that are billed per token. The realtime endpoint type is for "real-time" endpoints that are billed per hour of usage.

API name: endpoint_type
- 
target
Required - The target URL of your Azure AI Studio model deployment. This can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

API name: target
- 
provider
Required - The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:
- cohere - available for text_embedding and completion task types
- databricks - available for completion task type only
- meta - available for completion task type only
- microsoft_phi - available for completion task type only
- mistral - available for completion task type only
- openai - available for text_embedding and completion task types

API name: provider
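A minimal construction sketch, assuming the client's usual fluent of(...) factory and builder setters whose names mirror the getters documented here; the key, target URL, and chosen provider are placeholders:

```java
import co.elastic.clients.elasticsearch.inference.AzureAiStudioServiceSettings;

// Sketch: building the required settings for an openai provider deployed
// behind a pay-as-you-go ("token") endpoint. All string values are placeholders.
AzureAiStudioServiceSettings settings = AzureAiStudioServiceSettings.of(b -> b
    .apiKey("<api-key>")                  // api_key
    .endpointType("token")                // endpoint_type: "token" or "realtime"
    .target("<deployment-target-url>")    // target
    .provider("openai")                   // provider
);
```

Each builder call corresponds to one of the `API name` keys in the JSON request body.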
- 
rateLimit
This setting helps to minimize the number of rate limit errors returned from Azure AI Studio. By default, the azureaistudio service sets the number of requests allowed per minute to 240.

API name: rate_limit
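To illustrate what the default means in practice, the helper below (its name is purely illustrative, not part of the client) converts a requests-per-minute limit into the minimum spacing between requests; the default of 240 per minute works out to one request every 250 ms:

```java
// Illustration only: converting a requests-per-minute rate limit into the
// minimum interval between consecutive requests.
public class RateLimitSpacing {
    static long minIntervalMillis(long requestsPerMinute) {
        // 60,000 ms in a minute, divided evenly across the allowed requests.
        return 60_000L / requestsPerMinute;
    }

    public static void main(String[] args) {
        // The azureaistudio default of 240 requests/minute -> 250 ms apart.
        System.out.println(minIntervalMillis(240));
    }
}
```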
- 
serialize
Serialize this object to JSON.
Specified by:
serialize in interface JsonpSerializable
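A sketch of writing a previously built settings object out through serialize(), assuming a JsonpMapper implementation is on the classpath (the Jackson-based mapper is used here as an example):

```java
import co.elastic.clients.json.JsonpMapper;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import jakarta.json.stream.JsonGenerator;
import java.io.StringWriter;

// Sketch: serializing an AzureAiStudioServiceSettings instance to a JSON string.
// `settings` is assumed to have been built beforehand via the of(...) factory.
StringWriter out = new StringWriter();
JsonpMapper mapper = new JacksonJsonpMapper();
try (JsonGenerator generator = mapper.jsonProvider().createGenerator(out)) {
    settings.serialize(generator, mapper);
}
String json = out.toString();
```

Note that toString() on this class produces a JSON rendering as well, so explicit serialization like this is mainly useful when you control the generator or mapper.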
 
- 
serializeInternal
- 
toString
- 
setupAzureAiStudioServiceSettingsDeserializer
protected static void setupAzureAiStudioServiceSettingsDeserializer(ObjectDeserializer<AzureAiStudioServiceSettings.Builder> op)
 