Class ElasticsearchInferenceClient

java.lang.Object
co.elastic.clients.ApiClient<ElasticsearchTransport,ElasticsearchInferenceClient>
co.elastic.clients.elasticsearch.inference.ElasticsearchInferenceClient
All Implemented Interfaces:
Closeable, AutoCloseable

public class ElasticsearchInferenceClient extends ApiClient<ElasticsearchTransport,ElasticsearchInferenceClient>
Client for the inference namespace.
  • Constructor Details

  • Method Details

    • withTransportOptions

      public ElasticsearchInferenceClient withTransportOptions(@Nullable TransportOptions transportOptions)
      Description copied from class: ApiClient
      Creates a new client with some request options
      Specified by:
      withTransportOptions in class ApiClient<ElasticsearchTransport,ElasticsearchInferenceClient>
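      For example, a derived inference client that adds a header to every request can be built from the transport's default options (a minimal sketch; esClient is an assumed, already-constructed ElasticsearchClient):

      // Derive request options from the transport defaults and attach them to a
      // new inference client; the original client instance is left unchanged.
      TransportOptions options = esClient._transport().options().toBuilder()
          .addHeader("X-Opaque-Id", "inference-demo")
          .build();
      ElasticsearchInferenceClient inferenceClient =
          esClient.inference().withTransportOptions(options);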
    • chatCompletionUnified

      public BinaryResponse chatCompletionUnified(ChatCompletionUnifiedRequest request) throws IOException, ElasticsearchException
      Perform chat completion inference
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • chatCompletionUnified

      Perform chat completion inference
      Parameters:
      fn - a function that initializes a builder to create the ChatCompletionUnifiedRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
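      A speculative sketch of the builder-lambda overload; the message setters (role, content, and the string content variant) are assumptions inferred from the ChatCompletionUnifiedRequest class and the unified chat API shape, so verify them against the generated builders in your client version:

      // The response body is a stream of server-sent events; print the raw lines.
      BinaryResponse response = esClient.inference().chatCompletionUnified(r -> r
          .inferenceId("openai-chat")                  // hypothetical endpoint id
          .messages(m -> m
              .role("user")
              .content(c -> c.string("Say hello"))     // assumed union variant
          )
      );
      try (BufferedReader reader = new BufferedReader(
              new InputStreamReader(response.content(), StandardCharsets.UTF_8))) {
          reader.lines().forEach(System.out::println);
      }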
    • completion

      Perform completion inference on the service
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • completion

      Perform completion inference on the service
      Parameters:
      fn - a function that initializes a builder to create the CompletionRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
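      A minimal sketch of the builder-lambda overload, assuming an existing completion endpoint and that the CompletionRequest builder exposes inferenceId and input setters matching the API's parameters:

      CompletionResponse completion = esClient.inference().completion(r -> r
          .inferenceId("openai-completion")   // hypothetical endpoint id
          .input("In one sentence, what is Elasticsearch?")
      );
      System.out.println(completion);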
    • delete

      Delete an inference endpoint
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • delete

      Delete an inference endpoint
      Parameters:
      fn - a function that initializes a builder to create the DeleteInferenceRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
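      A minimal sketch, assuming the DeleteInferenceRequest builder exposes an inferenceId setter for the endpoint's path parameter:

      DeleteInferenceResponse deleted = esClient.inference().delete(d -> d
          .inferenceId("openai-completion")   // hypothetical endpoint id
      );
      System.out.println("acknowledged: " + deleted.acknowledged());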
    • get

      Get an inference endpoint
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • get

      Get an inference endpoint
      Parameters:
      fn - a function that initializes a builder to create the GetInferenceRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • get

      Get an inference endpoint
      Throws:
      IOException
      ElasticsearchException
      See Also:
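      A minimal sketch: fetch one endpoint by id, or call get() with no arguments (the overload above) to list all endpoints. The inferenceId setter name is an assumption inferred from the GetInferenceRequest class:

      GetInferenceResponse endpoint = esClient.inference().get(g -> g
          .inferenceId("my-elser-endpoint")   // hypothetical endpoint id
      );
      System.out.println(endpoint);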
    • put

      Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • put

      Create an inference endpoint. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      Parameters:
      fn - a function that initializes a builder to create the PutRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
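      A hedged sketch of creating an ELSER endpoint through the generic put API; the inferenceConfig, service, and serviceSettings names follow the InferenceEndpoint body of the put API but are unverified assumptions here. Service settings are passed as raw JSON via JsonData:

      PutResponse created = esClient.inference().put(p -> p
          .taskType(TaskType.SparseEmbedding)          // assumed enum constant
          .inferenceId("my-elser-endpoint")
          .inferenceConfig(c -> c
              .service("elser")
              .serviceSettings(JsonData.of(Map.of(
                  "num_allocations", 1,
                  "num_threads", 1
              )))
          )
      );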
    • putAlibabacloud

      Create an AlibabaCloud AI Search inference endpoint.

      Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAlibabacloud

      Create an AlibabaCloud AI Search inference endpoint.

      Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutAlibabacloudRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAmazonbedrock

      Create an Amazon Bedrock inference endpoint.

      Create an inference endpoint to perform an inference task with the amazonbedrock service.

      INFO: You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAmazonbedrock

      Create an Amazon Bedrock inference endpoint.

      Create an inference endpoint to perform an inference task with the amazonbedrock service.

      INFO: You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutAmazonbedrockRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAnthropic

      Create an Anthropic inference endpoint.

      Create an inference endpoint to perform an inference task with the anthropic service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAnthropic

      Create an Anthropic inference endpoint.

      Create an inference endpoint to perform an inference task with the anthropic service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutAnthropicRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAzureaistudio

      Create an Azure AI Studio inference endpoint.

      Create an inference endpoint to perform an inference task with the azureaistudio service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAzureaistudio

      Create an Azure AI Studio inference endpoint.

      Create an inference endpoint to perform an inference task with the azureaistudio service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutAzureaistudioRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAzureopenai

      Create an Azure OpenAI inference endpoint.

      Create an inference endpoint to perform an inference task with the azureopenai service.

      The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putAzureopenai

      Create an Azure OpenAI inference endpoint.

      Create an inference endpoint to perform an inference task with the azureopenai service.

      The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutAzureopenaiRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putCohere

      Create a Cohere inference endpoint.

      Create an inference endpoint to perform an inference task with the cohere service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putCohere

      Create a Cohere inference endpoint.

      Create an inference endpoint to perform an inference task with the cohere service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutCohereRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putElasticsearch

      Create an Elasticsearch inference endpoint.

      Create an inference endpoint to perform an inference task with the elasticsearch service.

      INFO: Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints through the API if you want to customize the settings.

      If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

      INFO: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

      After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putElasticsearch

      Create an Elasticsearch inference endpoint.

      Create an inference endpoint to perform an inference task with the elasticsearch service.

      INFO: Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints through the API if you want to customize the settings.

      If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

      INFO: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

      After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutElasticsearchRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putElser

      Create an ELSER inference endpoint.

      Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

      INFO: Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint through the API if you want to customize the settings.

      The API request will automatically download and deploy the ELSER model if it isn't already downloaded.

      INFO: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

      After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putElser

      Create an ELSER inference endpoint.

      Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

      INFO: Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint through the API if you want to customize the settings.

      The API request will automatically download and deploy the ELSER model if it isn't already downloaded.

      INFO: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

      After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutElserRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
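      A speculative sketch of the builder-lambda overload; every setter name here (taskType, elserInferenceId, service, serviceSettings and its fields) is an assumption inferred from the PutElserRequest class and the elser service settings, so check them against the generated builders:

      PutElserResponse elser = esClient.inference().putElser(r -> r
          .taskType(ElserTaskType.SparseEmbedding)   // assumed enum
          .elserInferenceId("my-elser")              // assumed setter
          .service("elser")
          .serviceSettings(s -> s
              .numAllocations(1)                     // assumed setters mirroring
              .numThreads(1)                         // the service settings JSON
          )
      );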
    • putGoogleaistudio

      Create a Google AI Studio inference endpoint.

      Create an inference endpoint to perform an inference task with the googleaistudio service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putGoogleaistudio

      Create a Google AI Studio inference endpoint.

      Create an inference endpoint to perform an inference task with the googleaistudio service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutGoogleaistudioRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putGooglevertexai

      Create a Google Vertex AI inference endpoint.

      Create an inference endpoint to perform an inference task with the googlevertexai service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putGooglevertexai

      Create a Google Vertex AI inference endpoint.

      Create an inference endpoint to perform an inference task with the googlevertexai service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutGooglevertexaiRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putHuggingFace

      Create a Hugging Face inference endpoint.

      Create an inference endpoint to perform an inference task with the hugging_face service.

      You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL. Select the model you want to use on the new endpoint creation page (for example intfloat/e5-small-v2), then select the sentence embeddings task under the advanced configuration section. Create the endpoint and copy the URL after the endpoint initialization has finished.

      The following models are recommended for the Hugging Face service:

      • all-MiniLM-L6-v2
      • all-MiniLM-L12-v2
      • all-mpnet-base-v2
      • e5-base-v2
      • e5-small-v2
      • multilingual-e5-base
      • multilingual-e5-small

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putHuggingFace

      Create a Hugging Face inference endpoint.

      Create an inference endpoint to perform an inference task with the hugging_face service.

      You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL. Select the model you want to use on the new endpoint creation page (for example intfloat/e5-small-v2), then select the sentence embeddings task under the advanced configuration section. Create the endpoint and copy the URL after the endpoint initialization has finished.

      The following models are recommended for the Hugging Face service:

      • all-MiniLM-L6-v2
      • all-MiniLM-L12-v2
      • all-mpnet-base-v2
      • e5-base-v2
      • e5-small-v2
      • multilingual-e5-base
      • multilingual-e5-small

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutHuggingFaceRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
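      A speculative sketch; the setter names (taskType, huggingfaceInferenceId, service, serviceSettings, url, apiKey) are assumptions inferred from the PutHuggingFaceRequest class and the hugging_face service settings:

      PutHuggingFaceResponse hf = esClient.inference().putHuggingFace(r -> r
          .taskType(HuggingFaceTaskType.TextEmbedding)   // assumed enum
          .huggingfaceInferenceId("hf-e5-small")         // assumed setter
          .service("hugging_face")
          .serviceSettings(s -> s
              .url("https://<your-endpoint>.endpoints.huggingface.cloud")   // from your HF endpoint page
              .apiKey("<hf-token>")
          )
      );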
    • putJinaai

      Create a JinaAI inference endpoint.

      Create an inference endpoint to perform an inference task with the jinaai service.

      To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putJinaai

      Create a JinaAI inference endpoint.

      Create an inference endpoint to perform an inference task with the jinaai service.

      To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutJinaaiRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putMistral

      Create a Mistral inference endpoint.

      Create an inference endpoint to perform an inference task with the mistral service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putMistral

      Create a Mistral inference endpoint.

      Create an inference endpoint to perform an inference task with the mistral service.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutMistralRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putOpenai

      Create an OpenAI inference endpoint.

      Create an inference endpoint to perform an inference task with the openai service or with OpenAI-compatible APIs.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putOpenai

      Create an OpenAI inference endpoint.

      Create an inference endpoint to perform an inference task with the openai service or with OpenAI-compatible APIs.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutOpenaiRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
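      A speculative sketch; the setter and enum names (taskType, openaiInferenceId, service, serviceSettings, apiKey, modelId) are assumptions inferred from the PutOpenaiRequest class and the openai service settings:

      PutOpenaiResponse openai = esClient.inference().putOpenai(r -> r
          .taskType(OpenaiTaskType.TextEmbedding)   // assumed enum
          .openaiInferenceId("openai-embeddings")   // assumed setter
          .service("openai")
          .serviceSettings(s -> s
              .apiKey("<api-key>")
              .modelId("text-embedding-3-small")
          )
      );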
    • putVoyageai

      Create a VoyageAI inference endpoint.

      Create an inference endpoint to perform an inference task with the voyageai service.

      Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putVoyageai

      Create a VoyageAI inference endpoint.

      Create an inference endpoint to perform an inference task with the voyageai service.

      Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutVoyageaiRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putWatsonx

      Create a Watsonx inference endpoint.

      Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • putWatsonx

      Create a Watsonx inference endpoint.

      Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

      When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

      Parameters:
      fn - a function that initializes a builder to create the PutWatsonxRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • rerank

      Perform reranking inference on the service
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • rerank

      Perform reranking inference on the service
      Parameters:
      fn - a function that initializes a builder to create the RerankRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
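      A minimal sketch, assuming the RerankRequest builder exposes inferenceId, query, and input setters matching the rerank task's parameters:

      RerankResponse ranked = esClient.inference().rerank(r -> r
          .inferenceId("cohere-rerank")             // hypothetical endpoint id
          .query("What is Elasticsearch?")
          .input(List.of(
              "Elasticsearch is a distributed search and analytics engine.",
              "Cats sleep for most of the day."
          ))
      );
      System.out.println(ranked);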
    • sparseEmbedding

      Perform sparse embedding inference on the service
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • sparseEmbedding

      Perform sparse embedding inference on the service
      Parameters:
      fn - a function that initializes a builder to create the SparseEmbeddingRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
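      A minimal sketch, assuming inferenceId and input setters on the SparseEmbeddingRequest builder:

      SparseEmbeddingResponse sparse = esClient.inference().sparseEmbedding(r -> r
          .inferenceId("my-elser-endpoint")   // hypothetical endpoint id
          .input("The quick brown fox")
      );
      System.out.println(sparse);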
    • streamCompletion

      public BinaryResponse streamCompletion(StreamCompletionRequest request) throws IOException, ElasticsearchException
      Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • streamCompletion

      Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.

      Parameters:
      fn - a function that initializes a builder to create the StreamCompletionRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
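      A minimal sketch, assuming inferenceId and input setters on the StreamCompletionRequest builder; the BinaryResponse body is a stream of server-sent events, consumed line by line here:

      BinaryResponse stream = esClient.inference().streamCompletion(r -> r
          .inferenceId("openai-completion")   // hypothetical endpoint id
          .input("Write a haiku about search")
      );
      try (BufferedReader reader = new BufferedReader(
              new InputStreamReader(stream.content(), StandardCharsets.UTF_8))) {
          reader.lines().forEach(System.out::println);   // raw SSE lines
      }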
    • textEmbedding

      Perform text embedding inference on the service
      Throws:
      IOException
      ElasticsearchException
      See Also:
    • textEmbedding

      Perform text embedding inference on the service
      Parameters:
      fn - a function that initializes a builder to create the TextEmbeddingRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
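      A minimal sketch, assuming inferenceId and input setters on the TextEmbeddingRequest builder:

      TextEmbeddingResponse embedding = esClient.inference().textEmbedding(r -> r
          .inferenceId("openai-embeddings")   // hypothetical endpoint id
          .input("Elasticsearch inference API")
      );
      System.out.println(embedding);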
    • update

      Update an inference endpoint.

      Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      Throws:
      IOException
      ElasticsearchException
      See Also:
    • update

      Update an inference endpoint.

      Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

      IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

      Parameters:
      fn - a function that initializes a builder to create the UpdateInferenceRequest
      Throws:
      IOException
      ElasticsearchException
      See Also:
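      A hedged sketch, assuming the UpdateInferenceRequest builder mirrors put with an inferenceId setter and an inferenceConfig body (names unverified):

      UpdateInferenceResponse updated = esClient.inference().update(u -> u
          .inferenceId("my-elser-endpoint")   // hypothetical endpoint id
          .inferenceConfig(c -> c
              .service("elser")
              .serviceSettings(JsonData.of(Map.of("num_allocations", 2)))
          )
      );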