Class ElasticsearchInferenceClient
- All Implemented Interfaces:
- Closeable, AutoCloseable
- 
Field Summary

Fields inherited from class co.elastic.clients.ApiClient: transport, transportOptions
- 
Constructor Summary

ElasticsearchInferenceClient(ElasticsearchTransport transport, TransportOptions transportOptions)
- 
Method Summary

Each operation is exposed twice: once taking a request object and once (final) taking a builder lambda, Function<Request.Builder, ObjectBuilder<Request>>. Return types are listed after the arrow.

- chatCompletionUnified(ChatCompletionUnifiedRequest) → BinaryResponse: Perform chat completion inference
- completion(CompletionRequest) → CompletionResponse: Perform completion inference on the service
- delete(DeleteInferenceRequest) → DeleteInferenceResponse: Delete an inference endpoint
- get(), get(GetInferenceRequest) → GetInferenceResponse: Get an inference endpoint
- inference(InferenceRequest) → InferenceResponse: Perform inference on the service
- put(PutRequest) → PutResponse: Create an inference endpoint
- putAi21(PutAi21Request) → PutAi21Response: Create an AI21 inference endpoint
- putAlibabacloud(PutAlibabacloudRequest) → PutAlibabacloudResponse: Create an AlibabaCloud AI Search inference endpoint
- putAmazonbedrock(PutAmazonbedrockRequest) → PutAmazonbedrockResponse: Create an Amazon Bedrock inference endpoint
- putAmazonsagemaker(PutAmazonsagemakerRequest) → PutAmazonsagemakerResponse: Create an Amazon SageMaker inference endpoint
- putAnthropic(PutAnthropicRequest) → PutAnthropicResponse: Create an Anthropic inference endpoint
- putAzureaistudio(PutAzureaistudioRequest) → PutAzureaistudioResponse: Create an Azure AI Studio inference endpoint
- putAzureopenai(PutAzureopenaiRequest) → PutAzureopenaiResponse: Create an Azure OpenAI inference endpoint
- putCohere(PutCohereRequest) → PutCohereResponse: Create a Cohere inference endpoint
- putContextualai(PutContextualaiRequest) → PutContextualaiResponse: Create a Contextual AI inference endpoint
- putCustom(PutCustomRequest) → PutCustomResponse: Create a custom inference endpoint
- putDeepseek(PutDeepseekRequest) → PutDeepseekResponse: Create a DeepSeek inference endpoint
- putElasticsearch(PutElasticsearchRequest) → PutElasticsearchResponse: Create an Elasticsearch inference endpoint
- putElser(PutElserRequest) → PutElserResponse: Create an ELSER inference endpoint
- putGoogleaistudio(PutGoogleaistudioRequest) → PutGoogleaistudioResponse: Create a Google AI Studio inference endpoint
- putGooglevertexai(PutGooglevertexaiRequest) → PutGooglevertexaiResponse: Create a Google Vertex AI inference endpoint
- putHuggingFace(PutHuggingFaceRequest) → PutHuggingFaceResponse: Create a Hugging Face inference endpoint
- putJinaai(PutJinaaiRequest) → PutJinaaiResponse: Create a JinaAI inference endpoint
- putLlama(PutLlamaRequest) → PutLlamaResponse: Create a Llama inference endpoint
- putMistral(PutMistralRequest) → PutMistralResponse: Create a Mistral inference endpoint
- putOpenai(PutOpenaiRequest) → PutOpenaiResponse: Create an OpenAI inference endpoint
- putVoyageai(PutVoyageaiRequest) → PutVoyageaiResponse: Create a VoyageAI inference endpoint
- putWatsonx(PutWatsonxRequest) → PutWatsonxResponse: Create a Watsonx inference endpoint
- rerank(RerankRequest) → RerankResponse: Perform reranking inference on the service
- sparseEmbedding(SparseEmbeddingRequest) → SparseEmbeddingResponse: Perform sparse embedding inference on the service
- streamCompletion(StreamCompletionRequest) → BinaryResponse: Perform streaming inference
- textEmbedding(TextEmbeddingRequest) → TextEmbeddingResponse: Perform text embedding inference on the service
- update(UpdateInferenceRequest) → UpdateInferenceResponse: Update an inference endpoint
- withTransportOptions(TransportOptions) → ElasticsearchInferenceClient: Creates a new client with some request options

Methods inherited from class co.elastic.clients.ApiClient: _jsonpMapper, _transport, _transportOptions, close, getDeserializer, withTransportOptions
- 
Constructor Details
ElasticsearchInferenceClient
- 
public ElasticsearchInferenceClient(ElasticsearchTransport transport, @Nullable TransportOptions transportOptions)
 
- 
- 
Method Details
withTransportOptions

public ElasticsearchInferenceClient withTransportOptions(@Nullable TransportOptions transportOptions)

Description copied from class: ApiClient
Creates a new client with some request options.
- Specified by: withTransportOptions in class ApiClient<ElasticsearchTransport, ElasticsearchInferenceClient>
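For example, a minimal sketch of deriving a client whose requests carry an extra header. It assumes an existing ElasticsearchClient named esClient; the header value is hypothetical, and TransportOptions.Builder#addHeader should be verified against your client version:

ElasticsearchInferenceClient inferenceClient = esClient.inference();
TransportOptions options = inferenceClient._transportOptions()
    .toBuilder()
    .addHeader("X-Opaque-Id", "inference-batch-job") // hypothetical header value
    .build();
// The original client is unchanged; the new one sends the extra header.
ElasticsearchInferenceClient clientWithHeader = inferenceClient.withTransportOptions(options);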
 
- 
chatCompletionUnified

public BinaryResponse chatCompletionUnified(ChatCompletionUnifiedRequest request) throws IOException, ElasticsearchException

Perform chat completion inference.

The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type for the openai and elastic inference services.

NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the openai, hugging_face, or elastic service, use the Chat completion inference API.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
chatCompletionUnified

public final BinaryResponse chatCompletionUnified(Function<ChatCompletionUnifiedRequest.Builder, ObjectBuilder<ChatCompletionUnifiedRequest>> fn) throws IOException, ElasticsearchException

Perform chat completion inference.

The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type for the openai and elastic inference services.

NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the openai, hugging_face, or elastic service, use the Chat completion inference API.
- Parameters:
- fn: a function that initializes a builder to create the ChatCompletionUnifiedRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
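A minimal usage sketch of the builder overload. The inference ID is hypothetical, "client" is an existing ElasticsearchClient, and the message-builder method names follow the ChatCompletionUnifiedRequest model; verify them against your client version:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

BinaryResponse response = client.inference().chatCompletionUnified(r -> r
    .inferenceId("my-chat-endpoint") // hypothetical endpoint ID
    .messages(m -> m
        .role("user")
        .content(c -> c.string("Say hello in one sentence."))
    )
);
// The body is a stream of server-sent events; read it incrementally.
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(response.content(), StandardCharsets.UTF_8))) {
    reader.lines().forEach(System.out::println);
}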
- 
completion

public CompletionResponse completion(CompletionRequest request) throws IOException, ElasticsearchException

Perform completion inference on the service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
completion

public final CompletionResponse completion(Function<CompletionRequest.Builder, ObjectBuilder<CompletionRequest>> fn) throws IOException, ElasticsearchException

Perform completion inference on the service.
- Parameters:
- fn: a function that initializes a builder to create the CompletionRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
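A minimal usage sketch (the endpoint ID is hypothetical and "client" is an existing ElasticsearchClient):

CompletionResponse response = client.inference().completion(c -> c
    .inferenceId("my-completion-endpoint") // hypothetical endpoint ID
    .input("What is the capital of France?")
);
System.out.println(response);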
- 
delete

public DeleteInferenceResponse delete(DeleteInferenceRequest request) throws IOException, ElasticsearchException

Delete an inference endpoint.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
delete

public final DeleteInferenceResponse delete(Function<DeleteInferenceRequest.Builder, ObjectBuilder<DeleteInferenceRequest>> fn) throws IOException, ElasticsearchException

Delete an inference endpoint.
- Parameters:
- fn: a function that initializes a builder to create the DeleteInferenceRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
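A minimal usage sketch (the endpoint ID is hypothetical; the force flag is an optional request field):

DeleteInferenceResponse response = client.inference().delete(d -> d
    .inferenceId("my-completion-endpoint") // hypothetical endpoint ID
    .force(false)                          // set true to delete even when in use
);
System.out.println(response.acknowledged());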
- 
get

public GetInferenceResponse get(GetInferenceRequest request) throws IOException, ElasticsearchException

Get an inference endpoint.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
get

public final GetInferenceResponse get(Function<GetInferenceRequest.Builder, ObjectBuilder<GetInferenceRequest>> fn) throws IOException, ElasticsearchException

Get an inference endpoint.
- Parameters:
- fn: a function that initializes a builder to create the GetInferenceRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
get

public GetInferenceResponse get() throws IOException, ElasticsearchException

Get an inference endpoint. This no-argument overload returns all inference endpoints.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
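A minimal usage sketch of both overloads (the endpoint ID is hypothetical; the endpoints()/inferenceId() accessors follow the GET _inference response shape and should be verified against your client version):

GetInferenceResponse all = client.inference().get(); // every endpoint
GetInferenceResponse one = client.inference().get(g -> g
    .inferenceId("my-completion-endpoint") // hypothetical endpoint ID
);
one.endpoints().forEach(e -> System.out.println(e.inferenceId()));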
- 
inference

public InferenceResponse inference(InferenceRequest request) throws IOException, ElasticsearchException

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that was defined when the endpoint was created with the create inference API. For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.

NOTE: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
inference

public final InferenceResponse inference(Function<InferenceRequest.Builder, ObjectBuilder<InferenceRequest>> fn) throws IOException, ElasticsearchException

Perform inference on the service.

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that was defined when the endpoint was created with the create inference API. For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.

NOTE: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
- Parameters:
- fn: a function that initializes a builder to create the InferenceRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
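A minimal usage sketch of the generic call; which task-specific result fields are populated depends on the endpoint's task type (the endpoint ID is hypothetical):

InferenceResponse response = client.inference().inference(i -> i
    .inferenceId("my-sparse-endpoint") // hypothetical sparse_embedding endpoint
    .input("The quick brown fox")
);
System.out.println(response);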
- 
put

public PutResponse put(PutRequest request) throws IOException, ElasticsearchException

Create an inference endpoint.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:
- AI21 (chat_completion, completion)
- AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
- Amazon Bedrock (completion, text_embedding)
- Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
- Anthropic (completion)
- Azure AI Studio (completion, rerank, text_embedding)
- Azure OpenAI (completion, text_embedding)
- Cohere (completion, rerank, text_embedding)
- DeepSeek (chat_completion, completion)
- Elasticsearch (rerank, sparse_embedding, text_embedding; this service is for built-in models and models uploaded through Eland)
- ELSER (sparse_embedding)
- Google AI Studio (completion, text_embedding)
- Google Vertex AI (chat_completion, completion, rerank, text_embedding)
- Hugging Face (chat_completion, completion, rerank, text_embedding)
- JinaAI (rerank, text_embedding)
- Llama (chat_completion, completion, text_embedding)
- Mistral (chat_completion, completion, text_embedding)
- OpenAI (chat_completion, completion, text_embedding)
- VoyageAI (rerank, text_embedding)
- Watsonx inference integration (text_embedding)
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
put

public final PutResponse put(Function<PutRequest.Builder, ObjectBuilder<PutRequest>> fn) throws IOException, ElasticsearchException

Create an inference endpoint.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:
- AI21 (chat_completion, completion)
- AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
- Amazon Bedrock (completion, text_embedding)
- Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
- Anthropic (completion)
- Azure AI Studio (completion, rerank, text_embedding)
- Azure OpenAI (completion, text_embedding)
- Cohere (completion, rerank, text_embedding)
- DeepSeek (chat_completion, completion)
- Elasticsearch (rerank, sparse_embedding, text_embedding; this service is for built-in models and models uploaded through Eland)
- ELSER (sparse_embedding)
- Google AI Studio (completion, text_embedding)
- Google Vertex AI (chat_completion, completion, rerank, text_embedding)
- Hugging Face (chat_completion, completion, rerank, text_embedding)
- JinaAI (rerank, text_embedding)
- Llama (chat_completion, completion, text_embedding)
- Mistral (chat_completion, completion, text_embedding)
- OpenAI (chat_completion, completion, text_embedding)
- VoyageAI (rerank, text_embedding)
- Watsonx inference integration (text_embedding)
- Parameters:
- fn: a function that initializes a builder to create the PutRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
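A minimal sketch of creating an OpenAI text_embedding endpoint through the generic put API. The config fields (service, serviceSettings) mirror the REST request body, but the exact builder property for the body (inferenceConfig here) and the JsonData-based settings are assumptions to verify against your client version; the key and model values are placeholders:

PutResponse created = client.inference().put(p -> p
    .taskType(TaskType.TextEmbedding)
    .inferenceId("my-openai-embeddings") // hypothetical endpoint ID
    .inferenceConfig(c -> c
        .service("openai")
        .serviceSettings(JsonData.fromJson(
            "{\"api_key\": \"<api-key>\", \"model_id\": \"text-embedding-3-small\"}"))
    )
);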
- 
putAi21

public PutAi21Response putAi21(PutAi21Request request) throws IOException, ElasticsearchException

Create an AI21 inference endpoint.

Create an inference endpoint to perform an inference task with the ai21 service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAi21

public final PutAi21Response putAi21(Function<PutAi21Request.Builder, ObjectBuilder<PutAi21Request>> fn) throws IOException, ElasticsearchException

Create an AI21 inference endpoint.

Create an inference endpoint to perform an inference task with the ai21 service.
- Parameters:
- fn: a function that initializes a builder to create the PutAi21Request
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAlibabacloud

public PutAlibabacloudResponse putAlibabacloud(PutAlibabacloudRequest request) throws IOException, ElasticsearchException

Create an AlibabaCloud AI Search inference endpoint.

Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAlibabacloud

public final PutAlibabacloudResponse putAlibabacloud(Function<PutAlibabacloudRequest.Builder, ObjectBuilder<PutAlibabacloudRequest>> fn) throws IOException, ElasticsearchException

Create an AlibabaCloud AI Search inference endpoint.

Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.
- Parameters:
- fn: a function that initializes a builder to create the PutAlibabacloudRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAmazonbedrock

public PutAmazonbedrockResponse putAmazonbedrock(PutAmazonbedrockRequest request) throws IOException, ElasticsearchException

Create an Amazon Bedrock inference endpoint.

Create an inference endpoint to perform an inference task with the amazonbedrock service.

NOTE: You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAmazonbedrock

public final PutAmazonbedrockResponse putAmazonbedrock(Function<PutAmazonbedrockRequest.Builder, ObjectBuilder<PutAmazonbedrockRequest>> fn) throws IOException, ElasticsearchException

Create an Amazon Bedrock inference endpoint.

Create an inference endpoint to perform an inference task with the amazonbedrock service.

NOTE: You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
- Parameters:
- fn: a function that initializes a builder to create the PutAmazonbedrockRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAmazonsagemaker

public PutAmazonsagemakerResponse putAmazonsagemaker(PutAmazonsagemakerRequest request) throws IOException, ElasticsearchException

Create an Amazon SageMaker inference endpoint.

Create an inference endpoint to perform an inference task with the amazon_sagemaker service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAmazonsagemaker

public final PutAmazonsagemakerResponse putAmazonsagemaker(Function<PutAmazonsagemakerRequest.Builder, ObjectBuilder<PutAmazonsagemakerRequest>> fn) throws IOException, ElasticsearchException

Create an Amazon SageMaker inference endpoint.

Create an inference endpoint to perform an inference task with the amazon_sagemaker service.
- Parameters:
- fn: a function that initializes a builder to create the PutAmazonsagemakerRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAnthropic

public PutAnthropicResponse putAnthropic(PutAnthropicRequest request) throws IOException, ElasticsearchException

Create an Anthropic inference endpoint.

Create an inference endpoint to perform an inference task with the anthropic service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAnthropic

public final PutAnthropicResponse putAnthropic(Function<PutAnthropicRequest.Builder, ObjectBuilder<PutAnthropicRequest>> fn) throws IOException, ElasticsearchException

Create an Anthropic inference endpoint.

Create an inference endpoint to perform an inference task with the anthropic service.
- Parameters:
- fn: a function that initializes a builder to create the PutAnthropicRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAzureaistudio

public PutAzureaistudioResponse putAzureaistudio(PutAzureaistudioRequest request) throws IOException, ElasticsearchException

Create an Azure AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the azureaistudio service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAzureaistudio

public final PutAzureaistudioResponse putAzureaistudio(Function<PutAzureaistudioRequest.Builder, ObjectBuilder<PutAzureaistudioRequest>> fn) throws IOException, ElasticsearchException

Create an Azure AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the azureaistudio service.
- Parameters:
- fn: a function that initializes a builder to create the PutAzureaistudioRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAzureopenai

public PutAzureopenaiResponse putAzureopenai(PutAzureopenaiRequest request) throws IOException, ElasticsearchException

Create an Azure OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putAzureopenai

public final PutAzureopenaiResponse putAzureopenai(Function<PutAzureopenaiRequest.Builder, ObjectBuilder<PutAzureopenaiRequest>> fn) throws IOException, ElasticsearchException

Create an Azure OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.
- Parameters:
- fn: a function that initializes a builder to create the PutAzureopenaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putCohere

public PutCohereResponse putCohere(PutCohereRequest request) throws IOException, ElasticsearchException

Create a Cohere inference endpoint.

Create an inference endpoint to perform an inference task with the cohere service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putCohere

public final PutCohereResponse putCohere(Function<PutCohereRequest.Builder, ObjectBuilder<PutCohereRequest>> fn) throws IOException, ElasticsearchException

Create a Cohere inference endpoint.

Create an inference endpoint to perform an inference task with the cohere service.
- Parameters:
- fn: a function that initializes a builder to create the PutCohereRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putContextualai

public PutContextualaiResponse putContextualai(PutContextualaiRequest request) throws IOException, ElasticsearchException

Create a Contextual AI inference endpoint.

Create an inference endpoint to perform an inference task with the contexualai service.

To review the available rerank models, refer to https://docs.contextual.ai/api-reference/rerank/rerank#body-model.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putContextualai

public final PutContextualaiResponse putContextualai(Function<PutContextualaiRequest.Builder, ObjectBuilder<PutContextualaiRequest>> fn) throws IOException, ElasticsearchException

Create a Contextual AI inference endpoint.

Create an inference endpoint to perform an inference task with the contexualai service.

To review the available rerank models, refer to https://docs.contextual.ai/api-reference/rerank/rerank#body-model.
- Parameters:
- fn: a function that initializes a builder to create the PutContextualaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putCustom

public PutCustomResponse putCustom(PutCustomRequest request) throws IOException, ElasticsearchException

Create a custom inference endpoint.

The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. It gives you the ability to define the headers, URL, query parameters, request body, and secrets. The custom service supports template replacement functionality, which enables you to define a template that can be replaced with the value associated with that key. Templates are portions of a string that start with ${ and end with }. The parameters secret_parameters and task_settings are checked for keys for template replacement. Template replacement is supported in the request, headers, url, and query_parameters. If the definition (key) is not found for a template, an error message is returned. Given an endpoint definition like the following:

PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<some api key>"
    },
    "url": "...endpoints.huggingface.cloud/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": "{\"input\": ${input}}",
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      }
    }
  }
}

to replace ${api_key}, the secret_parameters and task_settings are checked for a key named api_key.

NOTE: Templates should not be surrounded by quotes.

Pre-defined templates:
- ${input} refers to the array of input strings that comes from the input field of the subsequent inference requests.
- ${input_type} refers to the input type translation values.
- ${query} refers to the query field used specifically for reranking tasks.
- ${top_n} refers to the top_n field available when performing rerank requests.
- ${return_documents} refers to the return_documents field available when performing rerank requests.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putCustom

public final PutCustomResponse putCustom(Function<PutCustomRequest.Builder, ObjectBuilder<PutCustomRequest>> fn) throws IOException, ElasticsearchException

Create a custom inference endpoint.

The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. It gives you the ability to define the headers, URL, query parameters, request body, and secrets. The custom service supports template replacement functionality, which enables you to define a template that can be replaced with the value associated with that key. Templates are portions of a string that start with ${ and end with }. The parameters secret_parameters and task_settings are checked for keys for template replacement. Template replacement is supported in the request, headers, url, and query_parameters. If the definition (key) is not found for a template, an error message is returned. Given an endpoint definition like the following:

PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<some api key>"
    },
    "url": "...endpoints.huggingface.cloud/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": "{\"input\": ${input}}",
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      }
    }
  }
}

to replace ${api_key}, the secret_parameters and task_settings are checked for a key named api_key.

NOTE: Templates should not be surrounded by quotes.

Pre-defined templates:
- ${input} refers to the array of input strings that comes from the input field of the subsequent inference requests.
- ${input_type} refers to the input type translation values.
- ${query} refers to the query field used specifically for reranking tasks.
- ${top_n} refers to the top_n field available when performing rerank requests.
- ${return_documents} refers to the return_documents field available when performing rerank requests.
- Parameters:
- fn: a function that initializes a builder to create the PutCustomRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putDeepseek

public PutDeepseekResponse putDeepseek(PutDeepseekRequest request) throws IOException, ElasticsearchException

Create a DeepSeek inference endpoint.

Create an inference endpoint to perform an inference task with the deepseek service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putDeepseek

public final PutDeepseekResponse putDeepseek(Function<PutDeepseekRequest.Builder, ObjectBuilder<PutDeepseekRequest>> fn) throws IOException, ElasticsearchException

Create a DeepSeek inference endpoint.

Create an inference endpoint to perform an inference task with the deepseek service.
- Parameters:
- fn: a function that initializes a builder to create the PutDeepseekRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putElasticsearch

public PutElasticsearchResponse putElasticsearch(PutElasticsearchRequest request) throws IOException, ElasticsearchException

Create an Elasticsearch inference endpoint.

Create an inference endpoint to perform an inference task with the elasticsearch service.

NOTE: Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings. If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putElasticsearch

public final PutElasticsearchResponse putElasticsearch(Function<PutElasticsearchRequest.Builder, ObjectBuilder<PutElasticsearchRequest>> fn) throws IOException, ElasticsearchException

Create an Elasticsearch inference endpoint.

Create an inference endpoint to perform an inference task with the elasticsearch service.

NOTE: Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings. If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Parameters:
- fn: a function that initializes a builder to create the PutElasticsearchRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putElser

public PutElserResponse putElser(PutElserRequest request) throws IOException, ElasticsearchException

Create an ELSER inference endpoint.

Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

NOTE: Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create the endpoint using the API if you want to customize the settings. The API request will automatically download and deploy the ELSER model if it isn't already downloaded.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putElser

public final PutElserResponse putElser(Function<PutElserRequest.Builder, ObjectBuilder<PutElserRequest>> fn) throws IOException, ElasticsearchException

Create an ELSER inference endpoint.

Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

NOTE: Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create the endpoint using the API if you want to customize the settings. The API request will automatically download and deploy the ELSER model if it isn't already downloaded.

NOTE: You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you are using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Parameters:
- fn: a function that initializes a builder to create the PutElserRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putGoogleaistudio

public PutGoogleaistudioResponse putGoogleaistudio(PutGoogleaistudioRequest request) throws IOException, ElasticsearchException

Create a Google AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the googleaistudio service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putGoogleaistudio

public final PutGoogleaistudioResponse putGoogleaistudio(Function<PutGoogleaistudioRequest.Builder, ObjectBuilder<PutGoogleaistudioRequest>> fn) throws IOException, ElasticsearchException

Create a Google AI Studio inference endpoint.

Create an inference endpoint to perform an inference task with the googleaistudio service.
- Parameters:
- fn: a function that initializes a builder to create the PutGoogleaistudioRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putGooglevertexai

public PutGooglevertexaiResponse putGooglevertexai(PutGooglevertexaiRequest request) throws IOException, ElasticsearchException

Create a Google Vertex AI inference endpoint.

Create an inference endpoint to perform an inference task with the googlevertexai service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putGooglevertexai

public final PutGooglevertexaiResponse putGooglevertexai(Function<PutGooglevertexaiRequest.Builder, ObjectBuilder<PutGooglevertexaiRequest>> fn) throws IOException, ElasticsearchException

Create a Google Vertex AI inference endpoint.

Create an inference endpoint to perform an inference task with the googlevertexai service.
- Parameters:
- fn: a function that initializes a builder to create the PutGooglevertexaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putHuggingFace

public PutHuggingFaceResponse putHuggingFace(PutHuggingFaceRequest request) throws IOException, ElasticsearchException

Create a Hugging Face inference endpoint.

Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, completion, and chat_completion.

To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.

For Elastic's text_embedding task: the selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small

For Elastic's chat_completion and completion tasks: the selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure it supports the OpenAI API and includes the /v1/chat/completions part in the URL. Then, copy the full endpoint URL for use. Recommended models for the chat_completion and completion tasks:
- Mistral-7B-Instruct-v0.2
- QwQ-32B
- Phi-3-mini-128k-instruct

For Elastic's rerank task: the selected model must support the sentence-ranking task and expose the OpenAI API. Hugging Face supports only dedicated (not serverless) endpoints for Rerank so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:
- bge-reranker-base
- jina-reranker-v1-turbo-en-GGUF
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putHuggingFace

public final PutHuggingFaceResponse putHuggingFace(Function<PutHuggingFaceRequest.Builder, ObjectBuilder<PutHuggingFaceRequest>> fn) throws IOException, ElasticsearchException

Create a Hugging Face inference endpoint.

Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, completion, and chat_completion.

To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.

For Elastic's text_embedding task: the selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small

For Elastic's chat_completion and completion tasks: the selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure it supports the OpenAI API and includes the /v1/chat/completions part in the URL. Then, copy the full endpoint URL for use. Recommended models for the chat_completion and completion tasks:
- Mistral-7B-Instruct-v0.2
- QwQ-32B
- Phi-3-mini-128k-instruct

For Elastic's rerank task: the selected model must support the sentence-ranking task and expose the OpenAI API. Hugging Face supports only dedicated (not serverless) endpoints for Rerank so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:
- bge-reranker-base
- jina-reranker-v1-turbo-en-GGUF
- Parameters:
- fn: a function that initializes a builder to create the PutHuggingFaceRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putJinaai

public PutJinaaiResponse putJinaai(PutJinaaiRequest request) throws IOException, ElasticsearchException

Create a JinaAI inference endpoint.

Create an inference endpoint to perform an inference task with the jinaai service.

To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putJinaai

public final PutJinaaiResponse putJinaai(Function<PutJinaaiRequest.Builder, ObjectBuilder<PutJinaaiRequest>> fn) throws IOException, ElasticsearchException

Create a JinaAI inference endpoint.

Create an inference endpoint to perform an inference task with the jinaai service.

To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.
- Parameters:
- fn: a function that initializes a builder to create the PutJinaaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putLlama

public PutLlamaResponse putLlama(PutLlamaRequest request) throws IOException, ElasticsearchException

Create a Llama inference endpoint.

Create an inference endpoint to perform an inference task with the llama service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putLlama

public final PutLlamaResponse putLlama(Function<PutLlamaRequest.Builder, ObjectBuilder<PutLlamaRequest>> fn) throws IOException, ElasticsearchException

Create a Llama inference endpoint.

Create an inference endpoint to perform an inference task with the llama service.
- Parameters:
- fn: a function that initializes a builder to create the PutLlamaRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putMistral

public PutMistralResponse putMistral(PutMistralRequest request) throws IOException, ElasticsearchException

Create a Mistral inference endpoint.

Create an inference endpoint to perform an inference task with the mistral service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putMistral

public final PutMistralResponse putMistral(Function<PutMistralRequest.Builder, ObjectBuilder<PutMistralRequest>> fn) throws IOException, ElasticsearchException

Create a Mistral inference endpoint.

Create an inference endpoint to perform an inference task with the mistral service.
- Parameters:
- fn: a function that initializes a builder to create the PutMistralRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putOpenai

public PutOpenaiResponse putOpenai(PutOpenaiRequest request) throws IOException, ElasticsearchException

Create an OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the openai service or OpenAI-compatible APIs.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putOpenai

public final PutOpenaiResponse putOpenai(Function<PutOpenaiRequest.Builder, ObjectBuilder<PutOpenaiRequest>> fn) throws IOException, ElasticsearchException

Create an OpenAI inference endpoint.

Create an inference endpoint to perform an inference task with the openai service or OpenAI-compatible APIs.
- Parameters:
- fn: a function that initializes a builder to create the PutOpenaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putVoyageai

public PutVoyageaiResponse putVoyageai(PutVoyageaiRequest request) throws IOException, ElasticsearchException

Create a VoyageAI inference endpoint.

Create an inference endpoint to perform an inference task with the voyageai service.

Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putVoyageai

public final PutVoyageaiResponse putVoyageai(Function<PutVoyageaiRequest.Builder, ObjectBuilder<PutVoyageaiRequest>> fn) throws IOException, ElasticsearchException

Create a VoyageAI inference endpoint.

Create an inference endpoint to perform an inference task with the voyageai service.

Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
- Parameters:
- fn: a function that initializes a builder to create the PutVoyageaiRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putWatsonx

public PutWatsonxResponse putWatsonx(PutWatsonxRequest request) throws IOException, ElasticsearchException

Create a Watsonx inference endpoint.

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
putWatsonx

public final PutWatsonxResponse putWatsonx(Function<PutWatsonxRequest.Builder, ObjectBuilder<PutWatsonxRequest>> fn) throws IOException, ElasticsearchException

Create a Watsonx inference endpoint.

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.
- Parameters:
- fn: a function that initializes a builder to create the PutWatsonxRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
rerank

public RerankResponse rerank(RerankRequest request) throws IOException, ElasticsearchException

Perform reranking inference on the service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
rerank

public final RerankResponse rerank(Function<RerankRequest.Builder, ObjectBuilder<RerankRequest>> fn) throws IOException, ElasticsearchException

Perform reranking inference on the service.
- Parameters:
- fn: a function that initializes a builder to create the RerankRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
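A minimal usage sketch that ranks a few documents against a query (the endpoint ID is hypothetical):

RerankResponse response = client.inference().rerank(r -> r
    .inferenceId("my-rerank-endpoint") // hypothetical endpoint ID
    .query("What is Elasticsearch?")
    .input("Elasticsearch is a distributed search engine.",
           "Lucene is a Java search library.",
           "Kibana visualizes Elasticsearch data.")
);
System.out.println(response);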
- 
sparseEmbedding

public SparseEmbeddingResponse sparseEmbedding(SparseEmbeddingRequest request) throws IOException, ElasticsearchException

Perform sparse embedding inference on the service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
sparseEmbedding

public final SparseEmbeddingResponse sparseEmbedding(Function<SparseEmbeddingRequest.Builder, ObjectBuilder<SparseEmbeddingRequest>> fn) throws IOException, ElasticsearchException

Perform sparse embedding inference on the service.
- Parameters:
- fn: a function that initializes a builder to create the SparseEmbeddingRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
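A minimal usage sketch, for example against an ELSER-backed endpoint (the endpoint ID is hypothetical):

SparseEmbeddingResponse response = client.inference().sparseEmbedding(s -> s
    .inferenceId("my-elser-endpoint") // hypothetical endpoint ID
    .input("The quick brown fox jumps over the lazy dog")
);
System.out.println(response);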
- 
streamCompletion

public BinaryResponse streamCompletion(StreamCompletionRequest request) throws IOException, ElasticsearchException

Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
streamCompletion

public final BinaryResponse streamCompletion(Function<StreamCompletionRequest.Builder, ObjectBuilder<StreamCompletionRequest>> fn) throws IOException, ElasticsearchException

Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.
- Parameters:
- fn: a function that initializes a builder to create the StreamCompletionRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
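A minimal sketch that prints the raw event stream as it arrives (the endpoint ID is hypothetical; "client" is an existing ElasticsearchClient):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

BinaryResponse response = client.inference().streamCompletion(s -> s
    .inferenceId("my-completion-endpoint") // hypothetical endpoint ID
    .input("Write a haiku about search engines.")
);
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(response.content(), StandardCharsets.UTF_8))) {
    reader.lines().forEach(System.out::println); // lines of the SSE payload
}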
- 
textEmbedding

public TextEmbeddingResponse textEmbedding(TextEmbeddingRequest request) throws IOException, ElasticsearchException

Perform text embedding inference on the service.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
textEmbedding

public final TextEmbeddingResponse textEmbedding(Function<TextEmbeddingRequest.Builder, ObjectBuilder<TextEmbeddingRequest>> fn) throws IOException, ElasticsearchException

Perform text embedding inference on the service.
- Parameters:
- fn: a function that initializes a builder to create the TextEmbeddingRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
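A minimal usage sketch (the endpoint ID is hypothetical):

TextEmbeddingResponse response = client.inference().textEmbedding(t -> t
    .inferenceId("my-embedding-endpoint") // hypothetical endpoint ID
    .input("Elasticsearch inference API")
);
System.out.println(response);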
- 
update

public UpdateInferenceResponse update(UpdateInferenceRequest request) throws IOException, ElasticsearchException

Update an inference endpoint.

Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
- 
update

public final UpdateInferenceResponse update(Function<UpdateInferenceRequest.Builder, ObjectBuilder<UpdateInferenceRequest>> fn) throws IOException, ElasticsearchException

Update an inference endpoint.

Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
- Parameters:
- fn: a function that initializes a builder to create the UpdateInferenceRequest
- Throws:
- IOException
- ElasticsearchException
- See Also:
 
 