Class ChatCompletionUnifiedRequest
- All Implemented Interfaces:
JsonpSerializable
The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type for openai and elastic inference services.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the openai service or the elastic service, use the Chat completion inference API.
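The sketch below shows one way to build a unified chat completion request and read the streamed response with the Java client. It is illustrative only: the local transport setup, the "openai-chat" inference endpoint id, the chatCompletionUnified method on the client's inference namespace, and the nested body builder names (messages, role, content and its string variant) are assumptions for the example, not definitions from this class.

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.inference.ChatCompletionUnifiedRequest;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.endpoints.BinaryResponse;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class UnifiedChatCompletionSketch {
    public static void main(String[] args) throws Exception {
        // Assumed local cluster and a pre-created chat_completion inference endpoint.
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);

        ChatCompletionUnifiedRequest request = ChatCompletionUnifiedRequest.of(r -> r
            .inferenceId("openai-chat")              // hypothetical inference endpoint id
            .timeout(t -> t.time("30s"))
            .chatCompletionRequest(c -> c
                .messages(m -> m
                    .role("user")
                    .content(mc -> mc.string("Say hello."))))); // content variant name is an assumption

        // The endpoint streams server-sent events; read the raw response line by line.
        BinaryResponse response = client.inference().chatCompletionUnified(request);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(response.content(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);            // each event carries a JSON chunk of the answer
            }
        }
        transport.close();
    }
}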
-
Nested Class Summary
Nested classes/interfaces inherited from class co.elastic.clients.elasticsearch._types.RequestBase
RequestBase.AbstractBuilder<BuilderT extends RequestBase.AbstractBuilder<BuilderT>>
-
Field Summary
Fields
static final JsonpDeserializer<ChatCompletionUnifiedRequest> _DESERIALIZER
static final Endpoint<ChatCompletionUnifiedRequest, BinaryResponse, ErrorResponse> _ENDPOINT
  Endpoint "inference.chat_completion_unified".
-
Method Summary
Methods
final RequestChatCompletion chatCompletionRequest()
  Required - Request body.
protected static JsonpDeserializer<ChatCompletionUnifiedRequest> createChatCompletionUnifiedRequestDeserializer()
final String inferenceId()
  Required - The inference Id
static ChatCompletionUnifiedRequest of
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
  Serialize this value to JSON.
final Time timeout()
  Specifies the amount of time to wait for the inference request to complete.
Methods inherited from class co.elastic.clients.elasticsearch._types.RequestBase
toString
-
Field Details
-
_DESERIALIZER
-
_ENDPOINT
Endpoint "inference.chat_completion_unified
".
-
-
Method Details
-
of
-
inferenceId
Required - The inference Id
API name: inference_id
-
timeout
Specifies the amount of time to wait for the inference request to complete.
API name: timeout
-
chatCompletionRequest
Required - Request body.
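For reference, a minimal sketch of setting the members documented in this section through the lambda builder; inferenceId and the request body are required, timeout is optional. The endpoint id and the nested message builder names are assumptions made for the example.

import co.elastic.clients.elasticsearch._types.Time;
import co.elastic.clients.elasticsearch.inference.ChatCompletionUnifiedRequest;

class BuildRequestSketch {
    static ChatCompletionUnifiedRequest build() {
        return ChatCompletionUnifiedRequest.of(r -> r
            .inferenceId("openai-chat")               // required - the inference Id (hypothetical value)
            .timeout(Time.of(t -> t.time("30s")))     // optional - how long to wait for the inference call
            .chatCompletionRequest(c -> c             // required - request body
                .messages(m -> m
                    .role("user")
                    .content(mc -> mc.string("Hello"))))); // content variant name is an assumption
    }
}
-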
serialize
Serialize this value to JSON.
- Specified by:
serialize in interface JsonpSerializable
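As a hedged illustration, serialize can be paired with a JsonpMapper and a jakarta.json generator to render the request as a JSON string, e.g. for logging or tests; the helper class and method names below are made up for the example.

import co.elastic.clients.elasticsearch.inference.ChatCompletionUnifiedRequest;
import co.elastic.clients.json.JsonpMapper;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import jakarta.json.stream.JsonGenerator;

import java.io.StringWriter;

class RequestJsonSketch {
    // Hypothetical helper: render a request as JSON text.
    static String toJson(ChatCompletionUnifiedRequest request) {
        JsonpMapper mapper = new JacksonJsonpMapper();
        StringWriter out = new StringWriter();
        try (JsonGenerator generator = mapper.jsonProvider().createGenerator(out)) {
            request.serialize(generator, mapper);   // JsonpSerializable#serialize
        }
        return out.toString();
    }
}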
-
createChatCompletionUnifiedRequestDeserializer
protected static JsonpDeserializer<ChatCompletionUnifiedRequest> createChatCompletionUnifiedRequestDeserializer()
-