Interface ChatLanguageModel
public interface ChatLanguageModel
Represents a language model that has a chat API.
Method Summary

ChatResponse chat(ChatRequest chatRequest)
    This is the main API to interact with the chat model.

static void validate(ChatRequestParameters parameters)

static void validate(ToolChoice toolChoice)

static void validate(ResponseFormat responseFormat)

String chat(String userMessage)

ChatRequestParameters defaultRequestParameters()

Set<Capability> supportedCapabilities()

String generate(String userMessage)
    Generates a response from the model based on a message from a user.

Response<AiMessage> generate(ChatMessage... messages)
    Generates a response from the model based on a sequence of messages.

abstract Response<AiMessage> generate(List<ChatMessage> messages)
    Generates a response from the model based on a sequence of messages.

Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
    Generates a response from the model based on a list of messages and a list of tool specifications.

Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
    Generates a response from the model based on a list of messages and a single tool specification.
-
Method Detail
-
chat
ChatResponse chat(ChatRequest chatRequest)
This is the main API to interact with the chat model. All the existing generate(...) methods (see below) will be deprecated and removed before the 1.0.0 release.
A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it. It should be removed once that occurs.
Parameters:
    chatRequest - a ChatRequest, containing all the inputs to the LLM
Returns:
    a ChatResponse, containing all the outputs from the LLM
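The temporary default implementation described above (a default chat(ChatRequest) that delegates to the legacy generate(...) method) can be sketched as follows. The record and interface shapes here are simplified stand-ins for illustration, not the real LangChain4j types:

```java
import java.util.List;

public class ChatBridgeSketch {

    // Simplified stand-ins for LangChain4j's AiMessage, Response,
    // ChatRequest and ChatResponse (the real types are richer).
    record AiMessage(String text) {}
    record Response<T>(T content) {}
    record ChatRequest(List<String> messages) {}
    record ChatResponse(AiMessage aiMessage) {}

    interface ChatModel {
        // Legacy abstract method that every existing implementation provides.
        Response<AiMessage> generate(List<String> messages);

        // New primary entry point; the temporary default delegates to
        // generate(...) so old implementations keep working unchanged.
        default ChatResponse chat(ChatRequest request) {
            Response<AiMessage> response = generate(request.messages());
            return new ChatResponse(response.content());
        }
    }

    public static void main(String[] args) {
        // An "implementation" that only knows the legacy method still
        // supports the new chat(...) API via the default bridge.
        ChatModel model = messages ->
                new Response<>(new AiMessage("echo: " + messages.get(0)));
        ChatResponse response = model.chat(new ChatRequest(List.of("Hello")));
        System.out.println(response.aiMessage().text()); // echo: Hello
    }
}
```

Once every implementation overrides chat(ChatRequest) directly, the default body can be dropped, which is exactly the removal plan the documentation describes.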
-
validate
static void validate(ChatRequestParameters parameters)
-
validate
static void validate(ToolChoice toolChoice)
-
validate
static void validate(ResponseFormat responseFormat)
-
defaultRequestParameters
ChatRequestParameters defaultRequestParameters()
-
supportedCapabilities
Set<Capability> supportedCapabilities()
-
generate
String generate(String userMessage)
Generates a response from the model based on a message from a user. This is a convenience method that receives the message from a user as a String and returns only the generated response.
Parameters:
    userMessage - The message from the user.
Returns:
    The response generated by the model.
-
generate
Response<AiMessage> generate(ChatMessage... messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
    messages - An array of messages.
Returns:
    The response generated by the model.
-
generate
abstract Response<AiMessage> generate(List<ChatMessage> messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
    messages - A list of messages.
Returns:
    The response generated by the model.
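The typical message order described above (an optional System message followed by alternating User and AI turns) can be illustrated with a minimal sketch; SystemMessage, UserMessage and AiMessage below are simplified stand-ins for the real ChatMessage implementations:

```java
import java.util.List;

public class MessageOrderSketch {

    // Simplified stand-ins for the real ChatMessage hierarchy.
    sealed interface ChatMessage permits SystemMessage, UserMessage, AiMessage {}
    record SystemMessage(String text) implements ChatMessage {}
    record UserMessage(String text) implements ChatMessage {}
    record AiMessage(String text) implements ChatMessage {}

    public static void main(String[] args) {
        // System (optional) - User - AI - User ...
        List<ChatMessage> messages = List.of(
                new SystemMessage("You are a concise assistant."),
                new UserMessage("What is the capital of France?"),
                new AiMessage("Paris."),
                new UserMessage("And of Germany?"));
        System.out.println(messages.size()); // 4
    }
}
```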
-
generate
Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
    messages - A list of messages.
    toolSpecifications - A list of tools that the model is allowed to execute.
Returns:
    The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
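Because the returned AiMessage may carry either a textual answer or a tool execution request, callers typically branch on its contents. A minimal sketch of that dispatch, using simplified stand-in records rather than the real LangChain4j types:

```java
import java.util.List;

public class ToolResponseSketch {

    // Stand-ins for LangChain4j's ToolExecutionRequest and AiMessage
    // (hypothetical simplified shapes for illustration only).
    record ToolExecutionRequest(String name, String arguments) {}

    record AiMessage(String text, List<ToolExecutionRequest> toolExecutionRequests) {
        boolean hasToolExecutionRequests() {
            return toolExecutionRequests != null && !toolExecutionRequests.isEmpty();
        }
    }

    // Either reports the tool the model asked for, or returns the plain text.
    static String dispatch(AiMessage message) {
        if (message.hasToolExecutionRequests()) {
            return "tool requested: " + message.toolExecutionRequests().get(0).name();
        }
        return message.text();
    }

    public static void main(String[] args) {
        AiMessage toolCall = new AiMessage(null,
                List.of(new ToolExecutionRequest("getWeather", "{\"city\":\"Berlin\"}")));
        System.out.println(dispatch(toolCall)); // tool requested: getWeather
        System.out.println(dispatch(new AiMessage("It is sunny.", List.of())));
    }
}
```

In a real application the tool branch would execute the named tool, append the result to the message list, and call the model again.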
-
generate
Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
    messages - A list of messages.
    toolSpecification - The specification of a tool that must be executed.
Returns:
    The response generated by the model. AiMessage contains a request to execute the specified tool.