Interface StreamingChatLanguageModel

public interface StreamingChatLanguageModel

Represents a language model that has a chat API and can stream a response one token at a time.
-
-
Method Summary
- void chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
  This is the main API to interact with the chat model.
- void chat(String userMessage, StreamingChatResponseHandler handler)
- void chat(List<ChatMessage> messages, StreamingChatResponseHandler handler)
- List<ChatModelListener> listeners()
- void doChat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
- ChatRequestParameters defaultRequestParameters()
- Set<Capability> supportedCapabilities()
- void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
  Deprecated. Generates a response from the model based on a message from a user.
- void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
  Deprecated. Generates a response from the model based on a message from a user.
- abstract void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
  Deprecated. Generates a response from the model based on a sequence of messages.
- void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
  Deprecated. Generates a response from the model based on a list of messages and a list of tool specifications.
- void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
  Deprecated. Generates a response from the model based on a list of messages and a single tool specification.
-
Method Detail
-
chat
void chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
This is the main API to interact with the chat model.
A temporary default implementation of this method is necessary until all StreamingChatLanguageModel implementations adopt it. It should be removed once that occurs.
- Parameters:
chatRequest - a ChatRequest, containing all the inputs to the LLM
handler - a StreamingChatResponseHandler that will handle the streaming response from the LLM
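The streaming contract can be sketched with a self-contained stand-in; the callback names mirror this page, but the `Handler` interface, `StreamingSketch` class, and the canned reply are hypothetical, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingSketch {

    // Mirrors the callback shape of StreamingChatResponseHandler:
    // partial tokens arrive one at a time, then a final completion or an error.
    interface Handler {
        void onPartialResponse(String token);
        void onCompleteResponse(String fullText);
        void onError(Throwable error);
    }

    // Hypothetical model that "streams" a canned reply token by token.
    static void chat(String userMessage, Handler handler) {
        String reply = "Hello from the model";
        try {
            for (String token : reply.split(" ")) {
                handler.onPartialResponse(token); // one token at a time
            }
            handler.onCompleteResponse(reply);    // full response at the end
        } catch (RuntimeException e) {
            handler.onError(e);
        }
    }

    public static void main(String[] args) {
        List<String> tokens = new ArrayList<>();
        chat("Hi", new Handler() {
            @Override public void onPartialResponse(String token) { tokens.add(token); }
            @Override public void onCompleteResponse(String fullText) {
                System.out.println("complete: " + fullText);
            }
            @Override public void onError(Throwable error) { error.printStackTrace(); }
        });
        System.out.println("tokens received: " + tokens.size());
    }
}
```

The key point of the contract is ordering: every partial token is delivered before the single completion (or error) callback, so a handler can accumulate tokens and treat the completion as a commit point.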
-
chat
void chat(String userMessage, StreamingChatResponseHandler handler)
A convenience overload that wraps the given String into a user message and delegates to chat(ChatRequest, StreamingChatResponseHandler).
-
chat
void chat(List<ChatMessage> messages, StreamingChatResponseHandler handler)
A convenience overload that accepts the messages directly and delegates to chat(ChatRequest, StreamingChatResponseHandler).
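The convenience-overload pattern above can be sketched with minimal stand-in types; the record types, `Handler` interface, and echo behavior here are hypothetical, only the delegation shape is the point:

```java
import java.util.List;

public class OverloadSketch {

    // Stand-ins for the message/request types documented on this page.
    record ChatMessage(String text) {}
    record ChatRequest(List<ChatMessage> messages) {}

    interface Handler {
        void onCompleteResponse(String fullText);
    }

    // Analogous to the main entry point: chat(ChatRequest, handler).
    static void chat(ChatRequest request, Handler handler) {
        handler.onCompleteResponse("echo: " + request.messages().get(0).text());
    }

    // Convenience overload: wraps a plain String into a user message,
    // then delegates to the ChatRequest-based method.
    static void chat(String userMessage, Handler handler) {
        chat(new ChatRequest(List.of(new ChatMessage(userMessage))), handler);
    }
}
```

Keeping the overloads as thin wrappers means implementations only need to provide the `ChatRequest`-based behavior, and all entry points stay consistent.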
-
listeners
List<ChatModelListener> listeners()
Returns the ChatModelListeners attached to this model.
-
doChat
void doChat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
The method that implementations override to perform the actual streaming call; see chat(ChatRequest, StreamingChatResponseHandler).
-
defaultRequestParameters
ChatRequestParameters defaultRequestParameters()
Returns the default ChatRequestParameters used when they are not specified in the ChatRequest.
-
supportedCapabilities
Set<Capability> supportedCapabilities()
Returns the set of Capability values supported by this model.
-
generate
@Deprecated(forRemoval = true) void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
-
generate
@Deprecated(forRemoval = true) void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
-
generate
@Deprecated(forRemoval = true) abstract void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
handler - The handler for streaming the response.
-
generate
@Deprecated(forRemoval = true) void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute.
handler - The handler for streaming the response.
-
generate
@Deprecated(forRemoval = true) void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
- Parameters:
messages - A list of messages.
toolSpecification - The specification of a tool that must be executed.
handler - The handler for streaming the response.
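The forced-tool behavior described above (`tool_choice=ANY` versus the provider's default of letting the model decide) can be illustrated with a stand-in; the `ToolChoice` enum, record types, and decision logic here are hypothetical, only the contract is the point:

```java
import java.util.List;

public class ToolChoiceSketch {

    // ANY forces the model to call one of the supplied tools;
    // AUTO lets it answer with plain text instead.
    enum ToolChoice { AUTO, ANY }

    record ToolSpecification(String name) {}

    // The response is either text or a tool-execution request, never both.
    record AiMessage(String text, String toolToExecute) {
        boolean hasToolExecutionRequest() { return toolToExecute != null; }
    }

    // Hypothetical model: under ANY it must pick a tool;
    // under AUTO it may return a plain text answer.
    static AiMessage generate(String userMessage,
                              List<ToolSpecification> tools,
                              ToolChoice choice) {
        if (choice == ToolChoice.ANY) {
            return new AiMessage(null, tools.get(0).name());
        }
        return new AiMessage("plain answer", null);
    }
}
```

This mirrors the distinction between the two deprecated tool-aware `generate` overloads: the list-of-tools variant lets the model choose whether to call a tool, while the single-tool variant forces a tool call.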
-