Class DisabledChatLanguageModel
All Implemented Interfaces:
dev.langchain4j.model.chat.ChatLanguageModel
public class DisabledChatLanguageModel implements ChatLanguageModel
A ChatLanguageModel that throws a ModelDisabledException from all of its methods.
This can be used in tests, or in libraries that extend this one, to conditionally enable or disable functionality.
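The throwing behavior described above can be sketched in plain Java. Note that the types below are illustrative stand-ins, not the actual langchain4j classes (the real ones live in dev.langchain4j.model.chat and dev.langchain4j.model):

```java
// Illustrative sketch of the "disabled model" pattern.
// These are stand-in types, NOT the real langchain4j classes.

// Stand-in for the ChatLanguageModel interface.
interface ChatModel {
    String generate(String userMessage);
}

// Stand-in for ModelDisabledException.
class ModelDisabledException extends RuntimeException {
    ModelDisabledException(String message) {
        super(message);
    }
}

// Every method throws, mirroring DisabledChatLanguageModel's behavior.
class DisabledChatModel implements ChatModel {
    @Override
    public String generate(String userMessage) {
        throw new ModelDisabledException("ChatLanguageModel is disabled");
    }
}

public class DisabledModelDemo {
    public static void main(String[] args) {
        ChatModel model = new DisabledChatModel();
        try {
            model.generate("Hello");
        } catch (ModelDisabledException e) {
            // A caller (e.g. a test, or a library with a feature flag)
            // can catch the exception to detect that the model is disabled.
            System.out.println("Model call rejected: " + e.getMessage());
        }
    }
}
```

A library that conditionally enables chat functionality can hand out the disabled implementation instead of a real model and rely on the exception to signal that the feature is off.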
Constructor Summary
DisabledChatLanguageModel()
Method Summary

String generate(String userMessage)
Generates a response from the model based on a message from a user.

Response<AiMessage> generate(ChatMessage... messages)
Generates a response from the model based on a sequence of messages.

Response<AiMessage> generate(List<ChatMessage> messages)
Generates a response from the model based on a sequence of messages.

Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
Generates a response from the model based on a list of messages and a list of tool specifications.

Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
Generates a response from the model based on a list of messages and a single tool specification.
Methods inherited from interface dev.langchain4j.model.chat.ChatLanguageModel
chat, chat, chat, chat, defaultRequestParameters, doChat, listeners, supportedCapabilities, validate, validate, validate
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
-
Method Detail
-
generate
String generate(String userMessage)
Generates a response from the model based on a message from a user. This is a convenience method that receives the message from a user as a String and returns only the generated response.
Parameters:
userMessage - The message from the user.
Returns:
The response generated by the model.
-
generate
Response<AiMessage> generate(ChatMessage... messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
messages - An array of messages.
Returns:
The response generated by the model.
-
generate
Response<AiMessage> generate(List<ChatMessage> messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
messages - A list of messages.
Returns:
The response generated by the model.
-
generate
Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute.
Returns:
The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
-
generate
Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Parameters:
messages - A list of messages.
toolSpecification - The specification of a tool that must be executed.
Returns:
The response generated by the model. AiMessage contains a request to execute the specified tool.