All Classes and Interfaces

AI Services is a high-level LangChain4j API for interacting with ChatModel and StreamingChatModel.
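The core idea behind AI Services is declaring a plain Java interface and getting a working implementation back. The following is an illustrative sketch of that pattern using a JDK dynamic proxy, not the LangChain4j implementation; the `Assistant` interface and the echoing handler are stand-ins for a real chat-model call.

```java
import java.lang.reflect.Proxy;

// Hypothetical service interface: in a real AI Service, the method signature
// and its annotations define the prompt sent to the chat model.
interface Assistant {
    String chat(String userMessage);
}

class AiServiceSketch {
    static Assistant create() {
        return (Assistant) Proxy.newProxyInstance(
                Assistant.class.getClassLoader(),
                new Class<?>[] {Assistant.class},
                // The invocation handler is where a real AI service would build
                // the prompt and forward it to a ChatModel; here we just echo.
                (proxy, method, args) -> "echo: " + args[0]);
    }
}
```

The caller only ever sees the interface, which is what makes the API "high-level": the prompt construction and model invocation are hidden behind the proxy.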
Parameters for creating an AiServiceTokenStream.
Represents a chain step that takes an input and produces an output.
Allows access to the ChatMemory of any AI service that extends it.
Provides instances of ChatMemory.
Represents the result of a classification.
DocumentLoader implementation for loading documents using a ClassPathSource.
Specialization of a DocumentSource that knows how to read from the classpath.
A chain for conversing with a specified ChatModel while maintaining a memory of the conversation.
A chain for conversing with a specified ChatModel based on the information retrieved by a specified ContentRetriever.
Splits the provided Document into characters and attempts to fit as many characters as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into lines and attempts to fit as many lines as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into paragraphs and attempts to fit as many paragraphs as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into parts using the provided regex and attempts to fit as many parts as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into sentences and attempts to fit as many sentences as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into words and attempts to fit as many words as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
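All of the splitters above share one strategy: break the document into parts (characters, lines, paragraphs, regex matches, sentences, words) and greedily pack as many parts as fit into each segment. The sketch below illustrates that greedy packing over plain strings; it is not the LangChain4j source, and the `joiner` parameter is a stand-in for however parts are rejoined.

```java
import java.util.ArrayList;
import java.util.List;

class GreedySplitter {
    // Packs parts into segments of at most maxSegmentSize characters.
    static List<String> split(List<String> parts, int maxSegmentSize, String joiner) {
        List<String> segments = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String part : parts) {
            // Cost of appending this part (plus a joiner if the segment is non-empty).
            int extra = current.length() == 0 ? part.length()
                                              : joiner.length() + part.length();
            if (current.length() + extra > maxSegmentSize && current.length() > 0) {
                segments.add(current.toString()); // segment is full; start a new one
                current.setLength(0);
            }
            if (current.length() > 0) current.append(joiner);
            // Note: a single part longer than maxSegmentSize becomes its own
            // oversized segment here; the real splitters recurse into sub-splitters.
            current.append(part);
        }
        if (current.length() > 0) segments.add(current.toString());
        return segments;
    }
}
```

The only difference between the `DocumentBy*Splitter` variants is the unit of splitting; the packing loop is conceptually the same.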
A TextClassifier that uses an EmbeddingModel and predefined examples to perform classification.
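A minimal sketch of embedding-based classification, assuming the simplest scoring rule (nearest single example by cosine similarity); the real classifier's aggregation over examples may differ, and embeddings are modeled as plain float arrays rather than an EmbeddingModel call.

```java
import java.util.List;
import java.util.Map;

class EmbeddingClassifierSketch {
    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Returns the label whose predefined example embedding is closest to the
    // embedding of the text being classified.
    static String classify(float[] textEmbedding, Map<String, List<float[]>> examplesByLabel) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (var entry : examplesByLabel.entrySet()) {
            for (float[] example : entry.getValue()) {
                double score = cosine(textEmbedding, example);
                if (score > bestScore) {
                    bestScore = score;
                    best = entry.getKey();
                }
            }
        }
        return best;
    }
}
```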
Base class for hierarchical document splitters.
An EmbeddingStore that stores embeddings in memory.
Utility class responsible for resolving variable names and values for prompt templates by leveraging method parameters and their annotations.
The value of a method parameter annotated with @MemoryId will be used to find the memory belonging to that user/conversation.
This chat memory operates as a sliding window of MessageWindowChatMemory.maxMessages messages.
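The sliding-window behavior can be sketched as a bounded deque: once more than maxMessages messages are held, the oldest are evicted. This is an illustration of the behavior only, with messages modeled as plain strings rather than ChatMessage objects.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class SlidingWindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    SlidingWindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}
```

Only the most recent maxMessages messages are ever sent to the model, which keeps the prompt within a predictable size.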
When a method in an AI Service is annotated with @Moderate, each invocation of that method calls not only the LLM but also, in parallel, the moderation model (which must be provided when the AI Service is constructed).
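The parallel-call-plus-flag-check behavior can be sketched with CompletableFuture. This is not the LangChain4j implementation; both "models" are stand-in functions, and the exception type is illustrative (the library throws its own moderation exception).

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;
import java.util.function.Predicate;

class ModerationSketch {
    static String chat(String userMessage,
                       Function<String, String> llm,      // stand-in for the chat model
                       Predicate<String> isFlagged) {     // stand-in for the moderation model
        // Launch both calls concurrently, as described for @Moderate.
        CompletableFuture<String> answer =
                CompletableFuture.supplyAsync(() -> llm.apply(userMessage));
        CompletableFuture<Boolean> flagged =
                CompletableFuture.supplyAsync(() -> isFlagged.test(userMessage));
        if (flagged.join()) {
            throw new IllegalStateException("content was flagged by the moderation model");
        }
        return answer.join();
    }
}
```

Running moderation in parallel means a clean request pays no extra latency beyond the slower of the two calls.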
Thrown when content moderation fails, i.e., when content is flagged by the moderation model.
Represents the result of an AI Service invocation.
Represents a classification label with its score.
Specifies either a complete system message (prompt) or a system message template to be used each time an AI service is invoked.
Classifies a given text based on a set of labels.
Represents a token stream from the model to which you can subscribe and receive updates when a new partial response (usually a single token) is available, when the model finishes streaming, or when an error occurs during streaming.
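The subscription model described above can be sketched as a small callback registry: the caller registers handlers for partial responses and completion, then the stream pushes tokens as they arrive. This is an illustration of the pattern, not the LangChain4j API; method names and the synchronous `start` driver are stand-ins.

```java
import java.util.List;
import java.util.function.Consumer;

class TokenStreamSketch {
    private Consumer<String> onPartial = token -> {};
    private Runnable onComplete = () -> {};

    // Subscribe to each partial response (usually a single token).
    TokenStreamSketch onPartialResponse(Consumer<String> handler) {
        this.onPartial = handler;
        return this;
    }

    // Subscribe to the end-of-stream signal.
    TokenStreamSketch onCompleteResponse(Runnable handler) {
        this.onComplete = handler;
        return this;
    }

    // Simulates the model streaming a reply token by token.
    void start(List<String> tokens) {
        for (String token : tokens) {
            onPartial.accept(token);
        }
        onComplete.run();
    }
}
```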
This chat memory operates as a sliding window of TokenWindowChatMemory.maxTokens tokens.
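The token-bounded window differs from the message-bounded one only in its eviction rule: the oldest messages are dropped until the estimated token total fits the budget. In this sketch the token count is a crude stand-in (whitespace-separated words), whereas the real memory uses a proper token count estimator.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class TokenWindowSketch {
    private final int maxTokens;
    private final Deque<String> messages = new ArrayDeque<>();

    TokenWindowSketch(int maxTokens) {
        this.maxTokens = maxTokens;
    }

    // Stand-in token counter; the real memory delegates to a tokenizer.
    static int estimateTokens(String message) {
        return message.split("\\s+").length;
    }

    void add(String message) {
        messages.addLast(message);
        // Evict oldest-first until the window fits the token budget.
        while (totalTokens() > maxTokens && messages.size() > 1) {
            messages.removeFirst();
        }
    }

    int totalTokens() {
        return messages.stream().mapToInt(TokenWindowSketch::estimateTokens).sum();
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}
```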
Represents the execution of a tool, including the request and the result.
A low-level executor/handler of a ToolExecutionRequest.
A tool provider.
Specifies either a complete user message or a user message template to be used each time an AI service is invoked.
The value of a method parameter annotated with @UserName will be injected into the field 'name' of a UserMessage.
When a parameter of a method in an AI Service is annotated with @V, it becomes a prompt template variable.