All Classes and Interfaces

AI Services provide a simpler and more flexible alternative to chains.
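The declarative idea behind AI Services — define an interface, get a working implementation backed by a model — can be sketched with a dynamic proxy. This is an illustrative sketch only, not the library's implementation: the real AiServices builder wires in chat models, memory, tools, and moderation, while here a plain `UnaryOperator<String>` stands in for the LLM.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.UnaryOperator;

public class AiServiceSketch {

    // A declarative "AI Service": the caller only defines this interface.
    public interface Assistant {
        String chat(String userMessage);
    }

    // Creates a proxy that forwards each method call's argument to the model.
    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> serviceInterface, UnaryOperator<String> model) {
        InvocationHandler handler = (Object proxy, Method method, Object[] args) ->
                model.apply((String) args[0]);
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[]{serviceInterface},
                handler);
    }
}
```

Usage: `Assistant a = AiServiceSketch.create(Assistant.class, msg -> callModel(msg));` — the interface method becomes a call to the model, which is the core of the "simpler alternative to chains" idea.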
Provides instances of ChatMemory.
A chain for conversing with a specified ChatLanguageModel while maintaining a memory of the conversation.
A chain for conversing with a specified ChatLanguageModel based on the information retrieved by a specified ContentRetriever.
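The retrieval-augmented chain described above can be reduced to: retrieve relevant text, prepend it to the question, and ask the model. The sketch below illustrates that flow only; the retriever here is a naive keyword filter standing in for a real ContentRetriever, the `UnaryOperator<String>` stands in for a ChatLanguageModel, and the real chain additionally maintains chat memory.

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

public class RetrievalChainSketch {

    private final List<String> documents;
    private final UnaryOperator<String> model; // stand-in for a chat model

    public RetrievalChainSketch(List<String> documents, UnaryOperator<String> model) {
        this.documents = documents;
        this.model = model;
    }

    public String execute(String question) {
        // Naive keyword "retriever": keep documents mentioning the question text.
        String context = documents.stream()
                .filter(d -> d.toLowerCase().contains(question.toLowerCase()))
                .collect(Collectors.joining("\n"));
        // Augment the prompt with the retrieved context before calling the model.
        String prompt = "Answer using this context:\n" + context
                + "\n\nQuestion: " + question;
        return model.apply(prompt);
    }
}
```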
Splits the provided Document into characters and attempts to fit as many characters as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into lines and attempts to fit as many lines as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into paragraphs and attempts to fit as many paragraphs as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into parts using the provided regex and attempts to fit as many parts as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into sentences and attempts to fit as many sentences as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
Splits the provided Document into words and attempts to fit as many words as possible into a single TextSegment, adhering to the limit set by maxSegmentSize.
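All of these splitters share one contract: cut the document at some boundary (character, line, paragraph, regex match, sentence, word) and pack as much as possible into each TextSegment without exceeding maxSegmentSize. A minimal sketch of the character-level case, ignoring overlap, token-based sizing, and the hierarchical fallback the real splitters support:

```java
import java.util.ArrayList;
import java.util.List;

public class CharacterSplitterSketch {

    // Cuts text into segments of at most maxSegmentSize characters.
    public static List<String> split(String text, int maxSegmentSize) {
        List<String> segments = new ArrayList<>();
        for (int start = 0; start < text.length(); start += maxSegmentSize) {
            int end = Math.min(start + maxSegmentSize, text.length());
            segments.add(text.substring(start, end));
        }
        return segments;
    }
}
```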
A TextClassifier that uses an EmbeddingModel and predefined examples to perform classification.
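The classification approach described above amounts to: embed one or more examples per label, then assign new text to the label whose example embedding is most similar. A sketch of that idea with cosine similarity, using toy 2-d vectors in place of a real EmbeddingModel:

```java
import java.util.Map;

public class EmbeddingClassifierSketch {

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Returns the label whose example embedding is closest to the input.
    public static String classify(double[] embedding, Map<String, double[]> labelExamples) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[]> e : labelExamples.entrySet()) {
            double score = cosine(embedding, e.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }
}
```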
Base class for hierarchical document splitters.
Extracts text from a given HTML document.
An EmbeddingStore that stores embeddings in memory.
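An in-memory embedding store reduces to a list of (embedding, text) pairs and a similarity scan. The sketch below uses a raw dot product as the similarity score (reasonable when embeddings are normalized) and returns a single best match; the real store returns scored matches and supports maxResults and minScore.

```java
import java.util.ArrayList;
import java.util.List;

public class InMemoryStoreSketch {

    record Entry(double[] embedding, String text) {}

    private final List<Entry> entries = new ArrayList<>();

    public void add(double[] embedding, String text) {
        entries.add(new Entry(embedding, text));
    }

    // Linear scan: return the stored text most similar to the query embedding.
    public String mostRelevant(double[] query) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Entry e : entries) {
            double dot = 0;
            for (int i = 0; i < query.length; i++) {
                dot += query[i] * e.embedding()[i];
            }
            if (dot > bestScore) {
                bestScore = dot;
                best = e.text();
            }
        }
        return best;
    }
}
```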
A tool that executes JavaScript code using the Judge0 service, hosted on RapidAPI.
The value of a method parameter annotated with @MemoryId will be used to find the memory belonging to that user/conversation.
This chat memory operates as a sliding window of MessageWindowChatMemory.maxMessages messages.
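The sliding-window behavior is simple to state precisely: append each new message, then evict the oldest ones until at most maxMessages remain. A self-contained sketch of that eviction policy (the real class can also pin the SystemMessage and persist messages through a ChatMemoryStore):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class MessageWindowSketch {

    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    public MessageWindowSketch(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // slide the window: drop the oldest message
        }
    }

    public List<String> messages() {
        return List.copyOf(messages);
    }
}
```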
When a method in the AI Service is annotated with @Moderate, each invocation of this method will call not only the LLM, but also the moderation model (which must be provided during the construction of the AI Service) in parallel.
Thrown when content moderation fails, i.e., when content is flagged by the moderation model.
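The parallel-call pattern behind @Moderate can be sketched with two concurrent futures: one for the LLM answer, one for the moderation verdict, failing the call if the input is flagged. Both model calls here are stand-in functions, and a plain `IllegalStateException` stands in for the real ModerationException.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class ModerationSketch {

    public static String chatWithModeration(String input,
                                            UnaryOperator<String> llm,
                                            Predicate<String> isFlagged) {
        // Kick off the LLM call and the moderation check in parallel.
        CompletableFuture<String> answer =
                CompletableFuture.supplyAsync(() -> llm.apply(input));
        CompletableFuture<Boolean> flagged =
                CompletableFuture.supplyAsync(() -> isFlagged.test(input));

        if (flagged.join()) {
            // Stand-in for throwing ModerationException.
            throw new IllegalStateException("Content flagged by moderation model");
        }
        return answer.join();
    }
}
```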
Represents a stream of tokens from the language model, to which you can subscribe and receive updates when a new token is available, when the language model finishes streaming, or when an error occurs during streaming.
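The subscription model can be sketched as a small fluent class: register handlers, then receive each token followed by a completion signal. This is only an illustration of the callback shape; the real stream is fed asynchronously by a streaming model, whereas `start(...)` here replays a fixed token list synchronously.

```java
import java.util.List;
import java.util.function.Consumer;

public class TokenStreamSketch {

    private Consumer<String> onNext = t -> {};
    private Runnable onComplete = () -> {};

    public TokenStreamSketch onNext(Consumer<String> handler) {
        this.onNext = handler;
        return this; // fluent chaining of subscriptions
    }

    public TokenStreamSketch onComplete(Runnable handler) {
        this.onComplete = handler;
        return this;
    }

    // Delivers each token to the subscriber, then signals completion.
    public void start(List<String> tokens) {
        tokens.forEach(onNext);
        onComplete.run();
    }
}
```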
This chat memory operates as a sliding window of TokenWindowChatMemory.maxTokens tokens.
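The token-based window differs from the message-based one only in its eviction criterion: oldest messages are dropped until the total token count fits the budget. In the sketch below a whitespace word count stands in for a real Tokenizer.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class TokenWindowSketch {

    private final int maxTokens;
    private final Deque<String> messages = new ArrayDeque<>();

    public TokenWindowSketch(int maxTokens) {
        this.maxTokens = maxTokens;
    }

    // Stand-in tokenizer: counts whitespace-separated words.
    static int countTokens(String message) {
        return message.split("\\s+").length;
    }

    public void add(String message) {
        messages.addLast(message);
        int total = messages.stream().mapToInt(TokenWindowSketch::countTokens).sum();
        // Evict oldest messages until the window fits the token budget.
        while (total > maxTokens && messages.size() > 1) {
            total -= countTokens(messages.removeFirst());
        }
    }

    public List<String> messages() {
        return List.copyOf(messages);
    }
}
```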
The value of a method parameter annotated with @UserName will be injected into the field 'name' of a UserMessage.
The values of method parameters annotated with @V, together with prompt templates defined by @UserMessage and @SystemMessage, are used to produce a message that will be sent to the LLM.
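At its core, the template mechanism replaces `{{variable}}` placeholders in the prompt text with the values of the annotated parameters. A minimal sketch of that substitution step (the real implementation compiles templates and reports missing variables):

```java
import java.util.Map;

public class PromptTemplateSketch {

    // Replaces each {{name}} placeholder with its value from the map.
    public static String apply(String template, Map<String, String> variables) {
        String result = template;
        for (Map.Entry<String, String> e : variables.entrySet()) {
            result = result.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return result;
    }
}
```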