Index
All Classes and Interfaces | All Packages
A
- apiKey() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  OpenAI API key (see the configuration sketch after section B)
B
- baseUrl() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Base URL of the OpenAI API
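The apiKey() and baseUrl() accessors above expose the connection settings of the extension. The following is a minimal sketch of reading them from an injected Langchain4jOpenAiConfig; the CDI injection and the accessors' exact return types are assumptions, as the index does not show them.

    import io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig;
    import io.quarkus.logging.Log;
    import jakarta.enterprise.context.ApplicationScoped;
    import jakarta.inject.Inject;

    @ApplicationScoped
    public class OpenAiConnectionInfo {

        // Assumption: the runtime config interface is exposed as an injectable bean.
        @Inject
        Langchain4jOpenAiConfig config;

        void logConnectionSettings() {
            // String concatenation keeps the sketch independent of the exact
            // return types (e.g. String vs Optional<String>).
            Log.info("OpenAI base URL: " + config.baseUrl());
            Log.info("API key configured: " + (config.apiKey() != null));
        }
    }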
C
- chatModel() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Chat model related settings (see the sketch at the end of this section)
- chatModel(Langchain4jOpenAiConfig) - Method in class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
- ChatModelConfig - Interface in io.quarkiverse.langchain4j.openai.runtime.config
- cleanUp(ShutdownContext) - Method in class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
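chatModel(Langchain4jOpenAiConfig) on OpenAiRecorder is the runtime hook that turns these settings into a live chat model. The sketch below shows roughly the kind of object it is expected to assemble, using the upstream LangChain4j OpenAiChatModel builder directly; the literal values stand in for config accessors whose return types this index does not show, and the builder method names may differ across LangChain4j versions.

    import dev.langchain4j.model.openai.OpenAiChatModel;

    public class ChatModelSketch {

        public static void main(String[] args) {
            // Literals stand in for Langchain4jOpenAiConfig / ChatModelConfig values.
            OpenAiChatModel model = OpenAiChatModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))  // apiKey()
                    .baseUrl("https://api.openai.com/v1")     // baseUrl()
                    .modelName("gpt-3.5-turbo")               // ChatModelConfig.modelName()
                    .build();

            // The chat model exposes a simple String-in/String-out call.
            System.out.println(model.generate("Say hello in one sentence."));
        }
    }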
E
- embeddingModel() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Embedding model related settings (see the sketch after this section)
- embeddingModel(Langchain4jOpenAiConfig) - Method in class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
- EmbeddingModelConfig - Interface in io.quarkiverse.langchain4j.openai.runtime.config
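Analogously, embeddingModel(Langchain4jOpenAiConfig) wires the settings from EmbeddingModelConfig into an embedding model. A hedged sketch using the upstream LangChain4j OpenAiEmbeddingModel; the model name and the Response/Embedding accessor shapes are assumptions tied to the LangChain4j version in use.

    import dev.langchain4j.data.embedding.Embedding;
    import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
    import dev.langchain4j.model.output.Response;

    public class EmbeddingModelSketch {

        public static void main(String[] args) {
            OpenAiEmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))   // apiKey()
                    .modelName("text-embedding-ada-002")       // EmbeddingModelConfig.modelName()
                    .build();

            Response<Embedding> response = embeddingModel.embed("Quarkus and LangChain4j");
            System.out.println("Vector length: " + response.content().vector().length);
        }
    }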
F
- frequencyPenalty() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, reducing verbatim repetition (see the tuning sketch after section T).
I
- io.quarkiverse.langchain4j.openai.runtime - package io.quarkiverse.langchain4j.openai.runtime
- io.quarkiverse.langchain4j.openai.runtime.config - package io.quarkiverse.langchain4j.openai.runtime.config
L
- Langchain4jOpenAiConfig - Interface in io.quarkiverse.langchain4j.openai.runtime.config
- logRequests() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Whether the OpenAI client should log requests
- logResponses() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Whether the OpenAI client should log responses
M
- maxRetries() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  The maximum number of times to retry a failed request
- maxTokens() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  The maximum number of tokens to generate in the completion.
- modelName() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  Model name to use
- modelName() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.EmbeddingModelConfig
  Model name to use
- modelName() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ModerationModelConfig
  Model name to use
- moderationModel() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Moderation model related settings (see the sketch after this section)
- moderationModel(Langchain4jOpenAiConfig) - Method in class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
- ModerationModelConfig - Interface in io.quarkiverse.langchain4j.openai.runtime.config
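moderationModel(Langchain4jOpenAiConfig) on OpenAiRecorder plays the same role for moderation. A hedged sketch of the kind of model it is expected to produce, using the upstream LangChain4j OpenAiModerationModel; the builder methods and the flagged() accessor are assumptions tied to the LangChain4j version in use.

    import dev.langchain4j.model.moderation.Moderation;
    import dev.langchain4j.model.openai.OpenAiModerationModel;
    import dev.langchain4j.model.output.Response;

    public class ModerationModelSketch {

        public static void main(String[] args) {
            OpenAiModerationModel moderationModel = OpenAiModerationModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))   // apiKey()
                    .modelName("text-moderation-latest")       // ModerationModelConfig.modelName()
                    .build();

            Response<Moderation> response = moderationModel.moderate("Some user supplied text");
            System.out.println("Flagged: " + response.content().flagged());
        }
    }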
O
- OpenAiRecorder - Class in io.quarkiverse.langchain4j.openai.runtime
- OpenAiRecorder() - Constructor for class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
P
- presencePenalty() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they already appear in the text so far, nudging the model toward new topics (see the tuning sketch after section T).
S
- streamingChatModel(Langchain4jOpenAiConfig) - Method in class io.quarkiverse.langchain4j.openai.runtime.OpenAiRecorder
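streamingChatModel(Langchain4jOpenAiConfig) produces the token-streaming counterpart of the chat model. A hedged sketch using the upstream LangChain4j OpenAiStreamingChatModel and StreamingResponseHandler; the handler callbacks shown are assumptions tied to the LangChain4j version in use.

    import java.util.concurrent.CountDownLatch;

    import dev.langchain4j.data.message.AiMessage;
    import dev.langchain4j.model.StreamingResponseHandler;
    import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
    import dev.langchain4j.model.output.Response;

    public class StreamingChatModelSketch {

        public static void main(String[] args) throws InterruptedException {
            OpenAiStreamingChatModel model = OpenAiStreamingChatModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))
                    .modelName("gpt-3.5-turbo")
                    .build();

            CountDownLatch done = new CountDownLatch(1);

            model.generate("Tell me a one-line joke.", new StreamingResponseHandler<AiMessage>() {
                @Override
                public void onNext(String token) {
                    System.out.print(token);   // tokens arrive incrementally
                }

                @Override
                public void onComplete(Response<AiMessage> response) {
                    done.countDown();
                }

                @Override
                public void onError(Throwable error) {
                    error.printStackTrace();
                    done.countDown();
                }
            });

            done.await();
        }
    }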
T
- temperature() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  What sampling temperature to use, with values between 0 and 2. Higher values make the output more random, lower values make it more focused and deterministic.
- timeout() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.Langchain4jOpenAiConfig
  Timeout for OpenAI calls
- topP() - Method in interface io.quarkiverse.langchain4j.openai.runtime.config.ChatModelConfig
  An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the topP probability mass.
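The sampling and client settings indexed above (frequencyPenalty, presencePenalty, temperature, topP, maxTokens, timeout, maxRetries, logRequests, logResponses) map naturally onto the upstream LangChain4j builder. The tuning sketch below shows them together with their documented ranges; the values are illustrative, and the mapping from the config accessors to these builder methods is an assumption.

    import java.time.Duration;

    import dev.langchain4j.model.openai.OpenAiChatModel;

    public class TunedChatModelSketch {

        public static void main(String[] args) {
            OpenAiChatModel tuned = OpenAiChatModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))
                    .modelName("gpt-3.5-turbo")
                    .temperature(0.2)                 // temperature(): 0..2
                    .topP(0.9)                        // topP(): nucleus sampling probability mass
                    .frequencyPenalty(0.3)            // frequencyPenalty(): -2.0..2.0
                    .presencePenalty(0.3)             // presencePenalty(): -2.0..2.0
                    .maxTokens(256)                   // maxTokens(): completion length cap
                    .timeout(Duration.ofSeconds(30))  // timeout()
                    .maxRetries(3)                    // maxRetries()
                    .logRequests(true)                // logRequests()
                    .logResponses(true)               // logResponses()
                    .build();

            System.out.println(tuned.generate("Summarize nucleus sampling in one sentence."));
        }
    }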