Class BetaAssistantUpdateParams.BetaAssistantUpdateBody
public final class BetaAssistantUpdateParams.BetaAssistantUpdateBody
-
-
Nested Class Summary
Nested Classes
public final class BetaAssistantUpdateParams.BetaAssistantUpdateBody.Builder
A builder for BetaAssistantUpdateBody.
-
Method Summary
Modifier and Type / Method / Description
final Optional<String> description() : The description of the assistant.
final Optional<String> instructions() : The system instructions that the assistant uses.
final Optional<Metadata> metadata() : Set of 16 key-value pairs that can be attached to an object.
final Optional<BetaAssistantUpdateParams.Model> model() : ID of the model to use.
final Optional<String> name() : The name of the assistant.
final Optional<BetaAssistantUpdateParams.ReasoningEffort> reasoningEffort() : Constrains effort on reasoning for reasoning models (o1 and o3-mini models only).
final Optional<AssistantResponseFormatOption> responseFormat() : Specifies the format that the model must output.
final Optional<Double> temperature() : What sampling temperature to use, between 0 and 2.
final Optional<BetaAssistantUpdateParams.ToolResources> toolResources() : A set of resources that are used by the assistant's tools.
final Optional<List<AssistantTool>> tools() : A list of tools enabled on the assistant.
final Optional<Double> topP() : An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
final JsonField<String> _description() : The description of the assistant.
final JsonField<String> _instructions() : The system instructions that the assistant uses.
final JsonField<Metadata> _metadata() : Set of 16 key-value pairs that can be attached to an object.
final JsonField<BetaAssistantUpdateParams.Model> _model() : ID of the model to use.
final JsonField<String> _name() : The name of the assistant.
final JsonField<BetaAssistantUpdateParams.ReasoningEffort> _reasoningEffort() : Constrains effort on reasoning for reasoning models (o1 and o3-mini models only).
final JsonField<AssistantResponseFormatOption> _responseFormat() : Specifies the format that the model must output.
final JsonField<Double> _temperature() : What sampling temperature to use, between 0 and 2.
final JsonField<BetaAssistantUpdateParams.ToolResources> _toolResources() : A set of resources that are used by the assistant's tools.
final JsonField<List<AssistantTool>> _tools() : A list of tools enabled on the assistant.
final JsonField<Double> _topP() : An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
final Map<String, JsonValue> _additionalProperties()
final BetaAssistantUpdateParams.BetaAssistantUpdateBody validate()
final BetaAssistantUpdateParams.BetaAssistantUpdateBody.Builder toBuilder()
Boolean equals(Object other)
Integer hashCode()
String toString()
final static BetaAssistantUpdateParams.BetaAssistantUpdateBody.Builder builder()
-
-
Method Detail
-
description
final Optional<String> description()
The description of the assistant. The maximum length is 512 characters.
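For illustration, a minimal sketch of reading this (and the other Optional-valued accessors) from an existing body. The only assumptions are the import path, which may differ by openai-java version, and a body instance obtained elsewhere.

    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class DescribeBodySketch {
        // Reads the optional description, falling back to a placeholder when the
        // field was not set on the update body.
        static String describe(BetaAssistantUpdateBody body) {
            return body.description().orElse("(no description set)");
        }
    }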
-
instructions
final Optional<String> instructions()
The system instructions that the assistant uses. The maximum length is 256,000 characters.
-
metadata
final Optional<Metadata> metadata()
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
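A sketch of constructing the metadata map described above. It assumes Metadata follows the SDK's usual builder pattern with putAdditionalProperty(String, JsonValue) and build(), and that the flat com.openai.models / com.openai.core import layout matches your SDK version.

    import com.openai.core.JsonValue;
    import com.openai.models.Metadata;

    class MetadataSketch {
        // Builds a Metadata object with two string entries, both well under the
        // 64-character key / 512-character value limits documented above.
        static Metadata exampleMetadata() {
            return Metadata.builder()
                .putAdditionalProperty("team", JsonValue.from("billing"))     // assumed builder method
                .putAdditionalProperty("revision", JsonValue.from("2024-06"))
                .build();
        }
    }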
-
model
final Optional<BetaAssistantUpdateParams.Model> model()
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
-
reasoningEffort
final Optional<BetaAssistantUpdateParams.ReasoningEffort> reasoningEffort()
o1 and o3-mini models only
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
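A hedged sketch of supplying a reasoning effort on an update body. It assumes ReasoningEffort follows the SDK's enum-class convention of an of(String) factory, and that the Builder exposes a reasoningEffort(...) setter and a build() method mirroring this accessor; names may differ by SDK version.

    import com.openai.models.BetaAssistantUpdateParams;
    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class ReasoningEffortSketch {
        static BetaAssistantUpdateBody lowEffortBody() {
            return BetaAssistantUpdateBody.builder()
                // "low" trades reasoning depth for faster responses and fewer reasoning tokens;
                // only meaningful when the assistant's model is o1 or o3-mini.
                .reasoningEffort(BetaAssistantUpdateParams.ReasoningEffort.of("low")) // assumed setter/factory
                .build();
        }
    }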
-
responseFormat
final Optional<AssistantResponseFormatOption> responseFormat()
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
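The two documented shapes can be written as raw JSON values with the SDK's JsonValue helper. How they are attached to the body (a typed AssistantResponseFormatOption setter versus a raw-JSON overload) depends on the SDK version, so this sketch only shows the payloads themselves; the schema content is a trivial placeholder.

    import java.util.Map;
    import com.openai.core.JsonValue;

    class ResponseFormatShapes {
        // JSON mode: the model is constrained to emit valid JSON, but you must still
        // ask for JSON in a system or user message (see the note above).
        static final JsonValue JSON_MODE = JsonValue.from(Map.of("type", "json_object"));

        // Structured Outputs: the model must match the supplied JSON schema.
        static final JsonValue STRUCTURED = JsonValue.from(Map.of(
            "type", "json_schema",
            "json_schema", Map.of("name", "example", "schema", Map.of("type", "object"))));
    }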
-
temperature
final Optional<Double> temperature()
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-
toolResources
final Optional<BetaAssistantUpdateParams.ToolResources> toolResources()
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
-
tools
final Optional<List<AssistantTool>> tools()
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
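A hedged sketch of enabling a single code_interpreter tool. It assumes the union type AssistantTool exposes a factory for that variant (here ofCodeInterpreter with a CodeInterpreterTool) and that the Builder has a tools(List<AssistantTool>) setter and build(); the factory and class names may differ by SDK version.

    import java.util.List;
    import com.openai.models.AssistantTool;
    import com.openai.models.CodeInterpreterTool;
    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class ToolsSketch {
        static BetaAssistantUpdateBody withCodeInterpreter() {
            // Hypothetical factory names; the documented constraint is at most 128 tools,
            // of types code_interpreter, file_search, or function.
            AssistantTool codeInterpreter =
                AssistantTool.ofCodeInterpreter(CodeInterpreterTool.builder().build());
            return BetaAssistantUpdateBody.builder()
                .tools(List.of(codeInterpreter)) // assumed setter mirroring tools()
                .build();
        }
    }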
-
topP
final Optional<Double> topP()
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
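A sketch reflecting that guidance: set temperature or topP, not both. It assumes the Builder exposes temperature(Double) and topP(Double) setters mirroring these accessors, plus build().

    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class SamplingSketch {
        // Nucleus sampling only: consider the top 10% probability mass, leave temperature unset.
        static BetaAssistantUpdateBody nucleusOnly() {
            return BetaAssistantUpdateBody.builder()
                .topP(0.1) // assumed setter
                .build();
        }

        // Temperature only: more focused, deterministic output, leave topP unset.
        static BetaAssistantUpdateBody lowTemperature() {
            return BetaAssistantUpdateBody.builder()
                .temperature(0.2) // assumed setter
                .build();
        }
    }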
-
_description
final JsonField<String> _description()
The description of the assistant. The maximum length is 512 characters.
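The underscore-prefixed accessors return the raw JsonField wrapper instead of the validated Optional, which lets a caller inspect a field without type validation. A minimal sketch; it only assumes a body instance and the import paths.

    import com.openai.core.JsonField;
    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class RawFieldSketch {
        static void dumpDescription(BetaAssistantUpdateBody body) {
            // description() yields Optional<String>; _description() keeps the raw JSON,
            // including values that are absent or not a string.
            JsonField<String> raw = body._description();
            System.out.println(raw);
        }
    }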
-
_instructions
final JsonField<String> _instructions()
The system instructions that the assistant uses. The maximum length is 256,000 characters.
-
_metadata
final JsonField<Metadata> _metadata()
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
-
_model
final JsonField<BetaAssistantUpdateParams.Model> _model()
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
-
_name
final JsonField<String> _name()
The name of the assistant. The maximum length is 256 characters.
-
_reasoningEffort
final JsonField<BetaAssistantUpdateParams.ReasoningEffort> _reasoningEffort()
o1 and o3-mini models only
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
-
_responseFormat
final JsonField<AssistantResponseFormatOption> _responseFormat()
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
-
_temperature
final JsonField<Double> _temperature()
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-
_toolResources
final JsonField<BetaAssistantUpdateParams.ToolResources> _toolResources()
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
-
_tools
final JsonField<List<AssistantTool>> _tools()
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
-
_topP
final JsonField<Double> _topP()
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
-
_additionalProperties
final Map<String, JsonValue> _additionalProperties()
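_additionalProperties() exposes any JSON properties on the body that are not modeled by the accessors above. A minimal sketch of reading one; the property name is made up purely for illustration.

    import java.util.Map;
    import com.openai.core.JsonValue;
    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class ExtraPropertiesSketch {
        static void printExtra(BetaAssistantUpdateBody body) {
            Map<String, JsonValue> extras = body._additionalProperties();
            // "x_experimental_flag" is a hypothetical key, shown only to illustrate the lookup.
            JsonValue flag = extras.get("x_experimental_flag");
            System.out.println(flag != null ? flag : "no extra property of that name");
        }
    }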
-
validate
final BetaAssistantUpdateParams.BetaAssistantUpdateBody validate()
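No description is given for validate(); in this SDK's usual convention it eagerly checks that every known field deserializes to its documented type and returns the same body, throwing otherwise. A hedged sketch under that assumption:

    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class ValidateSketch {
        static BetaAssistantUpdateBody validated(BetaAssistantUpdateBody body) {
            // Assumption: validate() throws (e.g. an invalid-data exception) if a field
            // holds JSON of an unexpected type, and otherwise returns this body unchanged.
            return body.validate();
        }
    }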
-
toBuilder
final BetaAssistantUpdateParams.BetaAssistantUpdateBody.Builder toBuilder()
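toBuilder() supports copy-and-modify updates. A sketch assuming the Builder exposes setters named after the accessors above (here name(String)) and a build() method:

    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class CopyAndModifySketch {
        // Returns a copy of the body with only the assistant name changed;
        // every other field carries over from the original.
        static BetaAssistantUpdateBody rename(BetaAssistantUpdateBody original, String newName) {
            return original.toBuilder()
                .name(newName) // assumed setter mirroring name()
                .build();      // assumed terminal method on the Builder
        }
    }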
-
builder
final static BetaAssistantUpdateParams.BetaAssistantUpdateBody.Builder builder()
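Putting it together, a minimal sketch of building an update body from scratch. Assumptions: the Builder exposes setters mirroring the accessors above, build() finalizes it, Model.of(String) follows the SDK's enum-class convention, and the com.openai.models import layout matches your SDK version.

    import com.openai.models.BetaAssistantUpdateParams;
    import com.openai.models.BetaAssistantUpdateParams.BetaAssistantUpdateBody;

    class BuildBodySketch {
        static BetaAssistantUpdateBody example() {
            return BetaAssistantUpdateBody.builder()
                .name("Math Tutor")                                          // at most 256 characters
                .description("Helps students work through algebra problems") // at most 512 characters
                .instructions("Answer step by step and show your work.")     // at most 256,000 characters
                .model(BetaAssistantUpdateParams.Model.of("gpt-4o"))         // assumed of(String) factory
                .temperature(0.2)                                            // focused, deterministic sampling
                .build();
        }
    }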