Class ResponseCreateParams.Body

public final class ResponseCreateParams.Body
Nested Class Summary

- public final class ResponseCreateParams.Body.Builder: A builder for Body.
Method Summary

- final Optional<Boolean> background(): Whether to run the model response in the background.
- final Optional<ResponseCreateParams.Conversation> conversation(): The conversation that this response belongs to.
- final Optional<List<ResponseIncludable>> include(): Specify additional output data to include in the model response.
- final Optional<ResponseCreateParams.Input> input(): Text, image, or file inputs to the model, used to generate a response.
- final Optional<String> instructions(): A system (or developer) message inserted into the model's context.
- final Optional<Long> maxOutputTokens(): An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
- final Optional<Long> maxToolCalls(): The maximum number of total calls to built-in tools that can be processed in a response.
- final Optional<ResponseCreateParams.Metadata> metadata(): Set of 16 key-value pairs that can be attached to an object.
- final Optional<ResponsesModel> model(): Model ID used to generate the response, like gpt-4o or o3.
- final Optional<Boolean> parallelToolCalls(): Whether to allow the model to run tool calls in parallel.
- final Optional<String> previousResponseId(): The unique ID of the previous response to the model.
- final Optional<ResponsePrompt> prompt(): Reference to a prompt template and its variables.
- final Optional<String> promptCacheKey(): Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
- final Optional<Reasoning> reasoning(): gpt-5 and o-series models only. Configuration options for reasoning models.
- final Optional<String> safetyIdentifier(): A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies.
- final Optional<ResponseCreateParams.ServiceTier> serviceTier(): Specifies the processing type used for serving the request.
- final Optional<Boolean> store(): Whether to store the generated model response for later retrieval via API.
- final Optional<ResponseCreateParams.StreamOptions> streamOptions(): Options for streaming responses.
- final Optional<Double> temperature(): What sampling temperature to use, between 0 and 2.
- final Optional<ResponseTextConfig> text(): Configuration options for a text response from the model.
- final Optional<ResponseCreateParams.ToolChoice> toolChoice(): How the model should select which tool (or tools) to use when generating a response.
- final Optional<List<Tool>> tools(): An array of tools the model may call while generating a response.
- final Optional<Long> topLogprobs(): An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
- final Optional<Double> topP(): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
- final Optional<ResponseCreateParams.Truncation> truncation(): The truncation strategy to use for the model response.
- final Optional<String> user(): Deprecated. This field is being replaced by safety_identifier and prompt_cache_key.
- final JsonField<Boolean> _background(): Returns the raw JSON value of background.
- final JsonField<ResponseCreateParams.Conversation> _conversation(): Returns the raw JSON value of conversation.
- final JsonField<List<ResponseIncludable>> _include(): Returns the raw JSON value of include.
- final JsonField<ResponseCreateParams.Input> _input(): Returns the raw JSON value of input.
- final JsonField<String> _instructions(): Returns the raw JSON value of instructions.
- final JsonField<Long> _maxOutputTokens(): Returns the raw JSON value of maxOutputTokens.
- final JsonField<Long> _maxToolCalls(): Returns the raw JSON value of maxToolCalls.
- final JsonField<ResponseCreateParams.Metadata> _metadata(): Returns the raw JSON value of metadata.
- final JsonField<ResponsesModel> _model(): Returns the raw JSON value of model.
- final JsonField<Boolean> _parallelToolCalls(): Returns the raw JSON value of parallelToolCalls.
- final JsonField<String> _previousResponseId(): Returns the raw JSON value of previousResponseId.
- final JsonField<ResponsePrompt> _prompt(): Returns the raw JSON value of prompt.
- final JsonField<String> _promptCacheKey(): Returns the raw JSON value of promptCacheKey.
- final JsonField<Reasoning> _reasoning(): Returns the raw JSON value of reasoning.
- final JsonField<String> _safetyIdentifier(): Returns the raw JSON value of safetyIdentifier.
- final JsonField<ResponseCreateParams.ServiceTier> _serviceTier(): Returns the raw JSON value of serviceTier.
- final JsonField<Boolean> _store(): Returns the raw JSON value of store.
- final JsonField<ResponseCreateParams.StreamOptions> _streamOptions(): Returns the raw JSON value of streamOptions.
- final JsonField<Double> _temperature(): Returns the raw JSON value of temperature.
- final JsonField<ResponseTextConfig> _text(): Returns the raw JSON value of text.
- final JsonField<ResponseCreateParams.ToolChoice> _toolChoice(): Returns the raw JSON value of toolChoice.
- final JsonField<List<Tool>> _tools(): Returns the raw JSON value of tools.
- final JsonField<Long> _topLogprobs(): Returns the raw JSON value of topLogprobs.
- final JsonField<Double> _topP(): Returns the raw JSON value of topP.
- final JsonField<ResponseCreateParams.Truncation> _truncation(): Returns the raw JSON value of truncation.
- final JsonField<String> _user(): Returns the raw JSON value of user.
- final Map<String, JsonValue> _additionalProperties()
- final ResponseCreateParams.Body.Builder toBuilder()
- final ResponseCreateParams.Body validate()
- final Boolean isValid()
- Boolean equals(Object other)
- Integer hashCode()
- String toString()
- final static ResponseCreateParams.Body.Builder builder(): Returns a mutable builder for constructing an instance of Body.
                    
                    
Method Detail

background

final Optional<Boolean> background()

Whether to run the model response in the background.
conversation

final Optional<ResponseCreateParams.Conversation> conversation()

The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes.
include

final Optional<List<ResponseIncludable>> include()

Specify additional output data to include in the model response. Currently supported values are:

- web_search_call.action.sources: Include the sources of the web search tool call.
- code_interpreter_call.outputs: Include the outputs of Python code execution in code interpreter tool call items.
- computer_call_output.output.image_url: Include image URLs from the computer call output.
- file_search_call.results: Include the search results of the file search tool call.
- message.input_image.image_url: Include image URLs from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Include an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (such as when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
 
input

final Optional<ResponseCreateParams.Input> input()

Text, image, or file inputs to the model, used to generate a response.
instructions

final Optional<String> instructions()

A system (or developer) message inserted into the model's context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
maxOutputTokens

final Optional<Long> maxOutputTokens()

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
maxToolCalls

final Optional<Long> maxToolCalls()

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
metadata

final Optional<ResponseCreateParams.Metadata> metadata()

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
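The documented limits (at most 16 pairs, keys up to 64 characters, values up to 512 characters) can also be checked client-side before sending a request. The following helper is a hypothetical sketch, not part of the SDK:

```java
import java.util.Map;

// Hypothetical client-side check mirroring the documented metadata limits.
// Not part of the openai-java SDK; the server enforces these limits itself.
public class MetadataLimits {
    public static boolean isValid(Map<String, String> metadata) {
        if (metadata.size() > 16) return false;            // at most 16 key-value pairs
        for (Map.Entry<String, String> e : metadata.entrySet()) {
            if (e.getKey().length() > 64) return false;    // keys: max 64 characters
            if (e.getValue().length() > 512) return false; // values: max 512 characters
        }
        return true;
    }
}
```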
model

final Optional<ResponsesModel> model()

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
parallelToolCalls

final Optional<Boolean> parallelToolCalls()

Whether to allow the model to run tool calls in parallel.
previousResponseId

final Optional<String> previousResponseId()

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state. Cannot be used in conjunction with conversation.
prompt

final Optional<ResponsePrompt> prompt()

Reference to a prompt template and its variables.
promptCacheKey

final Optional<String> promptCacheKey()

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field.
reasoning

final Optional<Reasoning> reasoning()

gpt-5 and o-series models only. Configuration options for reasoning models.
safetyIdentifier

final Optional<String> safetyIdentifier()

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information.
serviceTier

final Optional<ResponseCreateParams.ServiceTier> serviceTier()

Specifies the processing type used for serving the request.

- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
store

final Optional<Boolean> store()

Whether to store the generated model response for later retrieval via API.
streamOptions

final Optional<ResponseCreateParams.StreamOptions> streamOptions()

Options for streaming responses. Only set this when you set stream: true.
temperature

final Optional<Double> temperature()

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
text

final Optional<ResponseTextConfig> text()

Configuration options for a text response from the model. Can be plain text or structured JSON data.
toolChoice

final Optional<ResponseCreateParams.ToolChoice> toolChoice()

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.
tools

final Optional<List<Tool>> tools()

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools:

- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
 
topLogprobs

final Optional<Long> topLogprobs()

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
topP

final Optional<Double> topP()

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
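To make the top_p description concrete, the filtering step of nucleus sampling can be sketched in a few lines: sort tokens by probability and keep them until their cumulative mass reaches top_p. This is a conceptual illustration only, not SDK or server code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Conceptual sketch of nucleus (top-p) filtering: keep the smallest set of
// highest-probability tokens whose cumulative probability mass reaches topP.
public class NucleusFilter {
    public static List<Integer> keptTokenIndices(double[] probs, double topP) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < probs.length; i++) order.add(i);
        // Sort token indices by descending probability.
        order.sort(Comparator.comparingDouble((Integer i) -> probs[i]).reversed());
        List<Integer> kept = new ArrayList<>();
        double mass = 0.0;
        for (int i : order) {
            kept.add(i);
            mass += probs[i];
            if (mass >= topP) break;  // stop once topP mass is covered
        }
        return kept;
    }
}
```

With topP = 0.1 only the single most likely token survives here, matching the "top 10% probability mass" description above.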
truncation

final Optional<ResponseCreateParams.Truncation> truncation()

The truncation strategy to use for the model response.

- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
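The "auto" strategy described above can be sketched as repeatedly dropping the oldest conversation item until the input fits. This is a conceptual sketch with caller-supplied token counts, not the server's actual algorithm:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Conceptual sketch of "auto" truncation: drop items from the beginning of the
// conversation until the remaining input fits the context window. Token counts
// are supplied by the caller; this is not the server's real implementation.
public class AutoTruncation {
    public static List<Integer> fit(List<Integer> itemTokenCounts, int contextWindow) {
        Deque<Integer> items = new ArrayDeque<>(itemTokenCounts);
        int total = items.stream().mapToInt(Integer::intValue).sum();
        while (total > contextWindow && !items.isEmpty()) {
            total -= items.removeFirst();  // drop the oldest item first
        }
        return List.copyOf(items);
    }
}
```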
 
user

@Deprecated(message = "deprecated") final Optional<String> user()

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse.
_background

final JsonField<Boolean> _background()

Returns the raw JSON value of background. Unlike background, this method doesn't throw if the JSON field has an unexpected type.
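The typed/raw accessor split documented here and in the entries below follows a common pattern: the typed accessor validates the value and may throw, while the raw accessor returns whatever JSON is present. A simplified, self-contained illustration of that pattern (not the SDK's actual JsonField class):

```java
import java.util.Optional;

// Simplified illustration of the typed-vs-raw accessor split. The raw accessor
// always hands back the stored JSON value, while the typed accessor throws if
// the value is not of the expected type. Not the SDK's real JsonField.
public class RawField {
    private final Object raw;  // whatever the JSON parser produced

    public RawField(Object raw) { this.raw = raw; }

    // Raw accessor: never throws on an unexpected type.
    public Object rawValue() { return raw; }

    // Typed accessor: throws if the stored value is not a Boolean.
    public Optional<Boolean> asBoolean() {
        if (raw == null) return Optional.empty();
        if (raw instanceof Boolean b) return Optional.of(b);
        throw new IllegalStateException("expected a boolean, got: " + raw.getClass());
    }
}
```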
_conversation

final JsonField<ResponseCreateParams.Conversation> _conversation()

Returns the raw JSON value of conversation. Unlike conversation, this method doesn't throw if the JSON field has an unexpected type.

_include

final JsonField<List<ResponseIncludable>> _include()

Returns the raw JSON value of include. Unlike include, this method doesn't throw if the JSON field has an unexpected type.

_input

final JsonField<ResponseCreateParams.Input> _input()

Returns the raw JSON value of input. Unlike input, this method doesn't throw if the JSON field has an unexpected type.

_instructions

final JsonField<String> _instructions()

Returns the raw JSON value of instructions. Unlike instructions, this method doesn't throw if the JSON field has an unexpected type.

_maxOutputTokens

final JsonField<Long> _maxOutputTokens()

Returns the raw JSON value of maxOutputTokens. Unlike maxOutputTokens, this method doesn't throw if the JSON field has an unexpected type.

_maxToolCalls

final JsonField<Long> _maxToolCalls()

Returns the raw JSON value of maxToolCalls. Unlike maxToolCalls, this method doesn't throw if the JSON field has an unexpected type.

_metadata

final JsonField<ResponseCreateParams.Metadata> _metadata()

Returns the raw JSON value of metadata. Unlike metadata, this method doesn't throw if the JSON field has an unexpected type.

_model

final JsonField<ResponsesModel> _model()

Returns the raw JSON value of model. Unlike model, this method doesn't throw if the JSON field has an unexpected type.

_parallelToolCalls

final JsonField<Boolean> _parallelToolCalls()

Returns the raw JSON value of parallelToolCalls. Unlike parallelToolCalls, this method doesn't throw if the JSON field has an unexpected type.

_previousResponseId

final JsonField<String> _previousResponseId()

Returns the raw JSON value of previousResponseId. Unlike previousResponseId, this method doesn't throw if the JSON field has an unexpected type.

_prompt

final JsonField<ResponsePrompt> _prompt()

Returns the raw JSON value of prompt. Unlike prompt, this method doesn't throw if the JSON field has an unexpected type.

_promptCacheKey

final JsonField<String> _promptCacheKey()

Returns the raw JSON value of promptCacheKey. Unlike promptCacheKey, this method doesn't throw if the JSON field has an unexpected type.

_reasoning

final JsonField<Reasoning> _reasoning()

Returns the raw JSON value of reasoning. Unlike reasoning, this method doesn't throw if the JSON field has an unexpected type.

_safetyIdentifier

final JsonField<String> _safetyIdentifier()

Returns the raw JSON value of safetyIdentifier. Unlike safetyIdentifier, this method doesn't throw if the JSON field has an unexpected type.

_serviceTier

final JsonField<ResponseCreateParams.ServiceTier> _serviceTier()

Returns the raw JSON value of serviceTier. Unlike serviceTier, this method doesn't throw if the JSON field has an unexpected type.

_store

final JsonField<Boolean> _store()

Returns the raw JSON value of store. Unlike store, this method doesn't throw if the JSON field has an unexpected type.

_streamOptions

final JsonField<ResponseCreateParams.StreamOptions> _streamOptions()

Returns the raw JSON value of streamOptions. Unlike streamOptions, this method doesn't throw if the JSON field has an unexpected type.

_temperature

final JsonField<Double> _temperature()

Returns the raw JSON value of temperature. Unlike temperature, this method doesn't throw if the JSON field has an unexpected type.

_text

final JsonField<ResponseTextConfig> _text()

Returns the raw JSON value of text. Unlike text, this method doesn't throw if the JSON field has an unexpected type.

_toolChoice

final JsonField<ResponseCreateParams.ToolChoice> _toolChoice()

Returns the raw JSON value of toolChoice. Unlike toolChoice, this method doesn't throw if the JSON field has an unexpected type.

_tools

final JsonField<List<Tool>> _tools()

Returns the raw JSON value of tools. Unlike tools, this method doesn't throw if the JSON field has an unexpected type.

_topLogprobs

final JsonField<Long> _topLogprobs()

Returns the raw JSON value of topLogprobs. Unlike topLogprobs, this method doesn't throw if the JSON field has an unexpected type.

_topP

final JsonField<Double> _topP()

Returns the raw JSON value of topP. Unlike topP, this method doesn't throw if the JSON field has an unexpected type.

_truncation

final JsonField<ResponseCreateParams.Truncation> _truncation()

Returns the raw JSON value of truncation. Unlike truncation, this method doesn't throw if the JSON field has an unexpected type.

_user

@Deprecated(message = "deprecated") final JsonField<String> _user()

Returns the raw JSON value of user. Unlike user, this method doesn't throw if the JSON field has an unexpected type.

_additionalProperties

final Map<String, JsonValue> _additionalProperties()
toBuilder

final ResponseCreateParams.Body.Builder toBuilder()

validate

final ResponseCreateParams.Body validate()

builder

final static ResponseCreateParams.Body.Builder builder()

Returns a mutable builder for constructing an instance of Body.
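The builder()/toBuilder() pair follows the standard immutable-value-plus-builder idiom: builder() creates instances, and toBuilder() copies an instance's state into a fresh builder so one field can be changed. A minimal self-contained illustration of that idiom (the class and field names are chosen for the example and are not the SDK's actual Body):

```java
import java.util.Optional;

// Minimal immutable value + builder idiom, mirroring the shape of
// Body.builder()/toBuilder(). Illustrative only; not the SDK's Body class.
public final class MiniBody {
    private final String model;
    private final Optional<Double> temperature;

    private MiniBody(String model, Optional<Double> temperature) {
        this.model = model;
        this.temperature = temperature;
    }

    public String model() { return model; }
    public Optional<Double> temperature() { return temperature; }

    public static Builder builder() { return new Builder(); }

    // Copy current state into a fresh builder so one field can be changed.
    public Builder toBuilder() {
        Builder b = new Builder();
        b.model = model;
        b.temperature = temperature;
        return b;
    }

    public static final class Builder {
        private String model;
        private Optional<Double> temperature = Optional.empty();

        public Builder model(String model) { this.model = model; return this; }
        public Builder temperature(double t) { this.temperature = Optional.of(t); return this; }
        public MiniBody build() { return new MiniBody(model, temperature); }
    }
}
```

Because the value class is immutable, toBuilder() never mutates the original: building a modified copy leaves the source instance unchanged.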
 