Class RealtimeClientEvent
-
public final class RealtimeClientEvent
A realtime client event.
-
Nested Class Summary
public interface RealtimeClientEvent.Visitor
    An interface that defines how to map each variant of RealtimeClientEvent to a value of type T.
public final class RealtimeClientEvent.OutputAudioBufferClear
    WebRTC Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.
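Before the summaries below, here is a minimal sketch of how this union is typically handled: wrap a concrete event with one of the static of... factories, then inspect it with the is.../as... checks or the Optional accessors. Only methods documented on this page are used; the SessionUpdateEvent is taken as a parameter because its construction is not covered here, and the exact failure behavior of the accessors is inferred from their signatures rather than stated on this page.

    // Sketch: wrap a concrete client event in the RealtimeClientEvent union and inspect
    // which variant it holds. The SessionUpdateEvent is assumed to be built elsewhere.
    static java.util.Optional<SessionUpdateEvent> wrapAndInspect(SessionUpdateEvent sessionUpdate) {
        RealtimeClientEvent event = RealtimeClientEvent.ofSessionUpdate(sessionUpdate);

        // validate() returns the event itself, so it can be chained; isValid() exposes
        // the same check as a Boolean.
        event.validate();

        if (event.isSessionUpdate()) {
            // asSessionUpdate() unwraps the variant directly; use it after checking the
            // corresponding is... flag, or use accept(Visitor) to handle every case.
            SessionUpdateEvent unwrapped = event.asSessionUpdate();
            System.out.println("sending session.update: " + unwrapped);
        }

        // Each variant also has an Optional accessor, expected to be empty when the
        // union holds a different variant.
        return event.sessionUpdate();
    }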
-
Method Summary
final Optional<ConversationItemCreateEvent> conversationItemCreate()
    Add a new Item to the Conversation's context, including messages, function calls, and function call responses.
final Optional<ConversationItemDeleteEvent> conversationItemDelete()
    Send this event when you want to remove any item from the conversation history.
final Optional<ConversationItemRetrieveEvent> conversationItemRetrieve()
    Send this event when you want to retrieve the server's representation of a specific item in the conversation history.
final Optional<ConversationItemTruncateEvent> conversationItemTruncate()
    Send this event to truncate a previous assistant message’s audio.
final Optional<InputAudioBufferAppendEvent> inputAudioBufferAppend()
    Send this event to append audio bytes to the input audio buffer.
final Optional<InputAudioBufferClearEvent> inputAudioBufferClear()
    Send this event to clear the audio bytes in the buffer.
final Optional<RealtimeClientEvent.OutputAudioBufferClear> outputAudioBufferClear()
    WebRTC Only: Emit to cut off the current audio response.
final Optional<InputAudioBufferCommitEvent> inputAudioBufferCommit()
    Send this event to commit the user input audio buffer, which will create a new user message item in the conversation.
final Optional<ResponseCancelEvent> responseCancel()
    Send this event to cancel an in-progress response.
final Optional<ResponseCreateEvent> responseCreate()
    This event instructs the server to create a Response, which means triggering model inference.
final Optional<SessionUpdateEvent> sessionUpdate()
    Send this event to update the session’s default configuration.
final Optional<TranscriptionSessionUpdate> transcriptionSessionUpdate()
    Send this event to update a transcription session.
final Boolean isConversationItemCreate()
final Boolean isConversationItemDelete()
final Boolean isConversationItemRetrieve()
final Boolean isConversationItemTruncate()
final Boolean isInputAudioBufferAppend()
final Boolean isInputAudioBufferClear()
final Boolean isOutputAudioBufferClear()
final Boolean isInputAudioBufferCommit()
final Boolean isResponseCancel()
final Boolean isResponseCreate()
final Boolean isSessionUpdate()
final Boolean isTranscriptionSessionUpdate()
final ConversationItemCreateEvent asConversationItemCreate()
    Add a new Item to the Conversation's context, including messages, function calls, and function call responses.
final ConversationItemDeleteEvent asConversationItemDelete()
    Send this event when you want to remove any item from the conversation history.
final ConversationItemRetrieveEvent asConversationItemRetrieve()
    Send this event when you want to retrieve the server's representation of a specific item in the conversation history.
final ConversationItemTruncateEvent asConversationItemTruncate()
    Send this event to truncate a previous assistant message’s audio.
final InputAudioBufferAppendEvent asInputAudioBufferAppend()
    Send this event to append audio bytes to the input audio buffer.
final InputAudioBufferClearEvent asInputAudioBufferClear()
    Send this event to clear the audio bytes in the buffer.
final RealtimeClientEvent.OutputAudioBufferClear asOutputAudioBufferClear()
    WebRTC Only: Emit to cut off the current audio response.
final InputAudioBufferCommitEvent asInputAudioBufferCommit()
    Send this event to commit the user input audio buffer, which will create a new user message item in the conversation.
final ResponseCancelEvent asResponseCancel()
    Send this event to cancel an in-progress response.
final ResponseCreateEvent asResponseCreate()
    This event instructs the server to create a Response, which means triggering model inference.
final SessionUpdateEvent asSessionUpdate()
    Send this event to update the session’s default configuration.
final TranscriptionSessionUpdate asTranscriptionSessionUpdate()
    Send this event to update a transcription session.
final Optional<JsonValue> _json()
final <T extends Any> T accept(RealtimeClientEvent.Visitor<T> visitor)
final RealtimeClientEvent validate()
final Boolean isValid()
Boolean equals(Object other)
Integer hashCode()
String toString()
final static RealtimeClientEvent ofConversationItemCreate(ConversationItemCreateEvent conversationItemCreate)
    Add a new Item to the Conversation's context, including messages, function calls, and function call responses.
final static RealtimeClientEvent ofConversationItemDelete(ConversationItemDeleteEvent conversationItemDelete)
    Send this event when you want to remove any item from the conversation history.
final static RealtimeClientEvent ofConversationItemRetrieve(ConversationItemRetrieveEvent conversationItemRetrieve)
    Send this event when you want to retrieve the server's representation of a specific item in the conversation history.
final static RealtimeClientEvent ofConversationItemTruncate(ConversationItemTruncateEvent conversationItemTruncate)
    Send this event to truncate a previous assistant message’s audio.
final static RealtimeClientEvent ofInputAudioBufferAppend(InputAudioBufferAppendEvent inputAudioBufferAppend)
    Send this event to append audio bytes to the input audio buffer.
final static RealtimeClientEvent ofInputAudioBufferClear(InputAudioBufferClearEvent inputAudioBufferClear)
    Send this event to clear the audio bytes in the buffer.
final static RealtimeClientEvent ofOutputAudioBufferClear(RealtimeClientEvent.OutputAudioBufferClear outputAudioBufferClear)
    WebRTC Only: Emit to cut off the current audio response.
final static RealtimeClientEvent ofInputAudioBufferCommit(InputAudioBufferCommitEvent inputAudioBufferCommit)
    Send this event to commit the user input audio buffer, which will create a new user message item in the conversation.
final static RealtimeClientEvent ofResponseCancel(ResponseCancelEvent responseCancel)
    Send this event to cancel an in-progress response.
final static RealtimeClientEvent ofResponseCreate(ResponseCreateEvent responseCreate)
    This event instructs the server to create a Response, which means triggering model inference.
final static RealtimeClientEvent ofSessionUpdate(SessionUpdateEvent sessionUpdate)
    Send this event to update the session’s default configuration.
final static RealtimeClientEvent ofTranscriptionSessionUpdate(TranscriptionSessionUpdate transcriptionSessionUpdate)
    Send this event to update a transcription session.
-
Method Detail
-
conversationItemCreate
final Optional<ConversationItemCreateEvent> conversationItemCreate()
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.
-
conversationItemDelete
final Optional<ConversationItemDeleteEvent> conversationItemDelete()
Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
-
conversationItemRetrieve
final Optional<ConversationItemRetrieveEvent> conversationItemRetrieve()
Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
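As an illustration of the factory counterpart to this accessor, the sketch below builds a retrieve event for a single item. The builder() and itemId(...) calls on ConversationItemRetrieveEvent are assumptions based on the SDK's usual builder pattern and the underlying item_id field; only the class name comes from this page.

    // Hypothetical sketch: request the server's representation of one conversation item,
    // e.g. to inspect user audio after noise cancellation and VAD.
    // ASSUMPTION: ConversationItemRetrieveEvent.builder().itemId(...) exists.
    RealtimeClientEvent retrieve = RealtimeClientEvent.ofConversationItemRetrieve(
            ConversationItemRetrieveEvent.builder()
                    .itemId("item_123") // placeholder id of the item to fetch
                    .build());
    // The server replies with conversation.item.retrieved, or an error if the item
    // does not exist in the conversation history.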
-
conversationItemTruncate
final Optional<ConversationItemTruncateEvent> conversationItemTruncate()
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated event.
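A sketch of the interruption flow described above. The builder and its itemId(...), contentIndex(...), and audioEndMs(...) setters are assumptions modeled on the underlying item_id, content_index, and audio_end_ms fields; none of them are documented on this page.

    // Hypothetical sketch: the user interrupted playback, so tell the server to drop the
    // audio (and its transcript) that was sent to the client but never heard.
    long playedMs = 1_250L; // how much of the assistant audio the client actually played
    RealtimeClientEvent truncate = RealtimeClientEvent.ofConversationItemTruncate(
            ConversationItemTruncateEvent.builder()
                    .itemId("item_123")   // placeholder id of the assistant message
                    .contentIndex(0L)     // content part that holds the audio
                    .audioEndMs(playedMs) // truncate everything after this point
                    .build());
    // On success the server responds with conversation.item.truncated.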
-
inputAudioBufferAppend
final Optional<InputAudioBufferAppendEvent> inputAudioBufferAppend()
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.
The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.
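For illustration, here is a sketch of streaming audio in small chunks rather than one large append, keeping each event far below the 15 MiB cap so server-side VAD can react quickly. The InputAudioBufferAppendEvent builder and its audio(...) setter taking base64-encoded bytes are assumptions based on the SDK's usual builder convention and the API's audio field; only the class name comes from this page.

    // Hypothetical sketch: split captured audio into small append events.
    static java.util.List<RealtimeClientEvent> toAppendEvents(byte[] capturedAudio) {
        final int chunkSize = 32 * 1024; // 32 KiB per event, well under the 15 MiB maximum
        java.util.List<RealtimeClientEvent> events = new java.util.ArrayList<>();
        for (int offset = 0; offset < capturedAudio.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, capturedAudio.length);
            String base64 = java.util.Base64.getEncoder()
                    .encodeToString(java.util.Arrays.copyOfRange(capturedAudio, offset, end));
            events.add(RealtimeClientEvent.ofInputAudioBufferAppend(
                    InputAudioBufferAppendEvent.builder().audio(base64).build()));
        }
        // The server sends no confirmation for appends; commit the buffer (or rely on
        // Server VAD) to turn the audio into a user message item.
        return events;
    }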
-
inputAudioBufferClear
final Optional<InputAudioBufferClearEvent> inputAudioBufferClear()
Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.
-
outputAudioBufferClear
final Optional<RealtimeClientEvent.OutputAudioBufferClear> outputAudioBufferClear()
WebRTC Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.
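The ordering requirement above (cancel first, then clear the output buffer) can be sketched as two client events sent back to back. The no-argument builders on ResponseCancelEvent and RealtimeClientEvent.OutputAudioBufferClear are assumptions based on the SDK's usual builder convention and are not documented on this page.

    // Hypothetical sketch (WebRTC only): stop generation, then cut off audio already
    // buffered on the server.
    RealtimeClientEvent cancel = RealtimeClientEvent.ofResponseCancel(
            ResponseCancelEvent.builder().build());
    RealtimeClientEvent clearOutput = RealtimeClientEvent.ofOutputAudioBufferClear(
            RealtimeClientEvent.OutputAudioBufferClear.builder().build());
    // Send cancel first, then clearOutput; the server emits output_audio_buffer.cleared.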
-
inputAudioBufferCommit
final Optional<InputAudioBufferCommitEvent> inputAudioBufferCommit()
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically.
Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
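When Server VAD is disabled, the client drives the commit itself. A minimal sketch, assuming InputAudioBufferCommitEvent and ResponseCreateEvent can be built with no required fields via the SDK's usual builders (not documented on this page):

    // Hypothetical sketch: after appending audio (see inputAudioBufferAppend above),
    // commit the buffer to create a user message item. Committing alone does not start
    // model inference, so a response.create event follows when a reply is wanted.
    RealtimeClientEvent commit = RealtimeClientEvent.ofInputAudioBufferCommit(
            InputAudioBufferCommitEvent.builder().build());
    RealtimeClientEvent createResponse = RealtimeClientEvent.ofResponseCreate(
            ResponseCreateEvent.builder().build());
    // Send commit (the server replies with input_audio_buffer.committed), then createResponse.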
-
responseCancel
final Optional<ResponseCancelEvent> responseCancel()
Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.
-
responseCreate
final Optional<ResponseCreateEvent> responseCreate()
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history.
The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.
The response.create event includes inference configuration like instructions and temperature. These fields will override the Session's configuration for this Response only.
-
sessionUpdate
final Optional<SessionUpdateEvent> sessionUpdate()
Send this event to update the session’s default configuration. The client may send this event at any time to update any field, except for voice. However, note that once a session has been initialized with a particular model, it can’t be changed to another model using session.update.
When the server receives a session.update, it will respond with a session.updated event showing the full, effective configuration. Only the fields that are present are updated. To clear a field like instructions, pass an empty string.
-
transcriptionSessionUpdate
final Optional<TranscriptionSessionUpdate> transcriptionSessionUpdate()
Send this event to update a transcription session.
-
isConversationItemCreate
final Boolean isConversationItemCreate()
-
isConversationItemDelete
final Boolean isConversationItemDelete()
-
isConversationItemRetrieve
final Boolean isConversationItemRetrieve()
-
isConversationItemTruncate
final Boolean isConversationItemTruncate()
-
isInputAudioBufferAppend
final Boolean isInputAudioBufferAppend()
-
isInputAudioBufferClear
final Boolean isInputAudioBufferClear()
-
isOutputAudioBufferClear
final Boolean isOutputAudioBufferClear()
-
isInputAudioBufferCommit
final Boolean isInputAudioBufferCommit()
-
isResponseCancel
final Boolean isResponseCancel()
-
isResponseCreate
final Boolean isResponseCreate()
-
isSessionUpdate
final Boolean isSessionUpdate()
-
isTranscriptionSessionUpdate
final Boolean isTranscriptionSessionUpdate()
-
asConversationItemCreate
final ConversationItemCreateEvent asConversationItemCreate()
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.
-
asConversationItemDelete
final ConversationItemDeleteEvent asConversationItemDelete()
Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
-
asConversationItemRetrieve
final ConversationItemRetrieveEvent asConversationItemRetrieve()
Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
-
asConversationItemTruncate
final ConversationItemTruncateEvent asConversationItemTruncate()
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated event.
-
asInputAudioBufferAppend
final InputAudioBufferAppendEvent asInputAudioBufferAppend()
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.
The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.
-
asInputAudioBufferClear
final InputAudioBufferClearEvent asInputAudioBufferClear()
Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.
-
asOutputAudioBufferClear
final RealtimeClientEvent.OutputAudioBufferClear asOutputAudioBufferClear()
WebRTC Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.
-
asInputAudioBufferCommit
final InputAudioBufferCommitEvent asInputAudioBufferCommit()
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically.
Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
-
asResponseCancel
final ResponseCancelEvent asResponseCancel()
Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.
-
asResponseCreate
final ResponseCreateEvent asResponseCreate()
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history.
The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.
The response.create event includes inference configuration like instructions and temperature. These fields will override the Session's configuration for this Response only.
-
asSessionUpdate
final SessionUpdateEvent asSessionUpdate()
Send this event to update the session’s default configuration. The client may send this event at any time to update any field, except for voice. However, note that once a session has been initialized with a particular model, it can’t be changed to another model using session.update.
When the server receives a session.update, it will respond with a session.updated event showing the full, effective configuration. Only the fields that are present are updated. To clear a field like instructions, pass an empty string.
-
asTranscriptionSessionUpdate
final TranscriptionSessionUpdate asTranscriptionSessionUpdate()
Send this event to update a transcription session.
-
accept
final <T extends Any> T accept(RealtimeClientEvent.Visitor<T> visitor)
-
validate
final RealtimeClientEvent validate()
-
ofConversationItemCreate
final static RealtimeClientEvent ofConversationItemCreate(ConversationItemCreateEvent conversationItemCreate)
Add a new Item to the Conversation's context, including messages, function calls, and function call responses. This event can be used both to populate a "history" of the conversation and to add new items mid-stream, but has the current limitation that it cannot populate assistant audio messages.
If successful, the server will respond with a conversation.item.created event, otherwise an error event will be sent.
-
ofConversationItemDelete
final static RealtimeClientEvent ofConversationItemDelete(ConversationItemDeleteEvent conversationItemDelete)
Send this event when you want to remove any item from the conversation history. The server will respond with a conversation.item.deleted event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
-
ofConversationItemRetrieve
final static RealtimeClientEvent ofConversationItemRetrieve(ConversationItemRetrieveEvent conversationItemRetrieve)
Send this event when you want to retrieve the server's representation of a specific item in the conversation history. This is useful, for example, to inspect user audio after noise cancellation and VAD. The server will respond with a conversation.item.retrieved event, unless the item does not exist in the conversation history, in which case the server will respond with an error.
-
ofConversationItemTruncate
final static RealtimeClientEvent ofConversationItemTruncate(ConversationItemTruncateEvent conversationItemTruncate)
Send this event to truncate a previous assistant message’s audio. The server will produce audio faster than realtime, so this event is useful when the user interrupts to truncate audio that has already been sent to the client but not yet played. This will synchronize the server's understanding of the audio with the client's playback.
Truncating audio will delete the server-side text transcript to ensure there is no text in the context that hasn't been heard by the user.
If successful, the server will respond with a conversation.item.truncated event.
-
ofInputAudioBufferAppend
final static RealtimeClientEvent ofInputAudioBufferAppend(InputAudioBufferAppendEvent inputAudioBufferAppend)
Send this event to append audio bytes to the input audio buffer. The audio buffer is temporary storage you can write to and later commit. In Server VAD mode, the audio buffer is used to detect speech and the server will decide when to commit. When Server VAD is disabled, you must commit the audio buffer manually.
The client may choose how much audio to place in each event, up to a maximum of 15 MiB; for example, streaming smaller chunks from the client may allow the VAD to be more responsive. Unlike most other client events, the server will not send a confirmation response to this event.
-
ofInputAudioBufferClear
final static RealtimeClientEvent ofInputAudioBufferClear(InputAudioBufferClearEvent inputAudioBufferClear)
Send this event to clear the audio bytes in the buffer. The server will respond with an input_audio_buffer.cleared event.
-
ofOutputAudioBufferClear
final static RealtimeClientEvent ofOutputAudioBufferClear(RealtimeClientEvent.OutputAudioBufferClear outputAudioBufferClear)
WebRTC Only: Emit to cut off the current audio response. This will trigger the server to stop generating audio and emit an output_audio_buffer.cleared event. This event should be preceded by a response.cancel client event to stop the generation of the current response.
-
ofInputAudioBufferCommit
final static RealtimeClientEvent ofInputAudioBufferCommit(InputAudioBufferCommitEvent inputAudioBufferCommit)
Send this event to commit the user input audio buffer, which will create a new user message item in the conversation. This event will produce an error if the input audio buffer is empty. When in Server VAD mode, the client does not need to send this event; the server will commit the audio buffer automatically.
Committing the input audio buffer will trigger input audio transcription (if enabled in session configuration), but it will not create a response from the model. The server will respond with an input_audio_buffer.committed event.
-
ofResponseCancel
final static RealtimeClientEvent ofResponseCancel(ResponseCancelEvent responseCancel)
Send this event to cancel an in-progress response. The server will respond with a response.cancelled event or an error if there is no response to cancel.
-
ofResponseCreate
final static RealtimeClientEvent ofResponseCreate(ResponseCreateEvent responseCreate)
This event instructs the server to create a Response, which means triggering model inference. When in Server VAD mode, the server will create Responses automatically.
A Response will include at least one Item, and may have two, in which case the second will be a function call. These Items will be appended to the conversation history.
The server will respond with a response.created event, events for Items and content created, and finally a response.done event to indicate the Response is complete.
The response.create event includes inference configuration like instructions and temperature. These fields will override the Session's configuration for this Response only.
-
ofSessionUpdate
final static RealtimeClientEvent ofSessionUpdate(SessionUpdateEvent sessionUpdate)
Send this event to update the session’s default configuration. The client may send this event at any time to update any field, except for voice. However, note that once a session has been initialized with a particular model, it can’t be changed to another model using session.update.
When the server receives a session.update, it will respond with a session.updated event showing the full, effective configuration. Only the fields that are present are updated. To clear a field like instructions, pass an empty string.
-
ofTranscriptionSessionUpdate
final static RealtimeClientEvent ofTranscriptionSessionUpdate(TranscriptionSessionUpdate transcriptionSessionUpdate)
Send this event to update a transcription session.
-