Package com.openai.models
Class ChatCompletionCreateParams.ServiceTier
All Implemented Interfaces:
com.openai.core.Enum
public final class ChatCompletionCreateParams.ServiceTier implements Enum

Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
- If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
- If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.
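The routing rules above can be sketched as a small decision helper. This is a hypothetical illustration of the documented behavior, not SDK code; the helper name, its parameters, and the 'scale' label are assumptions:

```java
// Hypothetical helper illustrating the documented tier-routing rules.
// Not part of the SDK; names and parameters are assumptions.
final class TierRouting {
    // Returns the tier a request is effectively processed under.
    static String resolveTier(String requested, boolean projectScaleEnabled,
                              boolean scaleCreditsRemaining) {
        if (requested == null) {
            requested = "auto"; // when not set, the default behavior is 'auto'
        }
        if ("auto".equals(requested) && projectScaleEnabled && scaleCreditsRemaining) {
            return "scale"; // utilize scale tier credits until they are exhausted
        }
        // 'default', or 'auto' without usable scale tier capacity: the default
        // service tier, with a lower uptime SLA and no latency guarantee.
        return "default";
    }
}
```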
Nested Class Summary
Modifier and Type   Class
public enum         ChatCompletionCreateParams.ServiceTier.Known
public enum         ChatCompletionCreateParams.ServiceTier.Value
Field Summary
Modifier and Type                                            Field
public final static ChatCompletionCreateParams.ServiceTier   AUTO
public final static ChatCompletionCreateParams.ServiceTier   DEFAULT
Method Summary
Modifier and Type                                     Method
final JsonField<String>                               _value()
final ChatCompletionCreateParams.ServiceTier.Value    value()
final ChatCompletionCreateParams.ServiceTier.Known    known()
final String                                          asString()
Boolean                                               equals(Object other)
Integer                                               hashCode()
String                                                toString()
final static ChatCompletionCreateParams.ServiceTier   of(String value)
Method Detail
value

final ChatCompletionCreateParams.ServiceTier.Value value()

known

final ChatCompletionCreateParams.ServiceTier.Known known()

of

final static ChatCompletionCreateParams.ServiceTier of(String value)
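The Known/Value split in the nested classes suggests a forward-compatible enum pattern: of(String) accepts any string, value() maps strings the class does not recognize to a catch-all member, and known() only succeeds for recognized values. A minimal self-contained sketch of that pattern follows; the semantics are assumed from the class shape, and ServiceTierSketch is an illustrative stand-in, not SDK source:

```java
import java.util.Objects;

// Self-contained sketch of the forward-compatible enum pattern implied by
// the Known/Value nested enums. Assumed semantics; not SDK source code.
final class ServiceTierSketch {
    // Known: only the values this class was compiled against.
    enum Known { AUTO, DEFAULT }
    // Value: Known plus a catch-all for strings a newer server may return.
    enum Value { AUTO, DEFAULT, _UNKNOWN }

    static final ServiceTierSketch AUTO = new ServiceTierSketch("auto");
    static final ServiceTierSketch DEFAULT = new ServiceTierSketch("default");

    private final String value;

    private ServiceTierSketch(String value) { this.value = value; }

    // of() accepts any string, so unrecognized tiers do not fail at parse time.
    static ServiceTierSketch of(String value) { return new ServiceTierSketch(value); }

    String asString() { return value; }

    // value() never throws: unrecognized strings map to _UNKNOWN.
    Value value() {
        switch (value) {
            case "auto":    return Value.AUTO;
            case "default": return Value.DEFAULT;
            default:        return Value._UNKNOWN;
        }
    }

    // known() throws for values this class does not recognize.
    Known known() {
        switch (value) {
            case "auto":    return Known.AUTO;
            case "default": return Known.DEFAULT;
            default: throw new IllegalStateException("Unknown ServiceTier: " + value);
        }
    }

    @Override public boolean equals(Object other) {
        return other instanceof ServiceTierSketch
            && Objects.equals(value, ((ServiceTierSketch) other).value);
    }
    @Override public int hashCode() { return value.hashCode(); }
    @Override public String toString() { return value; }
}
```

Under this pattern, callers that must tolerate values added by a newer server check value() against _UNKNOWN instead of calling known().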