Package com.openai.models
Class ChatCompletionCreateParams.ServiceTier
All Implemented Interfaces:
    com.openai.core.Enum
public final class ChatCompletionCreateParams.ServiceTier implements Enum
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the Scale tier service:

- If set to 'auto', and the Project is Scale tier enabled, the system will utilize Scale tier credits until they are exhausted.
- If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.
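The tier-selection rules above can be sketched as a small decision function. This is a hedged illustration of the documented behavior only, not SDK code: the method name `resolveTier`, its parameters, and the `"scale"` result label are hypothetical.

```java
// Sketch of the documented tier-selection rules. Not part of the SDK;
// all names here are illustrative, and "scale"/"default" are labels for
// where the request would be processed, not ServiceTier constants.
public class ServiceTierResolution {

    static String resolveTier(String requested,
                              boolean scaleTierEnabled,
                              boolean scaleCreditsRemaining) {
        if (requested == null) {
            requested = "auto"; // when not set, the default behavior is 'auto'
        }
        if (requested.equals("auto")) {
            // 'auto' draws on Scale tier credits while the Project is Scale
            // tier enabled and credits remain; otherwise it falls back to the
            // default service tier.
            return (scaleTierEnabled && scaleCreditsRemaining) ? "scale" : "default";
        }
        // 'default' always processes on the default service tier.
        return "default";
    }

    public static void main(String[] args) {
        System.out.println(resolveTier("auto", true, true));    // scale
        System.out.println(resolveTier("auto", false, false));  // default
        System.out.println(resolveTier("default", true, true)); // default
        System.out.println(resolveTier(null, true, true));      // scale
    }
}
```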
Nested Class Summary

Nested Classes:
    Modifier and Type    Class
    public enum          ChatCompletionCreateParams.ServiceTier.Known
    public enum          ChatCompletionCreateParams.ServiceTier.Value
Field Summary

Fields:
    Modifier and Type                                            Field
    public final static ChatCompletionCreateParams.ServiceTier   AUTO
    public final static ChatCompletionCreateParams.ServiceTier   DEFAULT
Method Summary

    Modifier and Type                                      Method
    final JsonField<String>                                _value()
    final ChatCompletionCreateParams.ServiceTier.Value     value()
    final ChatCompletionCreateParams.ServiceTier.Known     known()
    final String                                           asString()
    Boolean                                                equals(Object other)
    Integer                                                hashCode()
    String                                                 toString()
    final static ChatCompletionCreateParams.ServiceTier    of(String value)
Method Detail

value
    final ChatCompletionCreateParams.ServiceTier.Value value()

known
    final ChatCompletionCreateParams.ServiceTier.Known known()

of
    final static ChatCompletionCreateParams.ServiceTier of(String value)
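The paired `value()` / `known()` accessors follow the forward-compatible enum pattern common in generated SDKs: one accessor maps unrecognized strings to a sentinel so new server-side values do not break older clients, while the other throws so callers can insist on handling every tier. The sketch below is assumed from the signatures above; it is not the SDK implementation, and names such as `_UNKNOWN` and the exact throwing behavior of `known()` are assumptions.

```java
import java.util.Objects;

// Minimal self-contained sketch of the Known/Value enum-wrapper pattern
// suggested by the signatures above. NOT the SDK source; the _UNKNOWN
// sentinel and known()'s exception are assumed conventions.
public final class ServiceTierSketch {
    public enum Known { AUTO, DEFAULT }            // values this client recognizes
    public enum Value { AUTO, DEFAULT, _UNKNOWN }  // adds a sentinel for unrecognized values

    public static final ServiceTierSketch AUTO = of("auto");
    public static final ServiceTierSketch DEFAULT = of("default");

    private final String value;

    private ServiceTierSketch(String value) { this.value = value; }

    // of(String) accepts any string, so responses carrying a tier this
    // client version does not know about can still be represented.
    public static ServiceTierSketch of(String value) { return new ServiceTierSketch(value); }

    // Never throws: unrecognized strings map to the _UNKNOWN sentinel.
    public Value value() {
        switch (value) {
            case "auto":    return Value.AUTO;
            case "default": return Value.DEFAULT;
            default:        return Value._UNKNOWN;
        }
    }

    // Throws for unrecognized strings: use when every tier must be handled.
    public Known known() {
        switch (value) {
            case "auto":    return Known.AUTO;
            case "default": return Known.DEFAULT;
            default:        throw new IllegalStateException("Unknown ServiceTier: " + value);
        }
    }

    public String asString() { return value; }

    @Override public boolean equals(Object other) {
        return other instanceof ServiceTierSketch
            && value.equals(((ServiceTierSketch) other).value);
    }
    @Override public int hashCode() { return Objects.hash(value); }
    @Override public String toString() { return value; }

    public static void main(String[] args) {
        System.out.println(AUTO.value());          // AUTO
        System.out.println(of("premium").value()); // _UNKNOWN
    }
}
```

Under this pattern, deserializing a response with a tier added after this client was released degrades gracefully through `value()` instead of failing outright.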