Class PutLlamaRequest
java.lang.Object
  co.elastic.clients.elasticsearch._types.RequestBase
    co.elastic.clients.elasticsearch.inference.PutLlamaRequest
- All Implemented Interfaces:
- JsonpSerializable
Create a Llama inference endpoint.

Create an inference endpoint to perform an inference task with the llama service.
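As a usage sketch (assuming the standard lambda-builder style of the Elasticsearch Java API client; the setter names mirror the methods documented on this page, but the enum constant names and the service-settings fields shown are illustrative assumptions, not confirmed by this page):

```java
// Sketch only: builds a PutLlamaRequest via the static of() factory
// documented below. llamaInferenceId, taskType, service, serviceSettings,
// and timeout are the methods listed on this page; the enum constants and
// the url/modelId settings fields are hypothetical placeholders.
PutLlamaRequest request = PutLlamaRequest.of(r -> r
    .llamaInferenceId("my-llama-endpoint")   // Required: unique endpoint id
    .taskType(LlamaTaskType.Completion)      // Required: inference task type (constant name assumed)
    .service(LlamaServiceType.Llama)         // Required: must be the llama service (constant name assumed)
    .serviceSettings(s -> s                  // Required: llama-specific settings (fields assumed)
        .url("http://localhost:8321/v1/completions")
        .modelId("llama3.2-3b"))
    .timeout(t -> t.time("30s"))             // Optional: wait time for endpoint creation
);
```

The lambda passed to `of` receives a `PutLlamaRequest.Builder` and must return an `ObjectBuilder<PutLlamaRequest>`, matching the signature in the Method Details below.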
Nested Class Summary

Nested classes/interfaces inherited from class co.elastic.clients.elasticsearch._types.RequestBase:
- RequestBase.AbstractBuilder<BuilderT extends RequestBase.AbstractBuilder<BuilderT>>

Field Summary

Fields:
- static final JsonpDeserializer<PutLlamaRequest> _DESERIALIZER: Json deserializer for PutLlamaRequest
- static final Endpoint<PutLlamaRequest, PutLlamaResponse, ErrorResponse> _ENDPOINT: Endpoint "inference.put_llama".

Method Summary

- chunkingSettings(): The chunking configuration object.
- final String llamaInferenceId(): Required - The unique identifier of the inference endpoint.
- static PutLlamaRequest of(Function<PutLlamaRequest.Builder, ObjectBuilder<PutLlamaRequest>> fn)
- void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper): Serialize this object to JSON.
- protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
- final LlamaServiceType service(): Required - The type of service supported for the specified task type.
- final LlamaServiceSettings serviceSettings(): Required - Settings used to install the inference model.
- protected static void setupPutLlamaRequestDeserializer(ObjectDeserializer<PutLlamaRequest.Builder> op)
- final LlamaTaskType taskType(): Required - The type of the inference task that the model will perform.
- final Time timeout(): Specifies the amount of time to wait for the inference endpoint to be created.

Methods inherited from class co.elastic.clients.elasticsearch._types.RequestBase: toString

Field Details

_DESERIALIZER
  Json deserializer for PutLlamaRequest

_ENDPOINT
  Endpoint "inference.put_llama".

Method Details

of
  public static PutLlamaRequest of(Function<PutLlamaRequest.Builder, ObjectBuilder<PutLlamaRequest>> fn)

chunkingSettings
  The chunking configuration object.
  API name: chunking_settings

llamaInferenceId
  Required - The unique identifier of the inference endpoint.
  API name: llama_inference_id

service
  Required - The type of service supported for the specified task type. In this case, llama.
  API name: service

serviceSettings
  Required - Settings used to install the inference model. These settings are specific to the llama service.
  API name: service_settings

taskType
  Required - The type of the inference task that the model will perform.
  API name: task_type

timeout
  Specifies the amount of time to wait for the inference endpoint to be created.
  API name: timeout

serialize
  Serialize this object to JSON.
  Specified by: serialize in interface JsonpSerializable
 
serializeInternal
  protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)

setupPutLlamaRequestDeserializer
  protected static void setupPutLlamaRequestDeserializer(ObjectDeserializer<PutLlamaRequest.Builder> op)
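Because PutLlamaRequest implements JsonpSerializable, the serialize method documented above can be used to render the request as JSON, for example for logging or debugging. A sketch, assuming a previously built request and the JacksonJsonpMapper that the Java API client bundles by default (mapper setup may differ in your application):

```java
// Sketch: serialize an existing PutLlamaRequest to a JSON string
// through its JsonpSerializable contract. JsonpMapper.jsonProvider()
// supplies the jakarta.json generator factory.
JsonpMapper mapper = new JacksonJsonpMapper();
StringWriter writer = new StringWriter();
try (jakarta.json.stream.JsonGenerator generator =
         mapper.jsonProvider().createGenerator(writer)) {
    request.serialize(generator, mapper);  // serialize(JsonGenerator, JsonpMapper), as documented above
}
String json = writer.toString();
```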