Class PutLlamaRequest.Builder

java.lang.Object
  co.elastic.clients.util.ObjectBuilderBase
    co.elastic.clients.util.WithJsonObjectBuilderBase<BuilderT>
      co.elastic.clients.elasticsearch._types.RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
        co.elastic.clients.elasticsearch.inference.PutLlamaRequest.Builder

All Implemented Interfaces:
WithJson<PutLlamaRequest.Builder>, ObjectBuilder<PutLlamaRequest>

Enclosing class:
PutLlamaRequest

public static class PutLlamaRequest.Builder
extends RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
implements ObjectBuilder<PutLlamaRequest>

Builder for PutLlamaRequest.
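For context, a usage sketch of the fluent builder. The endpoint id, the enum constants, and the service-settings field names (`url`, `modelId`) are illustrative assumptions, not taken from this page; consult LlamaTaskType, LlamaServiceType, and LlamaServiceSettings for the actual constants and fields.

```java
import co.elastic.clients.elasticsearch.inference.PutLlamaRequest;

// Sketch: assemble a PutLlamaRequest with the fluent builder.
// Enum constants and service-settings fields below are assumptions.
PutLlamaRequest request = new PutLlamaRequest.Builder()
    .llamaInferenceId("my-llama-endpoint")      // hypothetical endpoint id
    .taskType(LlamaTaskType.TextEmbedding)      // assumed enum constant
    .service(LlamaServiceType.Llama)            // assumed enum constant
    .serviceSettings(ss -> ss
        .url("http://localhost:8321/v1")        // hypothetical LlamaServiceSettings fields
        .modelId("llama3.2:3b"))
    .build();   // throws NullPointerException if a required field is unset
```

The lambda passed to `serviceSettings` receives a fresh `LlamaServiceSettings.Builder`, so the nested object never has to be constructed by hand.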
Constructor Summary

Constructors
Builder()

Method Summary

PutLlamaRequest build()
    Builds a PutLlamaRequest.
final PutLlamaRequest.Builder chunkingSettings(InferenceChunkingSettings value)
    The chunking configuration object.
final PutLlamaRequest.Builder chunkingSettings(Function<InferenceChunkingSettings.Builder, ObjectBuilder<InferenceChunkingSettings>> fn)
    The chunking configuration object.
final PutLlamaRequest.Builder llamaInferenceId(String value)
    Required - The unique identifier of the inference endpoint.
protected PutLlamaRequest.Builder self()
final PutLlamaRequest.Builder service(LlamaServiceType value)
    Required - The type of service supported for the specified task type.
final PutLlamaRequest.Builder serviceSettings(LlamaServiceSettings value)
    Required - Settings used to install the inference model.
final PutLlamaRequest.Builder serviceSettings(Function<LlamaServiceSettings.Builder, ObjectBuilder<LlamaServiceSettings>> fn)
    Required - Settings used to install the inference model.
final PutLlamaRequest.Builder taskType(LlamaTaskType value)
    Required - The type of the inference task that the model will perform.
final PutLlamaRequest.Builder timeout(Time value)
    Specifies the amount of time to wait for the inference endpoint to be created.
final PutLlamaRequest.Builder timeout(Function<Time.Builder, ObjectBuilder<Time>> fn)
    Specifies the amount of time to wait for the inference endpoint to be created.

Methods inherited from class co.elastic.clients.util.WithJsonObjectBuilderBase
withJson

Methods inherited from class co.elastic.clients.util.ObjectBuilderBase
_checkSingleUse, _listAdd, _listAddAll, _mapPut, _mapPutAll
Constructor Details

Builder
public Builder()
Method Details

chunkingSettings
public final PutLlamaRequest.Builder chunkingSettings(InferenceChunkingSettings value)
The chunking configuration object.
API name: chunking_settings
chunkingSettings
public final PutLlamaRequest.Builder chunkingSettings(Function<InferenceChunkingSettings.Builder, ObjectBuilder<InferenceChunkingSettings>> fn)
The chunking configuration object.
API name: chunking_settings
llamaInferenceId
public final PutLlamaRequest.Builder llamaInferenceId(String value)
Required - The unique identifier of the inference endpoint.
API name: llama_inference_id
service
public final PutLlamaRequest.Builder service(LlamaServiceType value)
Required - The type of service supported for the specified task type. In this case, llama.
API name: service
serviceSettings
public final PutLlamaRequest.Builder serviceSettings(LlamaServiceSettings value)
Required - Settings used to install the inference model. These settings are specific to the llama service.
API name: service_settings
serviceSettings
public final PutLlamaRequest.Builder serviceSettings(Function<LlamaServiceSettings.Builder, ObjectBuilder<LlamaServiceSettings>> fn)
Required - Settings used to install the inference model. These settings are specific to the llama service.
API name: service_settings
taskType
public final PutLlamaRequest.Builder taskType(LlamaTaskType value)
Required - The type of the inference task that the model will perform.
API name: task_type
timeout
public final PutLlamaRequest.Builder timeout(Time value)
Specifies the amount of time to wait for the inference endpoint to be created.
API name: timeout
timeout
public final PutLlamaRequest.Builder timeout(Function<Time.Builder, ObjectBuilder<Time>> fn)
Specifies the amount of time to wait for the inference endpoint to be created.
API name: timeout
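Both timeout overloads set the same field; the Function variant builds the nested Time value inline. A sketch (the "30s" value and the `builder` variable are illustrative):

```java
// Equivalent ways to set the timeout on a PutLlamaRequest.Builder:
builder.timeout(t -> t.time("30s"));      // lambda builds the Time inline

Time timeout = Time.of(t -> t.time("30s"));
builder.timeout(timeout);                 // pass a pre-built Time value
```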
self
protected PutLlamaRequest.Builder self()

Specified by:
self in class RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
build
public PutLlamaRequest build()
Builds a PutLlamaRequest.

Specified by:
build in interface ObjectBuilder<PutLlamaRequest>
Throws:
NullPointerException - if some of the required fields are null.