Class PutLlamaRequest.Builder
java.lang.Object
    co.elastic.clients.util.ObjectBuilderBase
        co.elastic.clients.util.WithJsonObjectBuilderBase<BuilderT>
            co.elastic.clients.elasticsearch._types.RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
                co.elastic.clients.elasticsearch.inference.PutLlamaRequest.Builder
- All Implemented Interfaces:
WithJson<PutLlamaRequest.Builder>, ObjectBuilder<PutLlamaRequest>
- Enclosing class:
PutLlamaRequest
public static class PutLlamaRequest.Builder
extends RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
implements ObjectBuilder<PutLlamaRequest>
Builder for PutLlamaRequest.
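A minimal usage sketch (not taken from this page): the enum constants LlamaTaskType.TextEmbedding and LlamaServiceType.Llama, the LlamaServiceSettings fields, and the client call in the comment are assumptions about the surrounding inference API, hedged accordingly.

    import co.elastic.clients.elasticsearch.inference.LlamaServiceType;
    import co.elastic.clients.elasticsearch.inference.LlamaTaskType;
    import co.elastic.clients.elasticsearch.inference.PutLlamaRequest;

    public class PutLlamaBuilderSketch {

        // Assembles a PutLlamaRequest with the fluent builder described on this page.
        static PutLlamaRequest buildRequest() {
            return new PutLlamaRequest.Builder()
                    .llamaInferenceId("my-llama-endpoint")      // required path parameter
                    .taskType(LlamaTaskType.TextEmbedding)      // required; constant name assumed
                    .service(LlamaServiceType.Llama)            // required; constant name assumed
                    .serviceSettings(s -> s                     // required; field names assumed
                            .url("http://localhost:8321/v1/openai/v1")
                            .modelId("llama3.2:3b"))
                    .timeout(t -> t.time("30s"))                // optional creation timeout
                    .build();
        }
    }

The same request can usually be expressed through the generated static shortcut PutLlamaRequest.of(b -> b...), which wraps this builder, and then sent via the inference namespace of an ElasticsearchClient (for example esClient.inference().putLlama(request); the method name is assumed to mirror the request name).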
Constructor Summary

Constructors
Builder()
Method Summary

PutLlamaRequest build()
    Builds a PutLlamaRequest.
final PutLlamaRequest.Builder chunkingSettings(InferenceChunkingSettings value)
    The chunking configuration object.
final PutLlamaRequest.Builder chunkingSettings(Function<InferenceChunkingSettings.Builder, ObjectBuilder<InferenceChunkingSettings>> fn)
    The chunking configuration object.
final PutLlamaRequest.Builder llamaInferenceId(String value)
    Required - The unique identifier of the inference endpoint.
protected PutLlamaRequest.Builder self()
final PutLlamaRequest.Builder service(LlamaServiceType value)
    Required - The type of service supported for the specified task type.
final PutLlamaRequest.Builder serviceSettings(LlamaServiceSettings value)
    Required - Settings used to install the inference model.
final PutLlamaRequest.Builder serviceSettings(Function<LlamaServiceSettings.Builder, ObjectBuilder<LlamaServiceSettings>> fn)
    Required - Settings used to install the inference model.
final PutLlamaRequest.Builder taskType(LlamaTaskType value)
    Required - The type of the inference task that the model will perform.
final PutLlamaRequest.Builder timeout(Time value)
    Specifies the amount of time to wait for the inference endpoint to be created.
final PutLlamaRequest.Builder timeout(Function<Time.Builder, ObjectBuilder<Time>> fn)
    Specifies the amount of time to wait for the inference endpoint to be created.

Methods inherited from class co.elastic.clients.util.WithJsonObjectBuilderBase
withJson

Methods inherited from class co.elastic.clients.util.ObjectBuilderBase
_checkSingleUse, _listAdd, _listAddAll, _mapPut, _mapPutAll
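The inherited withJson methods can populate the builder from a JSON source. A sketch, assuming withJson(Reader) accepts the request's body fields (service, service_settings, chunking_settings) while path-style parameters such as llama_inference_id and task_type are still set directly; the url and model_id values are placeholders:

    import co.elastic.clients.elasticsearch.inference.LlamaTaskType;
    import co.elastic.clients.elasticsearch.inference.PutLlamaRequest;
    import java.io.StringReader;

    public class PutLlamaWithJsonSketch {

        static PutLlamaRequest fromJson() {
            String body = """
                    {
                      "service": "llama",
                      "service_settings": {
                        "url": "http://localhost:8321/v1/openai/v1",
                        "model_id": "llama3.2:3b"
                      }
                    }
                    """;
            return new PutLlamaRequest.Builder()
                    .llamaInferenceId("my-llama-endpoint")   // path parameter, not part of the JSON body
                    .taskType(LlamaTaskType.TextEmbedding)   // path parameter; constant name assumed
                    .withJson(new StringReader(body))        // inherited from WithJsonObjectBuilderBase
                    .build();
        }
    }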
Constructor Details
Builder
public Builder()
Method Details
chunkingSettings
public final PutLlamaRequest.Builder chunkingSettings(InferenceChunkingSettings value)
The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
API name: chunking_settings
chunkingSettings
public final PutLlamaRequest.Builder chunkingSettings(Function<InferenceChunkingSettings.Builder, ObjectBuilder<InferenceChunkingSettings>> fn)
The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
API name: chunking_settings
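A sketch of the lambda overload above, continuing the imports from the first example; the InferenceChunkingSettings builder fields shown (strategy, maxChunkSize, sentenceOverlap) are assumptions about that class, not documented on this page:

    // Chunking only matters for text_embedding endpoints, per the description above.
    PutLlamaRequest.Builder builder = new PutLlamaRequest.Builder()
            .llamaInferenceId("my-llama-embeddings")
            .taskType(LlamaTaskType.TextEmbedding)   // constant name assumed
            .chunkingSettings(c -> c                 // builds InferenceChunkingSettings inline
                    .strategy("sentence")            // assumed field
                    .maxChunkSize(250)               // assumed field
                    .sentenceOverlap(1));            // assumed field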
llamaInferenceId
public final PutLlamaRequest.Builder llamaInferenceId(String value)
Required - The unique identifier of the inference endpoint.
API name: llama_inference_id
service
public final PutLlamaRequest.Builder service(LlamaServiceType value)
Required - The type of service supported for the specified task type. In this case, llama.
API name: service
serviceSettings
public final PutLlamaRequest.Builder serviceSettings(LlamaServiceSettings value)
Required - Settings used to install the inference model. These settings are specific to the llama service.
API name: service_settings
serviceSettings
public final PutLlamaRequest.Builder serviceSettings(Function<LlamaServiceSettings.Builder, ObjectBuilder<LlamaServiceSettings>> fn)
Required - Settings used to install the inference model. These settings are specific to the llama service.
API name: service_settings
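The two overloads above are interchangeable: one takes a pre-built LlamaServiceSettings, the other builds it inline. A sketch, assuming LlamaServiceSettings is imported from co.elastic.clients.elasticsearch.inference; the url and modelId fields are assumptions about that class:

    // Pre-built value, then the value overload:
    LlamaServiceSettings settings = LlamaServiceSettings.of(s -> s
            .url("http://localhost:8321/v1/openai/v1")   // assumed field
            .modelId("llama3.2:3b"));                    // assumed field
    builder.serviceSettings(settings);

    // Or the Function overload, which builds the same object inline:
    builder.serviceSettings(s -> s
            .url("http://localhost:8321/v1/openai/v1")
            .modelId("llama3.2:3b"));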
taskType
public final PutLlamaRequest.Builder taskType(LlamaTaskType value)
Required - The type of the inference task that the model will perform.
API name: task_type
timeout
public final PutLlamaRequest.Builder timeout(Time value)
Specifies the amount of time to wait for the inference endpoint to be created.
API name: timeout
timeout
public final PutLlamaRequest.Builder timeout(Function<Time.Builder, ObjectBuilder<Time>> fn)
Specifies the amount of time to wait for the inference endpoint to be created.
API name: timeout
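Both timeout variants set the same creation timeout; a brief sketch, where Time comes from co.elastic.clients.elasticsearch._types and builder continues from the earlier snippets:

    // Pre-built Time value:
    builder.timeout(Time.of(t -> t.time("30s")));

    // Or built inline with the lambda overload:
    builder.timeout(t -> t.time("30s"));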
self
protected PutLlamaRequest.Builder self()
- Specified by:
self in class RequestBase.AbstractBuilder<PutLlamaRequest.Builder>
build
public PutLlamaRequest build()
Builds a PutLlamaRequest.
- Specified by:
build in interface ObjectBuilder<PutLlamaRequest>
- Throws:
NullPointerException - if some of the required fields are null.
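As noted above, build() fails when required properties are missing; a sketch that catches broadly, since only the NullPointerException documented above is guaranteed by this page:

    PutLlamaRequest.Builder incomplete = new PutLlamaRequest.Builder()
            .llamaInferenceId("my-llama-endpoint");   // service, serviceSettings, taskType still unset
    try {
        incomplete.build();
    } catch (RuntimeException e) {
        // Required fields were null, so the builder refused to produce a request.
        System.err.println("Invalid request: " + e.getMessage());
    }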