Class UpdateByQueryRequest
- All Implemented Interfaces:
JsonpSerializable
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:
read, index, or write
You can specify the query criteria in the request URI or the request body using the same syntax as the search API.
When you submit an update by query request, Elasticsearch gets a snapshot of
the data stream or index when it begins processing the request and updates
matching documents using internal versioning. When the versions match, the
document is updated and the version number is incremented. If a document
changes between the time that the snapshot is taken and the update operation
is processed, it results in a version conflict and the operation fails. You
can opt to count version conflicts instead of halting and returning by
setting conflicts to proceed. Note that if you opt
to count version conflicts, the operation could attempt to update more
documents from the source than max_docs until it has
successfully updated max_docs documents or it has gone through
every document in the source query.
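As a sketch, the proceed behavior can be set through this request's builder. The client instance, index name, and document cap below are hypothetical, not taken from this page:

```java
// Hypothetical sketch: assumes an ElasticsearchClient `client` and an index
// named "my-index"; neither comes from this page.
UpdateByQueryResponse response = client.updateByQuery(u -> u
    .index("my-index")
    .conflicts(Conflicts.Proceed) // count version conflicts instead of failing
    .maxDocs(1000L)               // stop after 1000 successful updates
);
long conflicts = response.versionConflicts(); // conflicts counted, not fatal
```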
NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.
While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.
Refreshing shards
Specifying the refresh parameter refreshes all shards once the
request completes. This is different from the update API's refresh
parameter, which causes only the shard that received the request to be
refreshed. Unlike the update API, it does not support wait_for.
Running update by query asynchronously
If the request contains wait_for_completion=false, Elasticsearch
performs some preflight checks, launches the request, and returns a task
you can use to cancel or get the status of the task. Elasticsearch creates a
record of this task as a document at .tasks/task/${taskId}.
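A minimal sketch of the asynchronous form, again assuming a hypothetical ElasticsearchClient named client and a made-up index name:

```java
// Hypothetical sketch: launch the operation as a background task.
UpdateByQueryResponse response = client.updateByQuery(u -> u
    .index("my-index")          // hypothetical index name
    .waitForCompletion(false)   // return immediately with a task ID
);
String taskId = response.task(); // use with the tasks API to poll or cancel
```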
Waiting for active shards
wait_for_active_shards controls how many copies of a shard must
be active before proceeding with the request. See wait_for_active_shards
for details. timeout controls how long each write request waits
for unavailable shards to become available. Both work exactly the way they
work in the Bulk
API. Update by query uses scrolled searches, so you can also specify the
scroll parameter to control how long it keeps the search context
alive, for example ?scroll=10m. The default is 5 minutes.
Throttling update requests
To control the rate at which update by query issues batches of update
operations, you can set requests_per_second to any positive
decimal number. This pads each batch with a wait time to throttle the rate.
Set requests_per_second to -1 to turn off
throttling.
Throttling uses a wait time between batches so that the internal scroll
requests can be given a timeout that takes the request padding into account.
The padding time is the difference between the batch size divided by the
requests_per_second and the time spent writing. By default the
batch size is 1000, so if requests_per_second is set to
500:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
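The padding arithmetic above can be restated directly. This sketch just mirrors the worked example: the default batch size of 1000, requests_per_second set to 500, and the example's assumed write time of 0.5 seconds:

```java
// Mirrors the worked example above; the 0.5 s write time is the example's
// assumption, not a measured value.
double batchSize = 1000.0;          // default scroll batch size
double requestsPerSecond = 500.0;   // requests_per_second setting
double writeTime = 0.5;             // time spent writing the batch, in seconds

double targetTime = batchSize / requestsPerSecond; // 2.0 seconds per batch
double waitTime = targetTime - writeTime;          // 1.5 seconds of padding
```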
Slicing
Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
Setting slices to auto chooses a reasonable number
for most data streams and indices. This setting will use one slice per shard,
up to a certain limit. If there are multiple source data streams or indices,
it will choose the number of slices based on the index or backing index with
the smallest number of shards.
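As a sketch, automatic slicing can be requested through the builder's slices union type. The client and index here are hypothetical, and SlicesCalculation.Auto is assumed to be the client's enum value for the auto setting:

```java
// Hypothetical sketch: let Elasticsearch pick one slice per shard.
UpdateByQueryResponse response = client.updateByQuery(u -> u
    .index("my-index")                               // hypothetical index name
    .slices(s -> s.computed(SlicesCalculation.Auto)) // "auto" slicing
);
```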
Adding slices to _update_by_query just automates
the manual process of creating sub-requests, which means it has some quirks:
- You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains the status of completed slices.
- These sub-requests are individually addressable for things like cancellation and rethrottling.
- Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
- Canceling the request with slices will cancel each sub-request.
- Due to the nature of slices, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
- Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
- Each sub-request gets a slightly different snapshot of the source data stream or index, though these are all taken at approximately the same time.
If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:
- Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number, as too many slices hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
- Update performance scales linearly across available resources with the number of slices.
Whether query or update performance dominates the runtime depends on the
documents being reindexed and cluster resources. Refer to the linked
documentation for examples of how to update documents using the
_update_by_query API.
Nested Class Summary
Nested classes/interfaces inherited from class co.elastic.clients.elasticsearch._types.RequestBase:
RequestBase.AbstractBuilder<BuilderT extends RequestBase.AbstractBuilder<BuilderT>> -
Field Summary
static final JsonpDeserializer<UpdateByQueryRequest> _DESERIALIZER - Json deserializer for UpdateByQueryRequest
static final Endpoint<UpdateByQueryRequest, UpdateByQueryResponse, ErrorResponse> _ENDPOINT - Endpoint "update_by_query". -
Method Summary
final Boolean allowNoIndices() - If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices.
final String analyzer() - The analyzer to use for the query string.
final Boolean analyzeWildcard() - If true, wildcard and prefix queries are analyzed.
final Conflicts conflicts() - The preferred behavior when update by query hits version conflicts: abort or proceed.
final Operator defaultOperator() - The default operator for query string query: and or or.
final String df() - The field to use as default where no field prefix is given in the query string.
final List<ExpandWildcard> expandWildcards() - The type of index that wildcard patterns can match.
final Long from() - Skips the specified number of documents.
final Boolean ignoreUnavailable() - If false, the request returns an error if it targets a missing or closed index.
final List<String> index() - Required - A comma-separated list of data streams, indices, and aliases to search.
final Boolean lenient() - If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
final Long maxDocs() - The maximum number of documents to update.
static UpdateByQueryRequest of(Function<UpdateByQueryRequest.Builder, ObjectBuilder<UpdateByQueryRequest>> fn)
final String pipeline() - The ID of the pipeline to use to preprocess incoming documents.
final String preference() - The node or shard the operation should be performed on.
final String q() - A query in the Lucene query string syntax.
final Query query() - The documents to update using the Query DSL.
final Boolean refresh() - If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes.
final Boolean requestCache() - If true, the request cache is used for this request.
final Float requestsPerSecond() - The throttle for this request in sub-requests per second.
final String routing() - A custom value used to route operations to a specific shard.
final Script script() - The script to run to update the document source or metadata when updating.
final Time scroll() - The period to retain the search context for scrolling.
final Long scrollSize() - The size of the scroll request that powers the operation.
final Time searchTimeout() - An explicit timeout for each search request.
final SearchType searchType() - The type of the search operation.
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper) - Serialize this object to JSON.
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static void setupUpdateByQueryRequestDeserializer(ObjectDeserializer<UpdateByQueryRequest.Builder> op)
final SlicedScroll slice() - Slice the request manually using the provided slice ID and total number of slices.
final Slices slices() - The number of slices this task should be divided into.
final List<String> sort() - A comma-separated list of <field>:<direction> pairs.
final List<String> stats() - The specific tag of the request for logging and statistical purposes.
final Long terminateAfter() - The maximum number of documents to collect for each shard.
final Time timeout() - The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards.
final Boolean version() - If true, returns the document version as part of a hit.
final Boolean versionType() - Should the document increment the version number (internal) on hit or not (reindex).
final WaitForActiveShards waitForActiveShards() - The number of shard copies that must be active before proceeding with the operation.
final Boolean waitForCompletion() - If true, the request blocks until the operation is complete.
Methods inherited from class co.elastic.clients.elasticsearch._types.RequestBase:
toString
-
Field Details
-
_DESERIALIZER
Json deserializer for UpdateByQueryRequest -
_ENDPOINT
Endpoint "update_by_query".
-
-
Method Details
-
of
public static UpdateByQueryRequest of(Function<UpdateByQueryRequest.Builder, ObjectBuilder<UpdateByQueryRequest>> fn) -
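A hedged sketch of how this factory is typically used, with a lambda that configures the builder. The index name, field, and value below are made up for illustration:

```java
// Hypothetical sketch: build a request without calling a client.
UpdateByQueryRequest request = UpdateByQueryRequest.of(b -> b
    .index("my-index")                                         // hypothetical
    .query(q -> q.term(t -> t.field("status").value("stale"))) // hypothetical field
);
```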
allowNoIndices
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. API name:
allow_no_indices -
analyzeWildcard
If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified. API name:
analyze_wildcard -
analyzer
The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified. API name:
analyzer -
conflicts
The preferred behavior when update by query hits version conflicts: abort or proceed. API name:
conflicts -
defaultOperator
The default operator for query string query: and or or. This parameter can be used only when the q query string parameter is specified. API name:
default_operator -
df
The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified. API name:
df -
expandWildcards
The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden. API name:
expand_wildcards -
from
Skips the specified number of documents. API name:
from -
index
Required - A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all. API name:
index -
lenient
If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified. API name:
lenient -
maxDocs
The maximum number of documents to update. API name:
max_docs -
pipeline
The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter. API name:
pipeline -
preference
The node or shard the operation should be performed on. It is random by default. API name:
preference -
q
A query in the Lucene query string syntax. API name:
q -
query
The documents to update using the Query DSL. API name:
query -
refresh
If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different from the update API's refresh parameter, which causes just the shard that received the request to be refreshed. API name:
refresh -
requestCache
If true, the request cache is used for this request. It defaults to the index-level setting. API name:
request_cache -
requestsPerSecond
The throttle for this request in sub-requests per second. API name:
requests_per_second -
routing
A custom value used to route operations to a specific shard. API name:
routing -
script
The script to run to update the document source or metadata when updating. API name:
script -
scroll
The period to retain the search context for scrolling. API name:
scroll -
scrollSize
The size of the scroll request that powers the operation. API name:
scroll_size -
searchTimeout
An explicit timeout for each search request. By default, there is no timeout. API name:
search_timeout -
searchType
The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch. API name:
search_type -
slice
Slice the request manually using the provided slice ID and total number of slices. API name:
slice -
slices
The number of slices this task should be divided into. API name:
slices -
sort
A comma-separated list of <field>:<direction> pairs. API name:
sort -
stats
The specific tag of the request for logging and statistical purposes. API name:
stats -
terminateAfter
The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
API name:
terminate_after -
timeout
The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur. API name:
timeout -
version
If true, returns the document version as part of a hit. API name:
version -
versionType
Should the document increment the version number (internal) on hit or not (reindex). API name:
version_type -
waitForActiveShards
The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API. API name:
wait_for_active_shards -
waitForCompletion
If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}. API name:
wait_for_completion -
serialize
Serialize this object to JSON.
- Specified by: serialize in interface JsonpSerializable
-
serializeInternal
-
setupUpdateByQueryRequestDeserializer
protected static void setupUpdateByQueryRequestDeserializer(ObjectDeserializer<UpdateByQueryRequest.Builder> op)
-