Class UpdateByQueryRequest
- All Implemented Interfaces:
JsonpSerializable
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:
- read
- index or write
You can specify the query criteria in the request URI or the request body using the same syntax as the search API.
When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs until it has successfully updated max_docs documents or it has gone through every document in the source query.
NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.
While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail, and the failures are shown in the response. Any update requests that completed successfully are not rolled back.
Throttling update requests
To control the rate at which update by query issues batches of update operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to turn off throttling.
Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
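The wait-time arithmetic above can be sketched in plain Java (the batch size of 1000, requests_per_second of 500, and write time of 0.5 seconds are the example's own numbers):

```java
// Sketch of the batch-throttling arithmetic described above.
public class ThrottleMath {

    // Padding added after a batch: the target time for the batch minus the
    // time actually spent writing it.
    static double waitTimeSeconds(int batchSize, double requestsPerSecond, double writeTimeSeconds) {
        double targetTimeSeconds = batchSize / requestsPerSecond; // 1000 / 500 = 2 s
        return targetTimeSeconds - writeTimeSeconds;              // 2 - 0.5 = 1.5 s
    }

    public static void main(String[] args) {
        System.out.println(waitTimeSeconds(1000, 500, 0.5)); // prints 1.5
    }
}
```

With requests_per_second set to -1, no such padding is applied at all.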
Slicing
Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
Setting slices to auto chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.
Adding slices to _update_by_query just automates the manual process of creating sub-requests, which means it has some quirks:
- You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains the status of completed slices.
- These sub-requests are individually addressable for things like cancellation and rethrottling.
- Rethrottling the request with slices will rethrottle the unfinished sub-requests proportionally.
- Canceling the request with slices will cancel each sub-request.
- Due to the nature of slices, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
- Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
- Each sub-request gets a slightly different snapshot of the source data stream or index, though these are all taken at approximately the same time.
If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:
- Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number, as too many slices hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
- Update performance scales linearly across available resources with the number of slices.
Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.
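As a sketch of how slices is set through this client's builder (the index name is hypothetical; Slices is a union type, so the value is either computed automatically or given as an explicit number):

```java
import co.elastic.clients.elasticsearch._types.SlicesCalculation;
import co.elastic.clients.elasticsearch.core.UpdateByQueryRequest;

public class SlicingSketch {
    public static void main(String[] args) {
        // Let Elasticsearch choose one slice per shard, up to a limit.
        UpdateByQueryRequest auto = UpdateByQueryRequest.of(u -> u
                .index("my-index") // hypothetical index name
                .slices(s -> s.computed(SlicesCalculation.Auto)));

        // Or split the work into a fixed number of sub-requests.
        UpdateByQueryRequest manual = UpdateByQueryRequest.of(u -> u
                .index("my-index")
                .slices(s -> s.value(5)));
    }
}
```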
Update the document source
Update by query supports scripts to update the document source. As with the update API, you can set ctx.op to change the operation that is performed.
Set ctx.op = "noop" if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the noop counter.
Set ctx.op = "delete" if your script decides that the document should be deleted. The update by query operation deletes the document and increments the deleted counter.
Update by query supports only index, noop, and delete. Setting ctx.op to anything else is an error. Setting any other field in ctx is an error. This API enables you to modify only the source of matching documents; you cannot move them.
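A sketch of wiring such a script into the request with this client (the index name, field, and value are hypothetical; the Script builder shape shown here is the inline variant used by 8.x clients and may differ in other client versions):

```java
import co.elastic.clients.elasticsearch.core.UpdateByQueryRequest;

public class NoopScriptSketch {
    public static void main(String[] args) {
        // Painless source: documents already in the target state become no-ops
        // (incrementing the noop counter); the rest get their source updated.
        String painless =
                "if (ctx._source.status == 'published') { ctx.op = 'noop' } "
              + "else { ctx._source.status = 'published' }";

        UpdateByQueryRequest req = UpdateByQueryRequest.of(u -> u
                .index("my-index") // hypothetical index name
                .script(s -> s.inline(i -> i.lang("painless").source(painless))));
    }
}
```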
Nested Class Summary
Nested classes/interfaces inherited from class co.elastic.clients.elasticsearch._types.RequestBase:
RequestBase.AbstractBuilder<BuilderT extends RequestBase.AbstractBuilder<BuilderT>>
-
Field Summary
Fields:
- static final JsonpDeserializer<UpdateByQueryRequest> _DESERIALIZER: Json deserializer for UpdateByQueryRequest
- static final Endpoint<UpdateByQueryRequest, UpdateByQueryResponse, ErrorResponse> _ENDPOINT: Endpoint "update_by_query".
-
Method Summary
- final Boolean allowNoIndices(): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices.
- final String analyzer(): The analyzer to use for the query string.
- final Boolean analyzeWildcard(): If true, wildcard and prefix queries are analyzed.
- final Conflicts conflicts(): The preferred behavior when update by query hits version conflicts: abort or proceed.
- final Operator defaultOperator(): The default operator for query string query: AND or OR.
- final String df(): The field to use as default where no field prefix is given in the query string.
- final List<ExpandWildcard> expandWildcards(): The type of index that wildcard patterns can match.
- final Long from(): Skips the specified number of documents.
- final Boolean ignoreUnavailable(): If false, the request returns an error if it targets a missing or closed index.
- final List<String> index(): Required - A comma-separated list of data streams, indices, and aliases to search.
- final Boolean lenient(): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
- final Long maxDocs(): The maximum number of documents to update.
- static UpdateByQueryRequest of(Function<UpdateByQueryRequest.Builder, ObjectBuilder<UpdateByQueryRequest>> fn)
- final String pipeline(): The ID of the pipeline to use to preprocess incoming documents.
- final String preference(): The node or shard the operation should be performed on.
- final String q(): A query in the Lucene query string syntax.
- final Query query(): The documents to update using the Query DSL.
- final Boolean refresh(): If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes.
- final Boolean requestCache(): If true, the request cache is used for this request.
- final Float requestsPerSecond(): The throttle for this request in sub-requests per second.
- final String routing(): A custom value used to route operations to a specific shard.
- final Script script(): The script to run to update the document source or metadata when updating.
- final Time scroll(): The period to retain the search context for scrolling.
- final Long scrollSize(): The size of the scroll request that powers the operation.
- final Time searchTimeout(): An explicit timeout for each search request.
- final SearchType searchType(): The type of the search operation.
- void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper): Serialize this object to JSON.
- protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
- protected static void setupUpdateByQueryRequestDeserializer(ObjectDeserializer<UpdateByQueryRequest.Builder> op)
- final SlicedScroll slice(): Slice the request manually using the provided slice ID and total number of slices.
- final Slices slices(): The number of slices this task should be divided into.
- final List<String> sort(): A comma-separated list of <field>:<direction> pairs.
- final List<String> stats(): The specific tag of the request for logging and statistical purposes.
- final Long terminateAfter(): The maximum number of documents to collect for each shard.
- final Time timeout(): The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards.
- final Boolean version(): If true, returns the document version as part of a hit.
- final Boolean versionType(): Should the document increment the version number (internal) on hit or not (reindex).
- final WaitForActiveShards waitForActiveShards(): The number of shard copies that must be active before proceeding with the operation.
- final Boolean waitForCompletion(): If true, the request blocks until the operation is complete.
Methods inherited from class co.elastic.clients.elasticsearch._types.RequestBase:
toString
-
Field Details
-
_DESERIALIZER
Json deserializer for UpdateByQueryRequest
-
_ENDPOINT
Endpoint "update_by_query".
-
-
Method Details
-
of
public static UpdateByQueryRequest of(Function<UpdateByQueryRequest.Builder, ObjectBuilder<UpdateByQueryRequest>> fn) -
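For example, a minimal request built through of(), combining parameters documented below (the index name and query field are hypothetical):

```java
import co.elastic.clients.elasticsearch._types.Conflicts;
import co.elastic.clients.elasticsearch.core.UpdateByQueryRequest;

public class OfSketch {
    public static void main(String[] args) {
        UpdateByQueryRequest req = UpdateByQueryRequest.of(u -> u
                .index("my-index")            // hypothetical index name
                .conflicts(Conflicts.Proceed) // count version conflicts instead of failing
                .maxDocs(1000L)               // stop after 1000 successful updates
                .requestsPerSecond(500F)      // throttle the batches
                .query(q -> q.term(t -> t
                        .field("user.id")     // hypothetical field and value
                        .value("kimchy"))));
    }
}
```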
allowNoIndices
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
API name: allow_no_indices
-
analyzeWildcard
If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.
API name: analyze_wildcard
-
analyzer
The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.
API name: analyzer
-
conflicts
The preferred behavior when update by query hits version conflicts: abort or proceed.
API name: conflicts
-
defaultOperator
The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.
API name: default_operator
-
df
The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.
API name: df
-
expandWildcards
The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
API name: expand_wildcards
-
from
Skips the specified number of documents.
API name: from
-
index
Required - A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.
API name: index
-
lenient
If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.
API name: lenient
-
maxDocs
The maximum number of documents to update.
API name: max_docs
-
pipeline
The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run, regardless of the value of this parameter.
API name: pipeline
-
preference
The node or shard the operation should be performed on. It is random by default.
API name: preference
-
q
A query in the Lucene query string syntax.
API name: q
-
query
The documents to update using the Query DSL.
API name: query
-
refresh
If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API's refresh parameter, which causes just the shard that received the request to be refreshed.
API name: refresh
-
requestCache
If true, the request cache is used for this request. It defaults to the index-level setting.
API name: request_cache
-
requestsPerSecond
The throttle for this request in sub-requests per second.
API name: requests_per_second
-
routing
A custom value used to route operations to a specific shard.
API name: routing
-
script
The script to run to update the document source or metadata when updating.
API name: script
-
scroll
The period to retain the search context for scrolling.
API name: scroll
-
scrollSize
The size of the scroll request that powers the operation.
API name: scroll_size
-
searchTimeout
An explicit timeout for each search request. By default, there is no timeout.
API name: search_timeout
-
searchType
The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.
API name: search_type
-
slice
Slice the request manually using the provided slice ID and total number of slices.
API name: slice
-
slices
The number of slices this task should be divided into.
API name: slices
-
sort
A comma-separated list of <field>:<direction> pairs.
API name: sort
-
stats
The specific tag of the request for logging and statistical purposes.
API name: stats
-
terminateAfter
The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.
IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
API name: terminate_after
-
timeout
The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.
API name: timeout
-
version
If true, returns the document version as part of a hit.
API name: version
-
versionType
Should the document increment the version number (internal) on hit or not (reindex).
API name: version_type
-
waitForActiveShards
The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API.
API name: wait_for_active_shards
-
waitForCompletion
If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}.
API name: wait_for_completion
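A sketch of launching the operation without blocking (the index name is hypothetical); the task ID carried in the response can then be fed to the tasks API:

```java
import co.elastic.clients.elasticsearch.core.UpdateByQueryRequest;

public class AsyncLaunchSketch {
    public static void main(String[] args) {
        // Ask Elasticsearch to return a task ID immediately instead of
        // blocking until every matching document is updated.
        UpdateByQueryRequest req = UpdateByQueryRequest.of(u -> u
                .index("my-index")         // hypothetical index name
                .waitForCompletion(false)
                .requestsPerSecond(100F)); // the task can be rethrottled later
    }
}
```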
-
serialize
Serialize this object to JSON.
- Specified by: serialize in interface JsonpSerializable
-
serializeInternal
-
setupUpdateByQueryRequestDeserializer
protected static void setupUpdateByQueryRequestDeserializer(ObjectDeserializer<UpdateByQueryRequest.Builder> op)
-