Class ElasticsearchIndicesAsyncClient
- All Implemented Interfaces: Closeable, AutoCloseable
-
Field Summary
Fields inherited from class co.elastic.clients.ApiClient
transport, transportOptions
-
Constructor Summary
Constructors
ElasticsearchIndicesAsyncClient(ElasticsearchTransport transport, TransportOptions transportOptions)
Method Summary
All methods return a CompletableFuture of the corresponding response type. Most operations come in a variant that takes a request object and a final variant that takes a builder lambda (Function<Request.Builder, ObjectBuilder<Request>>); some also have a no-argument variant. The existsXxx methods return CompletableFuture<BooleanResponse>.
- addBlock: Add an index block.
- analyze: Get tokens from text analysis.
- cancelMigrateReindex: Cancel a migration reindex operation.
- clearCache: Clear the cache.
- clone: Clone an index.
- close: Close an index.
- create: Create an index.
- createDataStream: Create a data stream.
- createFrom: Create an index from a source index.
- dataStreamsStats: Get data stream stats.
- delete: Delete indices.
- deleteAlias: Delete an alias.
- deleteDataLifecycle: Delete data stream lifecycles.
- deleteDataStream: Delete data streams.
- deleteIndexTemplate: Delete an index template.
- deleteTemplate: Delete a legacy index template.
- diskUsage: Analyze the index disk usage.
- downsample: Downsample an index.
- exists: Check indices.
- existsAlias: Check aliases.
- existsIndexTemplate: Check index templates.
- existsTemplate: Check existence of index templates.
- explainDataLifecycle: Get the status for a data stream lifecycle.
- fieldUsageStats: Get field usage stats.
- flush: Flush data streams or indices.
- forcemerge: Force a merge.
- get: Get index information.
- getAlias: Get aliases.
- getDataLifecycle: Get data stream lifecycles.
- getDataLifecycleStats: Get data stream lifecycle stats.
- getDataStream: Get data streams.
- getFieldMapping: Get mapping definitions.
- getIndexTemplate: Get index templates.
- getMapping: Get mapping definitions.
- getMigrateReindexStatus: Get the migration reindexing status.
- getSettings: Get index settings.
- getTemplate: Get index templates.
- migrateReindex: Reindex legacy backing indices.
- migrateToDataStream: Convert an index alias to a data stream.
- modifyDataStream: Update data streams.
- open: Open a closed index.
- promoteDataStream: Promote a data stream.
- putAlias: Create or update an alias.
- putDataLifecycle: Update data stream lifecycles.
- putIndexTemplate: Create or update an index template.
- putMapping: Update field mappings.
- putSettings: Update index settings.
- putTemplate: Create or update an index template.
- recovery: Get index recovery information.
- refresh: Refresh an index.
- reloadSearchAnalyzers: Reload search analyzers.
- resolveCluster: Resolve the cluster.
- resolveIndex: Resolve indices.
- rollover: Roll over to a new index.
- segments: Get index segments.
- shardStores: Get index shard stores.
- shrink: Shrink an index.
- simulateIndexTemplate: Simulate an index.
- simulateTemplate: Simulate an index template.
- split: Split an index.
- stats: Get index statistics.
- updateAliases: Create or update an alias.
- validateQuery: Validate a query.
- withTransportOptions: Creates a new client with some request options.
Methods inherited from class co.elastic.clients.ApiClient
_jsonpMapper, _transport, _transportOptions, close, getDeserializer, withTransportOptions
-
Constructor Details
-
ElasticsearchIndicesAsyncClient
public ElasticsearchIndicesAsyncClient(ElasticsearchTransport transport, @Nullable TransportOptions transportOptions)
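In typical use this class is not constructed directly: it is reached through the indices() namespace of the top-level async client. A minimal setup sketch following the standard transport pattern (host, port, and the Jackson mapper are placeholders for your own configuration):

import co.elastic.clients.elasticsearch.ElasticsearchAsyncClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

// Low-level HTTP client pointing at the cluster (placeholder address).
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();

// Transport using Jackson for JSON (de)serialization.
ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());

// The indices namespace client; `indices` is reused in the sketches below.
ElasticsearchIndicesAsyncClient indices = new ElasticsearchAsyncClient(transport).indices();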
-
-
Method Details
-
withTransportOptions
public ElasticsearchIndicesAsyncClient withTransportOptions(@Nullable TransportOptions transportOptions)
Description copied from class: ApiClient
Creates a new client with some request options.
- Specified by: withTransportOptions in class ApiClient<ElasticsearchTransport, ElasticsearchIndicesAsyncClient>
-
addBlock
public CompletableFuture<AddBlockResponse> addBlock(AddBlockRequest request)
Add an index block. Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
- See Also:
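As an illustrative sketch, using the `indices` client from the setup above: the index name is a placeholder, and IndicesBlockOptions.Write is assumed to be the add_block block-kind enum in recent client versions, so verify it against yours.

indices.addBlock(b -> b
        .index("my-index")                   // index to block (placeholder)
        .block(IndicesBlockOptions.Write))   // disallow writes, keep reads
    .thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));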
-
addBlock
public final CompletableFuture<AddBlockResponse> addBlock(Function<AddBlockRequest.Builder, ObjectBuilder<AddBlockRequest>> fn)
Add an index block. Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
- Parameters: fn - a function that initializes a builder to create the AddBlockRequest
- See Also:
-
analyze
public CompletableFuture<AnalyzeResponse> analyze(AnalyzeRequest request)
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens gets generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- See Also:
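For example, a hedged sketch that tokenizes a string with the built-in standard analyzer (no index targeted, so the 10000-token default limit applies):

indices.analyze(a -> a
        .analyzer("standard")             // built-in analyzer name
        .text("The quick brown fox"))     // text to tokenize
    .thenAccept(resp -> resp.tokens()
        .forEach(t -> System.out.println(t.token())));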
-
analyze
public final CompletableFuture<AnalyzeResponse> analyze(Function<AnalyzeRequest.Builder, ObjectBuilder<AnalyzeRequest>> fn)
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens gets generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- Parameters: fn - a function that initializes a builder to create the AnalyzeRequest
- See Also:
-
analyze
public CompletableFuture<AnalyzeResponse> analyze()
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens gets generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- See Also:
-
cancelMigrateReindex
public CompletableFuture<CancelMigrateReindexResponse> cancelMigrateReindex(CancelMigrateReindexRequest request)
Cancel a migration reindex operation. Cancel a migration reindex attempt for a data stream or index.
- See Also:
-
cancelMigrateReindex
public final CompletableFuture<CancelMigrateReindexResponse> cancelMigrateReindex(Function<CancelMigrateReindexRequest.Builder, ObjectBuilder<CancelMigrateReindexRequest>> fn)
Cancel a migration reindex operation. Cancel a migration reindex attempt for a data stream or index.
- Parameters: fn - a function that initializes a builder to create the CancelMigrateReindexRequest
- See Also:
-
clearCache
public CompletableFuture<ClearCacheResponse> clearCache(ClearCacheRequest request)
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.
- See Also:
-
clearCache
public final CompletableFuture<ClearCacheResponse> clearCache(Function<ClearCacheRequest.Builder, ObjectBuilder<ClearCacheRequest>> fn)
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.
- Parameters: fn - a function that initializes a builder to create the ClearCacheRequest
- See Also:
-
clearCache
public CompletableFuture<ClearCacheResponse> clearCache()
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.
- See Also:
-
clone
public CompletableFuture<CloneIndexResponse> clone(CloneIndexRequest request)
Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.
IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.
Cloning works as follows:
- First, it creates a new target index with the same definition as the source index.
- Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be cloned if they meet the following requirements:
- The index must be marked as read-only and have a cluster health status of green.
- The target index must not exist.
- The source index must have the same number of primary shards as the target index.
- The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.
NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.
Monitor the cloning process
The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.
The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait for active shards
Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
- See Also:
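A sketch of the usual two-step flow, assuming the IndicesBlockOptions enum noted under addBlock above; index names are placeholders:

// Step 1: make the source read-only, a precondition for cloning.
indices.addBlock(b -> b.index("my-source").block(IndicesBlockOptions.Write))
    // Step 2: clone into a target index that does not exist yet.
    .thenCompose(r -> indices.clone(c -> c
        .index("my-source")       // existing read-only source index
        .target("my-target")));   // must not exist yet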
-
clone
public final CompletableFuture<CloneIndexResponse> clone(Function<CloneIndexRequest.Builder, ObjectBuilder<CloneIndexRequest>> fn)
Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.
IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.
Cloning works as follows:
- First, it creates a new target index with the same definition as the source index.
- Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be cloned if they meet the following requirements:
- The index must be marked as read-only and have a cluster health status of green.
- The target index must not exist.
- The source index must have the same number of primary shards as the target index.
- The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.
NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.
Monitor the cloning process
The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.
The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait for active shards
Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
- Parameters: fn - a function that initializes a builder to create the CloneIndexRequest
- See Also:
-
close
public CompletableFuture<CloseIndexResponse> close(CloseIndexRequest request)
Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
- See Also:
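For illustration (placeholder index name; wildcard forms would additionally require the cluster setting described above):

indices.close(c -> c.index("my-index"))
    .thenAccept(resp -> System.out.println("closed: " + resp.acknowledged()));

// Later, the index can be reopened with the open API:
indices.open(o -> o.index("my-index"));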
-
close
public final CompletableFuture<CloseIndexResponse> close(Function<CloseIndexRequest.Builder, ObjectBuilder<CloseIndexRequest>> fn)
Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
- Parameters: fn - a function that initializes a builder to create the CloseIndexRequest
- See Also:
-
create
public CompletableFuture<CreateIndexResponse> create(CreateIndexRequest request)
Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases.
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
- See Also:
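For example, a sketch that creates an index with settings, a simple mapping, and an alias in one request (all names are placeholders):

indices.create(c -> c
        .index("products")
        .settings(s -> s
            .numberOfShards("1")       // settings values are passed as strings
            .numberOfReplicas("1"))
        .mappings(m -> m
            .properties("name", p -> p.text(t -> t))
            .properties("sku", p -> p.keyword(k -> k)))
        .aliases("products-alias", a -> a))
    .whenComplete((resp, ex) -> {
        if (ex != null) {
            ex.printStackTrace();      // request failed at the transport level
        } else {
            // Both flags relate to the timeout semantics described above.
            System.out.println(resp.acknowledged() + " / " + resp.shardsAcknowledged());
        }
    });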
-
create
public final CompletableFuture<CreateIndexResponse> create(Function<CreateIndexRequest.Builder, ObjectBuilder<CreateIndexRequest>> fn)
Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases.
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
- Parameters: fn - a function that initializes a builder to create the CreateIndexRequest
- See Also:
-
createDataStream
public CompletableFuture<CreateDataStreamResponse> createDataStream(CreateDataStreamRequest request)
Create a data stream. You must have a matching index template with data stream enabled.
- See Also:
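For example (assuming a matching index template with data_stream enabled already exists; the stream name is a placeholder):

indices.createDataStream(d -> d.name("logs-myapp-default"))
    .thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));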
-
createDataStream
public final CompletableFuture<CreateDataStreamResponse> createDataStream(Function<CreateDataStreamRequest.Builder, ObjectBuilder<CreateDataStreamRequest>> fn)
Create a data stream. You must have a matching index template with data stream enabled.
- Parameters: fn - a function that initializes a builder to create the CreateDataStreamRequest
- See Also:
-
createFrom
public CompletableFuture<CreateFromResponse> createFrom(CreateFromRequest request)
Create an index from a source index. Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.
- See Also:
-
createFrom
public final CompletableFuture<CreateFromResponse> createFrom(Function<CreateFromRequest.Builder, ObjectBuilder<CreateFromRequest>> fn)
Create an index from a source index. Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.
- Parameters: fn - a function that initializes a builder to create the CreateFromRequest
- See Also:
-
dataStreamsStats
public CompletableFuture<DataStreamsStatsResponse> dataStreamsStats(DataStreamsStatsRequest request)
Get data stream stats. Get statistics for one or more data streams.
- See Also:
-
dataStreamsStats
public final CompletableFuture<DataStreamsStatsResponse> dataStreamsStats(Function<DataStreamsStatsRequest.Builder, ObjectBuilder<DataStreamsStatsRequest>> fn)
Get data stream stats. Get statistics for one or more data streams.
- Parameters: fn - a function that initializes a builder to create the DataStreamsStatsRequest
- See Also:
-
dataStreamsStats
public CompletableFuture<DataStreamsStatsResponse> dataStreamsStats()
Get data stream stats. Get statistics for one or more data streams.
- See Also:
-
delete
public CompletableFuture<DeleteIndexResponse> delete(DeleteIndexRequest request)
Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.
You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
- See Also:
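For illustration (placeholder name; ignoreUnavailable mirrors the API's ignore_unavailable flag so the call does not fail if the index is already gone):

indices.delete(d -> d
        .index("my-old-index")
        .ignoreUnavailable(true))  // don't fail if it's already absent
    .thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));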
-
delete
public final CompletableFuture<DeleteIndexResponse> delete(Function<DeleteIndexRequest.Builder, ObjectBuilder<DeleteIndexRequest>> fn)
Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.
You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
- Parameters: fn - a function that initializes a builder to create the DeleteIndexRequest
- See Also:
-
deleteAlias
public CompletableFuture<DeleteAliasResponse> deleteAlias(DeleteAliasRequest request)
Delete an alias. Removes a data stream or index from an alias.
- See Also:
-
deleteAlias
public final CompletableFuture<DeleteAliasResponse> deleteAlias(Function<DeleteAliasRequest.Builder, ObjectBuilder<DeleteAliasRequest>> fn)
Delete an alias. Removes a data stream or index from an alias.
- Parameters: fn - a function that initializes a builder to create the DeleteAliasRequest
- See Also:
-
deleteDataLifecycle
public CompletableFuture<DeleteDataLifecycleResponse> deleteDataLifecycle(DeleteDataLifecycleRequest request)
Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
- See Also:
-
deleteDataLifecycle
public final CompletableFuture<DeleteDataLifecycleResponse> deleteDataLifecycle(Function<DeleteDataLifecycleRequest.Builder, ObjectBuilder<DeleteDataLifecycleRequest>> fn)
Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
- Parameters: fn - a function that initializes a builder to create the DeleteDataLifecycleRequest
- See Also:
-
deleteDataStream
public CompletableFuture<DeleteDataStreamResponse> deleteDataStream(DeleteDataStreamRequest request)
Delete data streams. Deletes one or more data streams and their backing indices.
- See Also:
-
deleteDataStream
public final CompletableFuture<DeleteDataStreamResponse> deleteDataStream(Function<DeleteDataStreamRequest.Builder, ObjectBuilder<DeleteDataStreamRequest>> fn)
Delete data streams. Deletes one or more data streams and their backing indices.
- Parameters: fn - a function that initializes a builder to create the DeleteDataStreamRequest
- See Also:
-
deleteIndexTemplate
public CompletableFuture<DeleteIndexTemplateResponse> deleteIndexTemplate(DeleteIndexTemplateRequest request)
Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
- See Also:
-
deleteIndexTemplate
public final CompletableFuture<DeleteIndexTemplateResponse> deleteIndexTemplate(Function<DeleteIndexTemplateRequest.Builder, ObjectBuilder<DeleteIndexTemplateRequest>> fn)
Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
- Parameters: fn - a function that initializes a builder to create the DeleteIndexTemplateRequest
- See Also:
-
deleteTemplate
public CompletableFuture<DeleteTemplateResponse> deleteTemplate(DeleteTemplateRequest request)
Delete a legacy index template.
- See Also:
-
deleteTemplate
public final CompletableFuture<DeleteTemplateResponse> deleteTemplate(Function<DeleteTemplateRequest.Builder, ObjectBuilder<DeleteTemplateRequest>> fn)
Delete a legacy index template.
- Parameters: fn - a function that initializes a builder to create the DeleteTemplateRequest
- See Also:
-
diskUsage
public CompletableFuture<DiskUsageResponse> diskUsage(DiskUsageRequest request)
Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.
NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.
- See Also:
-
diskUsage
public final CompletableFuture<DiskUsageResponse> diskUsage(Function<DiskUsageRequest.Builder, ObjectBuilder<DiskUsageRequest>> fn)
Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.
NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.
- Parameters: fn - a function that initializes a builder to create the DiskUsageRequest
- See Also:
-
downsample
public CompletableFuture<DownsampleResponse> downsample(DownsampleRequest request)
Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
- See Also:
-
downsample
public final CompletableFuture<DownsampleResponse> downsample(Function<DownsampleRequest.Builder, ObjectBuilder<DownsampleRequest>> fn)
Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
- Parameters: fn - a function that initializes a builder to create the DownsampleRequest
- See Also:
-
exists
public CompletableFuture<BooleanResponse> exists(ExistsRequest request)
Check indices. Check if one or more indices, index aliases, or data streams exist.
- See Also:
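For illustration (placeholder index name):

indices.exists(e -> e.index("my-index"))
    .thenAccept(resp -> {
        if (resp.value()) {   // BooleanResponse wraps the underlying yes/no result
            System.out.println("my-index exists");
        }
    });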
-
exists
public final CompletableFuture<BooleanResponse> exists(Function<ExistsRequest.Builder, ObjectBuilder<ExistsRequest>> fn)
Check indices. Check if one or more indices, index aliases, or data streams exist.
- Parameters: fn - a function that initializes a builder to create the ExistsRequest
- See Also:
-
existsAlias
public CompletableFuture<BooleanResponse> existsAlias(ExistsAliasRequest request)
Check aliases. Check if one or more data stream or index aliases exist.
- See Also:
-
existsAlias
public final CompletableFuture<BooleanResponse> existsAlias(Function<ExistsAliasRequest.Builder, ObjectBuilder<ExistsAliasRequest>> fn)
Check aliases. Check if one or more data stream or index aliases exist.
- Parameters: fn - a function that initializes a builder to create the ExistsAliasRequest
- See Also:
-
existsIndexTemplate
public CompletableFuture<BooleanResponse> existsIndexTemplate(ExistsIndexTemplateRequest request)
Check index templates. Check whether index templates exist.
- See Also:
-
existsIndexTemplate
public final CompletableFuture<BooleanResponse> existsIndexTemplate(Function<ExistsIndexTemplateRequest.Builder, ObjectBuilder<ExistsIndexTemplateRequest>> fn)
Check index templates. Check whether index templates exist.
- Parameters: fn - a function that initializes a builder to create the ExistsIndexTemplateRequest
- See Also:
-
existsTemplate
public CompletableFuture<BooleanResponse> existsTemplate(ExistsTemplateRequest request)
Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- See Also:
-
existsTemplate
public final CompletableFuture<BooleanResponse> existsTemplate(Function<ExistsTemplateRequest.Builder, ObjectBuilder<ExistsTemplateRequest>> fn)
Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Parameters: fn - a function that initializes a builder to create the ExistsTemplateRequest
- See Also:
-
explainDataLifecycle
public CompletableFuture<ExplainDataLifecycleResponse> explainDataLifecycle(ExplainDataLifecycleRequest request)
Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
- See Also:
-
explainDataLifecycle
public final CompletableFuture<ExplainDataLifecycleResponse> explainDataLifecycle(Function<ExplainDataLifecycleRequest.Builder, ObjectBuilder<ExplainDataLifecycleRequest>> fn)
Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
- Parameters: fn - a function that initializes a builder to create the ExplainDataLifecycleRequest
- See Also:
-
fieldUsageStats
public CompletableFuture<FieldUsageStatsResponse> fieldUsageStats(FieldUsageStatsRequest request)
Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
- See Also:
-
fieldUsageStats
public final CompletableFuture<FieldUsageStatsResponse> fieldUsageStats(Function<FieldUsageStatsRequest.Builder, ObjectBuilder<FieldUsageStatsRequest>> fn)
Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
- Parameters: fn - a function that initializes a builder to create the FieldUsageStatsRequest
- See Also:
-
flush
public CompletableFuture<FlushResponse> flush(FlushRequest request)
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- See Also:
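Direct calls are rarely needed, as noted above, but as a minimal sketch (placeholder index name):

indices.flush(f -> f.index("my-index"))
    .thenAccept(resp -> System.out.println("successful shards: " + resp.shards().successful()));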
-
flush
public final CompletableFuture<FlushResponse> flush(Function<FlushRequest.Builder, ObjectBuilder<FlushRequest>> fn)
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- Parameters: fn - a function that initializes a builder to create the FlushRequest
- See Also:
-
flush
public CompletableFuture<FlushResponse> flush()
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- See Also:
-
forcemerge
public CompletableFuture<ForcemergeResponse> forcemerge(ForcemergeRequest request)
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you can not cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
Force merge makes the storage for the shard being merged temporarily increase, as it may require free space up to triple its size in case the max_num_segments parameter is set to 1, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- See Also:
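A sketch mirroring the REST example above, merging a rolled-over backing index down to a single segment:

indices.forcemerge(f -> f
    .index(".ds-my-data-stream-2099.03.07-000001")  // read-only backing index
    .maxNumSegments(1L));                           // rewrite all segments into one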
-
forcemerge
public final CompletableFuture<ForcemergeResponse> forcemerge(Function<ForcemergeRequest.Builder, ObjectBuilder<ForcemergeRequest>> fn)
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you can not cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
Force merge makes the storage for the shard being merged temporarily increase, as it may require free space up to triple its size in case the max_num_segments parameter is set to 1, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- Parameters: fn - a function that initializes a builder to create the ForcemergeRequest
- See Also:
-
forcemerge
public CompletableFuture<ForcemergeResponse> forcemerge()
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you can not cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
Force merge makes the storage for the shard being merged temporarily increase, as it may require free space up to triple its size in case the max_num_segments parameter is set to 1, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- See Also:
-
get
public CompletableFuture<GetIndexResponse> get(GetIndexRequest request)
Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
- See Also:
-
get
public final CompletableFuture<GetIndexResponse> get(Function<GetIndexRequest.Builder, ObjectBuilder<GetIndexRequest>> fn)
Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
- Parameters: fn - a function that initializes a builder to create the GetIndexRequest
- See Also:
-
getAlias
public CompletableFuture<GetAliasResponse> getAlias(GetAliasRequest request)
Get aliases. Retrieves information for one or more data stream or index aliases.
- See Also:
-
getAlias
public final CompletableFuture<GetAliasResponse> getAlias(Function<GetAliasRequest.Builder, ObjectBuilder<GetAliasRequest>> fn)
Get aliases. Retrieves information for one or more data stream or index aliases.
- Parameters: fn - a function that initializes a builder to create the GetAliasRequest
- See Also:
-
getAlias
public CompletableFuture<GetAliasResponse> getAlias()
Get aliases. Retrieves information for one or more data stream or index aliases.
- See Also:
-
getDataLifecycle
public CompletableFuture<GetDataLifecycleResponse> getDataLifecycle(GetDataLifecycleRequest request)
Get data stream lifecycles. Get the data stream lifecycle configuration of one or more data streams.
- See Also:
-
getDataLifecycle
public final CompletableFuture<GetDataLifecycleResponse> getDataLifecycle(Function<GetDataLifecycleRequest.Builder, ObjectBuilder<GetDataLifecycleRequest>> fn)
Get data stream lifecycles. Get the data stream lifecycle configuration of one or more data streams.
- Parameters: fn - a function that initializes a builder to create the GetDataLifecycleRequest
- See Also:
-
getDataLifecycleStats
public CompletableFuture<GetDataLifecycleStatsResponse> getDataLifecycleStats()
Get data stream lifecycle stats. Get statistics about the data streams that are managed by a data stream lifecycle.
- See Also:
-
getDataStream
public CompletableFuture<GetDataStreamResponse> getDataStream(GetDataStreamRequest request)
Get data streams. Get information about one or more data streams.
- See Also:
-
getDataStream
public final CompletableFuture<GetDataStreamResponse> getDataStream(Function<GetDataStreamRequest.Builder, ObjectBuilder<GetDataStreamRequest>> fn) Get data streams.Get information about one or more data streams.
- Parameters:
fn
- a function that initializes a builder to create the GetDataStreamRequest
- See Also:
-
getDataStream
Get data streams. Get information about one or more data streams.
- See Also:
-
getFieldMapping
Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices. This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.
- See Also:
-
getFieldMapping
public final CompletableFuture<GetFieldMappingResponse> getFieldMapping(Function<GetFieldMappingRequest.Builder, ObjectBuilder<GetFieldMappingRequest>> fn) Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices. This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.
- Parameters:
fn
- a function that initializes a builder to create the GetFieldMappingRequest
- See Also:
-
getIndexTemplate
public CompletableFuture<GetIndexTemplateResponse> getIndexTemplate(GetIndexTemplateRequest request) Get index templates. Get information about one or more index templates.- See Also:
-
getIndexTemplate
public final CompletableFuture<GetIndexTemplateResponse> getIndexTemplate(Function<GetIndexTemplateRequest.Builder, ObjectBuilder<GetIndexTemplateRequest>> fn) Get index templates. Get information about one or more index templates.- Parameters:
fn
- a function that initializes a builder to create the GetIndexTemplateRequest
- See Also:
-
getIndexTemplate
Get index templates. Get information about one or more index templates.- See Also:
-
getMapping
Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- See Also:
-
getMapping
public final CompletableFuture<GetMappingResponse> getMapping(Function<GetMappingRequest.Builder, ObjectBuilder<GetMappingRequest>> fn) Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- Parameters:
fn
- a function that initializes a builder to create the GetMappingRequest
- See Also:
-
getMapping
Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- See Also:
-
getMigrateReindexStatus
public CompletableFuture<GetMigrateReindexStatusResponse> getMigrateReindexStatus(GetMigrateReindexStatusRequest request) Get the migration reindexing status. Get the status of a migration reindex attempt for a data stream or index.
- See Also:
-
getMigrateReindexStatus
public final CompletableFuture<GetMigrateReindexStatusResponse> getMigrateReindexStatus(Function<GetMigrateReindexStatusRequest.Builder, ObjectBuilder<GetMigrateReindexStatusRequest>> fn) Get the migration reindexing status. Get the status of a migration reindex attempt for a data stream or index.
- Parameters:
fn
- a function that initializes a builder to create the GetMigrateReindexStatusRequest
- See Also:
-
getSettings
Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- See Also:
-
getSettings
public final CompletableFuture<GetIndicesSettingsResponse> getSettings(Function<GetIndicesSettingsRequest.Builder, ObjectBuilder<GetIndicesSettingsRequest>> fn) Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- Parameters:
fn
- a function that initializes a builder to create the GetIndicesSettingsRequest
- See Also:
-
getSettings
Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- See Also:
-
getTemplate
Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- See Also:
-
getTemplate
public final CompletableFuture<GetTemplateResponse> getTemplate(Function<GetTemplateRequest.Builder, ObjectBuilder<GetTemplateRequest>> fn) Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Parameters:
fn
- a function that initializes a builder to create the GetTemplateRequest
- See Also:
-
getTemplate
Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- See Also:
-
migrateReindex
Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- See Also:
-
migrateReindex
public final CompletableFuture<MigrateReindexResponse> migrateReindex(Function<MigrateReindexRequest.Builder, ObjectBuilder<MigrateReindexRequest>> fn) Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- Parameters:
fn
- a function that initializes a builder to create the MigrateReindexRequest
- See Also:
-
migrateReindex
Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- See Also:
-
migrateToDataStream
public CompletableFuture<MigrateToDataStreamResponse> migrateToDataStream(MigrateToDataStreamRequest request) Convert an index alias to a data stream. Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria: The alias must have a write index; All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type; The alias must not have any filters; The alias must not use custom routing. If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.
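For example; a minimal sketch, assuming an alias my-time-series-alias that satisfies the criteria above:
indices.migrateToDataStream(m -> m.name("my-time-series-alias"))
    .thenAccept(resp -> System.out.println("converted to data stream")); // the alias is gone, the stream has the same name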
- See Also:
-
migrateToDataStream
public final CompletableFuture<MigrateToDataStreamResponse> migrateToDataStream(Function<MigrateToDataStreamRequest.Builder, ObjectBuilder<MigrateToDataStreamRequest>> fn) Convert an index alias to a data stream. Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria: The alias must have a write index; All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type; The alias must not have any filters; The alias must not use custom routing. If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.
- Parameters:
fn
- a function that initializes a builder to create the MigrateToDataStreamRequest
- See Also:
-
modifyDataStream
public CompletableFuture<ModifyDataStreamResponse> modifyDataStream(ModifyDataStreamRequest request) Update data streams. Performs one or more data stream modification actions in a single atomic operation.- See Also:
-
modifyDataStream
public final CompletableFuture<ModifyDataStreamResponse> modifyDataStream(Function<ModifyDataStreamRequest.Builder, ObjectBuilder<ModifyDataStreamRequest>> fn) Update data streams. Performs one or more data stream modification actions in a single atomic operation.- Parameters:
fn
- a function that initializes a builder to create the ModifyDataStreamRequest
- See Also:
-
open
Open a closed index. For data streams, the API opens any closed backing indices.
A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.
When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.
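A minimal sketch of reopening a closed index, assuming indices is the async client and my-index is illustrative:
indices.open(o -> o.index("my-index"))
    .thenAccept(resp -> System.out.println(
        "acknowledged=" + resp.acknowledged()
        + " shardsAcknowledged=" + resp.shardsAcknowledged())); // shards were allocated as described above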
- See Also:
-
open
public final CompletableFuture<OpenResponse> open(Function<OpenRequest.Builder, ObjectBuilder<OpenRequest>> fn) Open a closed index. For data streams, the API opens any closed backing indices.
A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.
When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.
- Parameters:
fn
- a function that initializes a builder to create the OpenRequest
- See Also:
-
promoteDataStream
public CompletableFuture<PromoteDataStreamResponse> promoteDataStream(PromoteDataStreamRequest request) Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
NOTE: When promoting a data stream, ensure the local cluster has a data stream-enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream's size and retention.
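For example, after the remote cluster becomes unavailable; a minimal sketch with an illustrative stream name:
indices.promoteDataStream(p -> p.name("my-replicated-stream"))
    .thenAccept(resp -> System.out.println("promoted to a regular data stream")); // can now roll over locally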
- See Also:
-
promoteDataStream
public final CompletableFuture<PromoteDataStreamResponse> promoteDataStream(Function<PromoteDataStreamRequest.Builder, ObjectBuilder<PromoteDataStreamRequest>> fn) Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
NOTE: When promoting a data stream, ensure the local cluster has a data stream-enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream's size and retention.
- Parameters:
fn
- a function that initializes a builder to create the PromoteDataStreamRequest
- See Also:
-
putAlias
Create or update an alias. Adds a data stream or index to an alias.- See Also:
-
putAlias
public final CompletableFuture<PutAliasResponse> putAlias(Function<PutAliasRequest.Builder, ObjectBuilder<PutAliasRequest>> fn) Create or update an alias. Adds a data stream or index to an alias.- Parameters:
fn
- a function that initializes a builder to create the PutAliasRequest
- See Also:
-
putDataLifecycle
public CompletableFuture<PutDataLifecycleResponse> putDataLifecycle(PutDataLifecycleRequest request) Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.- See Also:
-
putDataLifecycle
public final CompletableFuture<PutDataLifecycleResponse> putDataLifecycle(Function<PutDataLifecycleRequest.Builder, ObjectBuilder<PutDataLifecycleRequest>> fn) Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.- Parameters:
fn
- a function that initializes a builder to create the PutDataLifecycleRequest
- See Also:
-
putIndexTemplate
public CompletableFuture<PutIndexTemplateResponse> putIndexTemplate(PutIndexTemplateRequest request) Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Multiple matching templates
If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.
Multiple templates with overlapping index patterns at the same priority are not allowed; an error is thrown when you attempt to create a template that matches an existing index template at an identical priority.
Composing aliases, mappings, and settings
When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also to root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
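A minimal sketch of storing a composable template that merges two component templates in order; all names are illustrative, and composedOf is assumed to map to the composed_of field described above:
indices.putIndexTemplate(t -> t
    .name("my-template")
    .indexPatterns("logs-*")                  // wildcard pattern matched against new index names
    .composedOf("my-settings", "my-mappings") // merged in order; later components override earlier ones
).thenAccept(resp -> System.out.println("template stored"));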
- See Also:
-
putIndexTemplate
public final CompletableFuture<PutIndexTemplateResponse> putIndexTemplate(Function<PutIndexTemplateRequest.Builder, ObjectBuilder<PutIndexTemplateRequest>> fn) Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Multiple matching templates
If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.
Multiple templates with overlapping index patterns at the same priority are not allowed; an error is thrown when you attempt to create a template that matches an existing index template at an identical priority.
Composing aliases, mappings, and settings
When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also to root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
- Parameters:
fn
- a function that initializes a builder to create the PutIndexTemplateRequest
- See Also:
-
putMapping
Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.
Add multi-fields to an existing field
Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
Change supported mapping parameters for an existing field
The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.
Change the mapping of an existing field
Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.
If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
Rename a field
Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
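For example, adding a new keyword field to an existing index and setting its ignore_above parameter; a minimal sketch with illustrative index and field names:
indices.putMapping(m -> m
    .index("my-index")
    .properties("code", p -> p.keyword(k -> k.ignoreAbove(256))) // new field; existing documents are unaffected
).thenAccept(resp -> System.out.println("mapping updated"));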
- See Also:
-
putMapping
public final CompletableFuture<PutMappingResponse> putMapping(Function<PutMappingRequest.Builder, ObjectBuilder<PutMappingRequest>> fn) Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.
Add multi-fields to an existing field
Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
Change supported mapping parameters for an existing field
The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.
Change the mapping of an existing field
Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.
If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
Rename a field
Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
- Parameters:
fn
- a function that initializes a builder to create the PutMappingRequest
- See Also:
-
putSettings
Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation. To preserve existing settings from being updated, set the preserve_existing parameter to true.
NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
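For example, updating a dynamic setting on a live index; a minimal sketch, assuming number_of_replicas is modeled as a string in this client:
indices.putSettings(s -> s
    .index("my-index")
    .settings(b -> b.numberOfReplicas("2")) // dynamic setting, applied in real time
).thenAccept(resp -> System.out.println("settings updated"));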
- See Also:
-
putSettings
public final CompletableFuture<PutIndicesSettingsResponse> putSettings(Function<PutIndicesSettingsRequest.Builder, ObjectBuilder<PutIndicesSettingsRequest>> fn) Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation. To preserve existing settings from being updated, set the preserve_existing parameter to true.
NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
- Parameters:
fn
- a function that initializes a builder to create the PutIndicesSettingsRequest
- See Also:
-
putSettings
Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation. To preserve existing settings from being updated, set the preserve_existing parameter to true.
NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
- See Also:
-
putTemplate
Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
- See Also:
-
putTemplate
public final CompletableFuture<PutTemplateResponse> putTemplate(Function<PutTemplateRequest.Builder, ObjectBuilder<PutTemplateRequest>> fn) Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
- Parameters:
fn
- a function that initializes a builder to create the PutTemplateRequest
- See Also:
-
recovery
Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.
All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
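For example, listing only ongoing recoveries; a minimal sketch in which activeOnly and the map-shaped result() accessor are assumptions based on the API described above:
indices.recovery(r -> r.index("my-index").activeOnly(true))
    .thenAccept(resp -> resp.result().keySet()
        .forEach(index -> System.out.println("recovering: " + index)));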
- See Also:
-
recovery
public final CompletableFuture<RecoveryResponse> recovery(Function<RecoveryRequest.Builder, ObjectBuilder<RecoveryRequest>> fn) Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.
All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
- Parameters:
fn
- a function that initializes a builder to create the RecoveryRequest
- See Also:
-
recovery
Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.
All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
- See Also:
-
refresh
Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
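When an explicit refresh is unavoidable, a minimal sketch looks like this (the index name is illustrative):
indices.refresh(r -> r.index("my-index"))
    .thenAccept(resp -> System.out.println("refresh complete")); // recent operations are now searchable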
- See Also:
-
refresh
public final CompletableFuture<RefreshResponse> refresh(Function<RefreshRequest.Builder, ObjectBuilder<RefreshRequest>> fn) Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
- Parameters:
fn
- a function that initializes a builder to create the RefreshRequest
- See Also:
-
refresh
Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
- See Also:
-
reloadSearchAnalyzers
public CompletableFuture<ReloadSearchAnalyzersResponse> reloadSearchAnalyzers(ReloadSearchAnalyzersRequest request) Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.
IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.
NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster, including nodes that don't contain a shard replica, before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
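For example, after the synonym file has been updated on every data node; a minimal sketch with an illustrative index name:
indices.reloadSearchAnalyzers(r -> r.index("my-index"))
    .thenAccept(resp -> System.out.println("search analyzers reloaded")); // remember to clear the request cache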
- See Also:
-
reloadSearchAnalyzers
public final CompletableFuture<ReloadSearchAnalyzersResponse> reloadSearchAnalyzers(Function<ReloadSearchAnalyzersRequest.Builder, ObjectBuilder<ReloadSearchAnalyzersRequest>> fn) Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.
IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.
NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster, including nodes that don't contain a shard replica, before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
- Parameters:
- Parameters:
fn
- a function that initializes a builder to create the ReloadSearchAnalyzersRequest
- See Also:
-
resolveCluster
Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results for that expression if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
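A minimal sketch of checking clusters before a cross-cluster search; the cluster alias cluster_one is illustrative, and the per-cluster result() and connected() accessors are assumptions based on the response fields described above:
indices.resolveCluster(r -> r.name("my-index-*", "cluster_one:my-index-*"))
    .thenAccept(resp -> resp.result().forEach((cluster, info) ->
        System.out.println(cluster + " connected=" + info.connected())));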
- See Also:
-
resolveCluster
public final CompletableFuture<ResolveClusterResponse> resolveCluster(Function<ResolveClusterRequest.Builder, ObjectBuilder<ResolveClusterRequest>> fn) Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results for that expression if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
- Parameters:
fn
- a function that initializes a builder to create the ResolveClusterRequest
- See Also:
-
resolveCluster
Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results for that expression if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
- See Also:
-
resolveIndex
Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.- See Also:
-
resolveIndex
public final CompletableFuture<ResolveIndexResponse> resolveIndex(Function<ResolveIndexRequest.Builder, ObjectBuilder<ResolveIndexRequest>> fn) Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.- Parameters:
fn
- a function that initializes a builder to create the ResolveIndexRequest
- See Also:
-
rollover
Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
.- See Also:
-
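As a sketch of a conditional rollover (assuming an ElasticsearchIndicesAsyncClient named indices), using conditions that mirror the REST API's max_age and max_docs:
indices.rollover(r -> r
        .alias("my-data-stream")       // rollover target: a data stream or an index alias
        .conditions(c -> c
            .maxAge(t -> t.time("7d")) // roll over once the write index is 7 days old...
            .maxDocs(10_000_000L)      // ...or contains 10 million documents
        )
    )
    .whenComplete((resp, ex) -> {
        if (ex == null) {
            System.out.println("rolled over: " + resp.rolledOver()
                + ", new index: " + resp.newIndex());
        }
    });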
rollover
public final CompletableFuture<RolloverResponse> rollover(Function<RolloverRequest.Builder, ObjectBuilder<RolloverRequest>> fn)
Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
- Parameters:
fn - a function that initializes a builder to create the RolloverRequest
- See Also:
-
segments
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- See Also:
-
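A minimal usage sketch, assuming an ElasticsearchIndicesAsyncClient named indices:
indices.segments(s -> s.index("my-index"))
    .thenAccept(resp -> System.out.println(resp)); // per-index, per-shard Lucene segment details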
segments
public final CompletableFuture<SegmentsResponse> segments(Function<SegmentsRequest.Builder, ObjectBuilder<SegmentsRequest>> fn)
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- Parameters:
fn - a function that initializes a builder to create the SegmentsRequest
- See Also:
-
segments
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- See Also:
-
shardStores
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- See Also:
-
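A minimal usage sketch, assuming an ElasticsearchIndicesAsyncClient named indices:
indices.shardStores(s -> s.index("my-index"))
    .thenAccept(resp -> System.out.println(resp)); // store info per shard copy, including allocation IDs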
shardStores
public final CompletableFuture<ShardStoresResponse> shardStores(Function<ShardStoresRequest.Builder, ObjectBuilder<ShardStoresRequest>> fn)
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- Parameters:
fn - a function that initializes a builder to create the ShardStoresRequest
- See Also:
-
shardStores
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- See Also:
-
shrink
Shrink an index. Shrink an index into a new index with fewer primary shards.
Before you can shrink an index:
- The index must be read-only.
- A copy of every shard in the index must reside on the same node.
- The index must have a green health status.
To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
A shrink operation:
- Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
- Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk since hardlinks do not work across disks.
- Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.
IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:
- The target index must not exist.
- The source index must have more primary shards than the target index.
- The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
- The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
- The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
- See Also:
-
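A hedged sketch of the shrink call itself, assuming an ElasticsearchIndicesAsyncClient named indices, a source index already prepared as described above, and JsonData from co.elastic.clients.json:
// Shrink an 8-shard source index into a 1-shard target (1 is a factor of 8).
indices.shrink(s -> s
        .index("my-source-index")      // read-only source; shard copies co-located on one node
        .target("my-target-index")     // target index; must not exist yet
        .settings("index.number_of_shards", JsonData.of(1))
    )
    .whenComplete((resp, ex) -> {
        if (ex == null) {
            System.out.println("acknowledged: " + resp.acknowledged());
        }
    });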
shrink
public final CompletableFuture<ShrinkResponse> shrink(Function<ShrinkRequest.Builder, ObjectBuilder<ShrinkRequest>> fn)
Shrink an index. Shrink an index into a new index with fewer primary shards.
Before you can shrink an index:
- The index must be read-only.
- A copy of every shard in the index must reside on the same node.
- The index must have a green health status.
To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
A shrink operation:
- Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
- Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk since hardlinks do not work across disks.
- Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.
IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:
- The target index must not exist.
- The source index must have more primary shards than the target index.
- The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
- The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
- The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
- Parameters:
fn - a function that initializes a builder to create the ShrinkRequest
- See Also:
-
simulateIndexTemplate
public CompletableFuture<SimulateIndexTemplateResponse> simulateIndexTemplate(SimulateIndexTemplateRequest request)
Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
- See Also:
-
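A minimal usage sketch, assuming an ElasticsearchIndicesAsyncClient named indices; the name(...) builder property is assumed to carry the index name from the REST path:
indices.simulateIndexTemplate(t -> t.name("my-index-000001"))
    .thenAccept(resp -> System.out.println(resp)); // resolved settings, mappings, and aliases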
simulateIndexTemplate
public final CompletableFuture<SimulateIndexTemplateResponse> simulateIndexTemplate(Function<SimulateIndexTemplateRequest.Builder, ObjectBuilder<SimulateIndexTemplateRequest>> fn)
Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
- Parameters:
fn - a function that initializes a builder to create the SimulateIndexTemplateRequest
- See Also:
-
simulateTemplate
public CompletableFuture<SimulateTemplateResponse> simulateTemplate(SimulateTemplateRequest request)
Simulate an index template. Get the index configuration that would be applied by a particular index template.
- See Also:
-
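A minimal usage sketch, assuming an ElasticsearchIndicesAsyncClient named indices and an existing template name:
indices.simulateTemplate(t -> t.name("my-index-template"))
    .thenAccept(resp -> System.out.println(resp)); // configuration the template would apply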
simulateTemplate
public final CompletableFuture<SimulateTemplateResponse> simulateTemplate(Function<SimulateTemplateRequest.Builder, ObjectBuilder<SimulateTemplateRequest>> fn)
Simulate an index template. Get the index configuration that would be applied by a particular index template.
- Parameters:
fn - a function that initializes a builder to create the SimulateTemplateRequest
- See Also:
-
simulateTemplate
Simulate an index template. Get the index configuration that would be applied by a particular index template.
- See Also:
-
split
Split an index. Split an index into a new index with more primary shards.-
Before you can split an index:
-
The index must be read-only.
-
The cluster health status must be green.
You can make an index read-only with the following request using the add index block API:
PUT /my_source_index/_block/write
The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
A split operation:
- Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
- Recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be split if they satisfy the following requirements:
- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
- See Also:
-
-
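A hedged sketch of the split call, assuming an ElasticsearchIndicesAsyncClient named indices, a source index prepared as described above, and JsonData from co.elastic.clients.json:
// Split a 1-shard source index into 4 primary shards (a multiple of 1).
indices.split(s -> s
        .index("my_source_index")      // read-only source with green health
        .target("my_target_index")     // target index; must not exist yet
        .settings("index.number_of_shards", JsonData.of(4))
    )
    .whenComplete((resp, ex) -> {
        if (ex == null) {
            System.out.println("acknowledged: " + resp.acknowledged());
        }
    });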
split
public final CompletableFuture<SplitResponse> split(Function<SplitRequest.Builder, ObjectBuilder<SplitRequest>> fn)
Split an index. Split an index into a new index with more primary shards.
Before you can split an index:
- The index must be read-only.
- The cluster health status must be green.
You can make an index read-only with the following request using the add index block API:
PUT /my_source_index/_block/write
The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
A split operation:
- Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
- Recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be split if they satisfy the following requirements:
- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
- Parameters:
fn - a function that initializes a builder to create the SplitRequest
- See Also:
-
-
stats
Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- See Also:
-
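A sketch of requesting shard-level statistics, assuming an ElasticsearchIndicesAsyncClient named indices and the Level enum from co.elastic.clients.elasticsearch._types:
indices.stats(s -> s
        .index("my-index")
        .level(Level.Shards)           // shard-level instead of the default index-level statistics
    )
    .thenAccept(resp -> System.out.println(resp));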
stats
public final CompletableFuture<IndicesStatsResponse> stats(Function<IndicesStatsRequest.Builder, ObjectBuilder<IndicesStatsRequest>> fn)
Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- Parameters:
fn - a function that initializes a builder to create the IndicesStatsRequest
- See Also:
-
stats
Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- See Also:
-
updateAliases
Create or update an alias. Adds a data stream or index to an alias.
- See Also:
-
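A sketch of an atomic alias swap from an old write index to a new one, assuming an ElasticsearchIndicesAsyncClient named indices and hypothetical index names:
indices.updateAliases(u -> u
        .actions(a -> a.remove(r -> r.index("my-index-000001").alias("my-alias")))
        .actions(a -> a.add(ad -> ad.index("my-index-000002").alias("my-alias")))
    )
    .whenComplete((resp, ex) -> {
        if (ex == null) {
            System.out.println("acknowledged: " + resp.acknowledged());
        }
    });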
updateAliases
public final CompletableFuture<UpdateAliasesResponse> updateAliases(Function<UpdateAliasesRequest.Builder, ObjectBuilder<UpdateAliasesRequest>> fn)
Create or update an alias. Adds a data stream or index to an alias.
- Parameters:
fn - a function that initializes a builder to create the UpdateAliasesRequest
- See Also:
-
updateAliases
Create or update an alias. Adds a data stream or index to an alias.
- See Also:
-
validateQuery
Validate a query. Validates a query without running it.
- See Also:
-
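A small sketch of validating a match query, assuming an ElasticsearchIndicesAsyncClient named indices and a hypothetical field user.id:
indices.validateQuery(v -> v
        .index("my-index")
        .query(q -> q.match(m -> m.field("user.id").query("kimchy")))
        .explain(true)                 // include an explanation when the query is invalid
    )
    .thenAccept(resp -> System.out.println("valid: " + resp.valid()));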
validateQuery
public final CompletableFuture<ValidateQueryResponse> validateQuery(Function<ValidateQueryRequest.Builder, ObjectBuilder<ValidateQueryRequest>> fn)
Validate a query. Validates a query without running it.
- Parameters:
fn - a function that initializes a builder to create the ValidateQueryRequest
- See Also:
-
validateQuery
Validate a query. Validates a query without running it.
- See Also:
-