Class ElasticsearchIndicesAsyncClient

java.lang.Object
co.elastic.clients.ApiClient<ElasticsearchTransport,ElasticsearchIndicesAsyncClient>
co.elastic.clients.elasticsearch.indices.ElasticsearchIndicesAsyncClient
All Implemented Interfaces:
Closeable, AutoCloseable

public class ElasticsearchIndicesAsyncClient extends ApiClient<ElasticsearchTransport,ElasticsearchIndicesAsyncClient>
Client for the indices namespace.
  • Constructor Details

  • Method Details

    • withTransportOptions

      public ElasticsearchIndicesAsyncClient withTransportOptions(@Nullable TransportOptions transportOptions)
      Description copied from class: ApiClient
      Creates a new client with some request options
      Specified by:
      withTransportOptions in class ApiClient<ElasticsearchTransport,ElasticsearchIndicesAsyncClient>
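
      The indices namespace client is usually obtained from an ElasticsearchAsyncClient rather than constructed directly. A minimal setup sketch (host, port, and JSON mapper choice are illustrative assumptions; the examples further down assume this indices variable):

       import java.util.concurrent.CompletableFuture;
       import org.apache.http.HttpHost;
       import org.elasticsearch.client.RestClient;
       import co.elastic.clients.elasticsearch.ElasticsearchAsyncClient;
       import co.elastic.clients.elasticsearch.indices.ElasticsearchIndicesAsyncClient;
       import co.elastic.clients.json.jackson.JacksonJsonpMapper;
       import co.elastic.clients.transport.ElasticsearchTransport;
       import co.elastic.clients.transport.rest_client.RestClientTransport;

       // Build the low-level REST client and wrap it in a transport.
       RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
       ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());

       // The indices namespace client used in the examples below.
       // Request and response types referenced below live in co.elastic.clients.elasticsearch.indices.
       ElasticsearchIndicesAsyncClient indices = new ElasticsearchAsyncClient(transport).indices();
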
    • addBlock

      Add an index block.

      Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.

      See Also:
    • addBlock

      Add an index block.

      Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
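
      For example, a minimal sketch that adds a write block to an index (assuming the indices client from the setup sketch above; the index name is illustrative and the IndicesBlockOptions enum usage should be checked against the AddBlockRequest builder):

       // Block write operations on the index; reads remain allowed.
       CompletableFuture<AddBlockResponse> blocked = indices.addBlock(b -> b
           .index("my-index")
           .block(IndicesBlockOptions.Write)
       );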

      Parameters:
      fn - a function that initializes a builder to create the AddBlockRequest
      See Also:
    • analyze

      Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.

      Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.

      See Also:
    • analyze

      Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.

      Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
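
      For example, a minimal sketch that analyzes a short string with the standard analyzer and prints the resulting tokens (assuming the indices client from the setup sketch above):

       indices.analyze(a -> a
               .analyzer("standard")
               .text("The quick brown fox"))
           .thenAccept(response -> response.tokens()
               .forEach(token -> System.out.println(token.token())));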

      Parameters:
      fn - a function that initializes a builder to create the AnalyzeRequest
      See Also:
    • analyze

      Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.

      Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.

      See Also:
    • cancelMigrateReindex

      Cancel a migration reindex operation.

      Cancel a migration reindex attempt for a data stream or index.

      See Also:
    • cancelMigrateReindex

      Cancel a migration reindex operation.

      Cancel a migration reindex attempt for a data stream or index.

      Parameters:
      fn - a function that initializes a builder to create the CancelMigrateReindexRequest
      See Also:
    • clearCache

      Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.

      By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.

      See Also:
    • clearCache

      Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.

      By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.

      Parameters:
      fn - a function that initializes a builder to create the ClearCacheRequest
      See Also:
    • clearCache

      public CompletableFuture<ClearCacheResponse> clearCache()
      Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.

      By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache only of specific fields, use the fields parameter.

      See Also:
    • clone

      Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.

      IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.

      The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.

      Cloning works as follows:

      • First, it creates a new target index with the same definition as the source index.
      • Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
      • Finally, it recovers the target index as though it were a closed index which had just been re-opened.

      IMPORTANT: Indices can only be cloned if they meet the following requirements:

      • The index must be marked as read-only and have a cluster health status of green.
      • The target index must not exist.
      • The source index must have the same number of primary shards as the target index.
      • The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.

      The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.

      NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.

      Monitor the cloning process

      The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.

      The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.

      Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.

      Wait for active shards

      Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.

      See Also:
    • clone

      Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.

      IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.

      The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.

      Cloning works as follows:

      • First, it creates a new target index with the same definition as the source index.
      • Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
      • Finally, it recovers the target index as though it were a closed index which had just been re-opened.

      IMPORTANT: Indices can only be cloned if they meet the following requirements:

      • The index must be marked as read-only and have a cluster health status of green.
      • The target index must not exist.
      • The source index must have the same number of primary shards as the target index.
      • The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.

      The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.

      NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.

      Monitor the cloning process

      The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.

      The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.

      Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.

      Wait for active shards

      Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
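
      For example, a minimal sketch that clones a write-blocked source index into a new target index (assuming the indices client from the setup sketch above; index names are illustrative):

       // The source index must already be read-only, for example via a write block.
       CompletableFuture<CloneIndexResponse> cloned = indices.clone(c -> c
           .index("my-source-index")
           .target("my-target-index")
       );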

      Parameters:
      fn - a function that initializes a builder to create the CloneIndexRequest
      See Also:
    • close

      Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.

      When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

      You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.

      By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

      Closed indices consume a significant amount of disk-space which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.

      See Also:
    • close

      Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.

      When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

      You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.

      By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

      Closed indices consume a significant amount of disk-space which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
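
      For example, a minimal sketch that closes a single index and logs the acknowledgement (assuming the indices client from the setup sketch above; the index name is illustrative):

       indices.close(c -> c.index("my-index"))
           .thenAccept(response -> System.out.println("acknowledged: " + response.acknowledged()));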

      Parameters:
      fn - a function that initializes a builder to create the CloseIndexRequest
      See Also:
    • create

      Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
      • Settings for the index.
      • Mappings for fields in the index.
      • Index aliases

      Wait for active shards

      By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).

      You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.

      See Also:
    • create

      Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
      • Settings for the index.
      • Mappings for fields in the index.
      • Index aliases

      Wait for active shards

      By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).

      You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
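
      For example, a minimal sketch that creates an index with a single text field mapping and an alias (assuming the indices client from the setup sketch above; index, field, and alias names are illustrative):

       CompletableFuture<CreateIndexResponse> created = indices.create(c -> c
           .index("products")
           .mappings(m -> m
               .properties("name", p -> p.text(t -> t)))
           .aliases("products-alias", a -> a)
       );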

      Parameters:
      fn - a function that initializes a builder to create the CreateIndexRequest
      See Also:
    • createDataStream

      Create a data stream.

      You must have a matching index template with data stream enabled.

      See Also:
    • createDataStream

      Create a data stream.

      You must have a matching index template with data stream enabled.
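
      For example, a minimal sketch (assuming the indices client from the setup sketch above and an existing data stream enabled index template that matches the illustrative name):

       CompletableFuture<CreateDataStreamResponse> stream = indices.createDataStream(d -> d
           .name("logs-myapp-default")
       );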

      Parameters:
      fn - a function that initializes a builder to create the CreateDataStreamRequest
      See Also:
    • createFrom

      Create an index from a source index.

      Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.

      See Also:
    • createFrom

      Create an index from a source index.

      Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.

      Parameters:
      fn - a function that initializes a builder to create the CreateFromRequest
      See Also:
    • dataStreamsStats

      Get data stream stats.

      Get statistics for one or more data streams.

      See Also:
    • dataStreamsStats

      Get data stream stats.

      Get statistics for one or more data streams.

      Parameters:
      fn - a function that initializes a builder to create the DataStreamsStatsRequest
      See Also:
    • dataStreamsStats

      public CompletableFuture<DataStreamsStatsResponse> dataStreamsStats()
      Get data stream stats.

      Get statistics for one or more data streams.

      See Also:
    • delete

      Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.

      You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.

      See Also:
    • delete

      Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.

      You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
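
      For example, a minimal sketch that deletes an index and reports the outcome asynchronously (assuming the indices client from the setup sketch above; the index name is illustrative):

       indices.delete(d -> d.index("old-index"))
           .whenComplete((response, exception) -> {
               if (exception != null) {
                   System.err.println("delete failed: " + exception.getMessage());
               } else {
                   System.out.println("acknowledged: " + response.acknowledged());
               }
           });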

      Parameters:
      fn - a function that initializes a builder to create the DeleteIndexRequest
      See Also:
    • deleteAlias

      Delete an alias. Removes a data stream or index from an alias.
      See Also:
    • deleteAlias

      Delete an alias. Removes a data stream or index from an alias.
      Parameters:
      fn - a function that initializes a builder to create the DeleteAliasRequest
      See Also:
    • deleteDataLifecycle

      Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
      See Also:
    • deleteDataLifecycle

      Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
      Parameters:
      fn - a function that initializes a builder to create the DeleteDataLifecycleRequest
      See Also:
    • deleteDataStream

      Delete data streams. Deletes one or more data streams and their backing indices.
      See Also:
    • deleteDataStream

      Delete data streams. Deletes one or more data streams and their backing indices.
      Parameters:
      fn - a function that initializes a builder to create the DeleteDataStreamRequest
      See Also:
    • deleteIndexTemplate

      Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
      See Also:
    • deleteIndexTemplate

      Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
      Parameters:
      fn - a function that initializes a builder to create the DeleteIndexTemplateRequest
      See Also:
    • deleteTemplate

      Delete a legacy index template.
      See Also:
    • deleteTemplate

      Delete a legacy index template.
      Parameters:
      fn - a function that initializes a builder to create the DeleteTemplateRequest
      See Also:
    • diskUsage

      Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.

      NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.

      See Also:
    • diskUsage

      Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.

      NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.

      Parameters:
      fn - a function that initializes a builder to create the DiskUsageRequest
      See Also:
    • downsample

      Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

      NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).

      See Also:
    • downsample

      Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

      NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).

      Parameters:
      fn - a function that initializes a builder to create the DownsampleRequest
      See Also:
    • exists

      Check indices. Check if one or more indices, index aliases, or data streams exist.
      See Also:
    • exists

      Check indices. Check if one or more indices, index aliases, or data streams exist.
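
      For example, a minimal sketch that checks whether a single index exists; the exists APIs resolve to a BooleanResponse (assuming the indices client from the setup sketch above; the index name is illustrative):

       indices.exists(e -> e.index("my-index"))
           .thenAccept(response -> System.out.println("exists: " + response.value()));
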
      Parameters:
      fn - a function that initializes a builder to create the ExistsRequest
      See Also:
    • existsAlias

      public CompletableFuture<BooleanResponse> existsAlias(ExistsAliasRequest request)
      Check aliases.

      Check if one or more data stream or index aliases exist.

      See Also:
    • existsAlias

      Check aliases.

      Check if one or more data stream or index aliases exist.

      Parameters:
      fn - a function that initializes a builder to create the ExistsAliasRequest
      See Also:
    • existsIndexTemplate

      public CompletableFuture<BooleanResponse> existsIndexTemplate(ExistsIndexTemplateRequest request)
      Check index templates.

      Check whether index templates exist.

      See Also:
    • existsIndexTemplate

      Check index templates.

      Check whether index templates exist.

      Parameters:
      fn - a function that initializes a builder to create the ExistsIndexTemplateRequest
      See Also:
    • existsTemplate

      public CompletableFuture<BooleanResponse> existsTemplate(ExistsTemplateRequest request)
      Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      See Also:
    • existsTemplate

      Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      Parameters:
      fn - a function that initializes a builder to create the ExistsTemplateRequest
      See Also:
    • explainDataLifecycle

      Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
      See Also:
    • explainDataLifecycle

      Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
      Parameters:
      fn - a function that initializes a builder to create the ExplainDataLifecycleRequest
      See Also:
    • fieldUsageStats

      Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.

      The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.

      See Also:
    • fieldUsageStats

      Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.

      The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.

      Parameters:
      fn - a function that initializes a builder to create the FieldUsageStatsRequest
      See Also:
    • flush

      public CompletableFuture<FlushResponse> flush(FlushRequest request)
      Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

      After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

      It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.

      See Also:
    • flush

      Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

      After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

      It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
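
      For example, a minimal sketch that explicitly flushes one index (assuming the indices client from the setup sketch above; the index name is illustrative):

       CompletableFuture<FlushResponse> flushed = indices.flush(f -> f.index("my-index"));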

      Parameters:
      fn - a function that initializes a builder to create the FlushRequest
      See Also:
    • flush

      Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

      After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

      It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.

      See Also:
    • forcemerge

      Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.

      Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.

      WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.

      Blocks during a force merge

      Calls to this API block until the merge is complete (unless request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.

      Running force merge asynchronously

      If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

      Force merging multiple indices

      You can force merge multiple indices with a single request by targeting:

      • One or more data streams that contain multiple backing indices
      • Multiple indices
      • One or more aliases
      • All data streams and indices in a cluster

      Each targeted shard is force-merged separately using the force_merge threadpool. By default, each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node, it will force merge its shards in parallel.

      Force merge temporarily increases the storage used by the shard being merged, since it may require free space of up to triple the shard's size when the max_num_segments parameter is set to 1, in order to rewrite all segments into a new one.

      Data streams and time-based indices

      Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:

       POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
       
       
      See Also:
    • forcemerge

      Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.

      Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.

      WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.

      Blocks during a force merge

      Calls to this API block until the merge is complete (unless request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.

      Running force merge asynchronously

      If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

      Force merging multiple indices

      You can force merge multiple indices with a single request by targeting:

      • One or more data streams that contain multiple backing indices
      • Multiple indices
      • One or more aliases
      • All data streams and indices in a cluster

      Each targeted shard is force-merged separately using the force_merge threadpool. By default, each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node, it will force merge its shards in parallel.

      Force merge temporarily increases the storage used by the shard being merged, since it may require free space of up to triple the shard's size when the max_num_segments parameter is set to 1, in order to rewrite all segments into a new one.

      Data streams and time-based indices

      Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:

       POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
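
      The equivalent call with this client, as a minimal sketch (assuming the indices client from the setup sketch above):

       CompletableFuture<ForcemergeResponse> merged = indices.forcemerge(f -> f
           .index(".ds-my-data-stream-2099.03.07-000001")
           .maxNumSegments(1L)
       );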
       
       
      Parameters:
      fn - a function that initializes a builder to create the ForcemergeRequest
      See Also:
    • forcemerge

      public CompletableFuture<ForcemergeResponse> forcemerge()
      Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.

      Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.

      WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.

      Blocks during a force merge

      Calls to this API block until the merge is complete (unless request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.

      Running force merge asynchronously

      If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.

      Force merging multiple indices

      You can force merge multiple indices with a single request by targeting:

      • One or more data streams that contain multiple backing indices
      • Multiple indices
      • One or more aliases
      • All data streams and indices in a cluster

      Each targeted shard is force-merged separately using the force_merge threadpool. By default, each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node, it will force merge its shards in parallel.

      Force merge temporarily increases the storage used by the shard being merged, since it may require free space of up to triple the shard's size when the max_num_segments parameter is set to 1, in order to rewrite all segments into a new one.

      Data streams and time-based indices

      Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:

       POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
       
       
      See Also:
    • get

      Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
      See Also:
    • get

      Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
      Parameters:
      fn - a function that initializes a builder to create the GetIndexRequest
      See Also:
    • getAlias

      Get aliases. Retrieves information for one or more data stream or index aliases.
      See Also:
    • getAlias

      Get aliases. Retrieves information for one or more data stream or index aliases.
      Parameters:
      fn - a function that initializes a builder to create the GetAliasRequest
      See Also:
    • getAlias

      public CompletableFuture<GetAliasResponse> getAlias()
      Get aliases. Retrieves information for one or more data stream or index aliases.
      See Also:
    • getDataLifecycle

      Get data stream lifecycles.

      Get the data stream lifecycle configuration of one or more data streams.

      See Also:
    • getDataLifecycle

      Get data stream lifecycles.

      Get the data stream lifecycle configuration of one or more data streams.

      Parameters:
      fn - a function that initializes a builder to create the GetDataLifecycleRequest
      See Also:
    • getDataLifecycleStats

      public CompletableFuture<GetDataLifecycleStatsResponse> getDataLifecycleStats()
      Get data stream lifecycle stats. Get statistics about the data streams that are managed by a data stream lifecycle.
      See Also:
    • getDataStream

      Get data streams.

      Get information about one or more data streams.

      See Also:
    • getDataStream

      Get data streams.

      Get information about one or more data streams.

      Parameters:
      fn - a function that initializes a builder to create the GetDataStreamRequest
      See Also:
    • getDataStream

      public CompletableFuture<GetDataStreamResponse> getDataStream()
      Get data streams.

      Get information about one or more data streams.

      See Also:
    • getFieldMapping

      Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.

      This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.

      See Also:
    • getFieldMapping

      Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.

      This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.

      Parameters:
      fn - a function that initializes a builder to create the GetFieldMappingRequest
      See Also:
    • getIndexTemplate

      Get index templates. Get information about one or more index templates.
      See Also:
    • getIndexTemplate

      Get index templates. Get information about one or more index templates.
      Parameters:
      fn - a function that initializes a builder to create the GetIndexTemplateRequest
      See Also:
    • getIndexTemplate

      public CompletableFuture<GetIndexTemplateResponse> getIndexTemplate()
      Get index templates. Get information about one or more index templates.
      See Also:
    • getMapping

      Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.
      See Also:
    • getMapping

      Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.
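
      For example, a minimal sketch that fetches the mapping of one index and prints its top-level field names (assuming the indices client from the setup sketch above, and assuming the dictionary-style response is accessed through result(); the index name is illustrative):

       indices.getMapping(m -> m.index("my-index"))
           .thenAccept(response -> response.result().get("my-index")
               .mappings().properties().keySet()
               .forEach(System.out::println));
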
      Parameters:
      fn - a function that initializes a builder to create the GetMappingRequest
      See Also:
    • getMapping

      public CompletableFuture<GetMappingResponse> getMapping()
      Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.
      See Also:
    • getMigrateReindexStatus

      Get the migration reindexing status.

      Get the status of a migration reindex attempt for a data stream or index.

      See Also:
    • getMigrateReindexStatus

      Get the migration reindexing status.

      Get the status of a migration reindex attempt for a data stream or index.

      Parameters:
      fn - a function that initializes a builder to create the GetMigrateReindexStatusRequest
      See Also:
    • getSettings

      Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.
      See Also:
    • getSettings

      Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.
      Parameters:
      fn - a function that initializes a builder to create the GetIndicesSettingsRequest
      See Also:
    • getSettings

      Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.
      See Also:
    • getTemplate

      Get index templates. Get information about one or more index templates.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      See Also:
    • getTemplate

      Get index templates. Get information about one or more index templates.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      Parameters:
      fn - a function that initializes a builder to create the GetTemplateRequest
      See Also:
    • getTemplate

      public CompletableFuture<GetTemplateResponse> getTemplate()
      Get index templates. Get information about one or more index templates.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      See Also:
    • migrateReindex

      Reindex legacy backing indices.

      Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.

      See Also:
    • migrateReindex

      Reindex legacy backing indices.

      Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.

      Parameters:
      fn - a function that initializes a builder to create the MigrateReindexRequest
      See Also:
    • migrateReindex

      public CompletableFuture<MigrateReindexResponse> migrateReindex()
      Reindex legacy backing indices.

      Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.

      See Also:
    • migrateToDataStream

      Convert an index alias to a data stream. Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria:

      • The alias must have a write index.
      • All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type.
      • The alias must not have any filters.
      • The alias must not use custom routing.

      If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.
      See Also:
    • migrateToDataStream

      Convert an index alias to a data stream. Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria:

      • The alias must have a write index.
      • All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type.
      • The alias must not have any filters.
      • The alias must not use custom routing.

      If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.
      Parameters:
      fn - a function that initializes a builder to create the MigrateToDataStreamRequest
      See Also:
    • modifyDataStream

      Update data streams. Performs one or more data stream modification actions in a single atomic operation.
      See Also:
    • modifyDataStream

      Update data streams. Performs one or more data stream modification actions in a single atomic operation.
      Parameters:
      fn - a function that initializes a builder to create the ModifyDataStreamRequest
      See Also:
    • open

      public CompletableFuture<OpenResponse> open(OpenRequest request)
      Open a closed index. For data streams, the API opens any closed backing indices.

      A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.

      When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

      You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.

      By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

      Closed indices consume a significant amount of disk-space which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.

      Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.

      See Also:
    • open

      Open a closed index. For data streams, the API opens any closed backing indices.

      A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.

      When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

      You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.

      By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

      Closed indices consume a significant amount of disk-space which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.

      Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.
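
      For example, here is a minimal sketch of opening a closed index with this client (indices stands for an ElasticsearchIndicesAsyncClient instance, for example esAsyncClient.indices(), and my-index is a placeholder name):

       indices.open(o -> o.index("my-index"))
           .thenAccept(resp ->
               System.out.println("acknowledged: " + resp.acknowledged()));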

      Parameters:
      fn - a function that initializes a builder to create the OpenRequest
      See Also:
    • promoteDataStream

      Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.

      With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.

      NOTE: When promoting a data stream, ensure the local cluster has an index template that is enabled for data streams and matches the data stream. If this template is missing, the data stream will not be able to roll over until a matching index template is created. This affects the lifecycle management of the data stream and interferes with its size and retention.

      See Also:
    • promoteDataStream

      Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.

      With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.

      NOTE: When promoting a data stream, ensure the local cluster has an index template that is enabled for data streams and matches the data stream. If this template is missing, the data stream will not be able to roll over until a matching index template is created. This affects the lifecycle management of the data stream and interferes with its size and retention.
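
      A minimal usage sketch, assuming indices is an ElasticsearchIndicesAsyncClient and my-data-stream is a placeholder name for a replicated data stream:

       indices.promoteDataStream(p -> p.name("my-data-stream"))
           .whenComplete((resp, ex) -> {
               if (ex != null) {
                   System.err.println("Promotion failed: " + ex.getMessage());
               }
           });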

      Parameters:
      fn - a function that initializes a builder to create the PromoteDataStreamRequest
      See Also:
    • putAlias

      Create or update an alias. Adds a data stream or index to an alias.
      See Also:
    • putAlias

      Create or update an alias. Adds a data stream or index to an alias.
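
      For example, a minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index and alias names are placeholders):

       indices.putAlias(a -> a.index("my-index").name("my-alias"))
           .thenAccept(resp ->
               System.out.println("alias acknowledged: " + resp.acknowledged()));
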
      Parameters:
      fn - a function that initializes a builder to create the PutAliasRequest
      See Also:
    • putDataLifecycle

      Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.
      See Also:
    • putDataLifecycle

      Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.
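
      A minimal sketch of setting a retention period, assuming indices is an ElasticsearchIndicesAsyncClient and that the request exposes a dataRetention field of type Time, as in recent client versions; the data stream name is a placeholder:

       indices.putDataLifecycle(d -> d
               .name("my-data-stream")
               .dataRetention(t -> t.time("7d")))
           .thenAccept(resp -> System.out.println("lifecycle updated"));
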
      Parameters:
      fn - a function that initializes a builder to create the PutDataLifecycleRequest
      See Also:
    • putIndexTemplate

      Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

      Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

      You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

      Multiple matching templates

      If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

      Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.

      Composing aliases, mappings, and settings

      When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.

      See Also:
    • putIndexTemplate

      Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

      Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

      You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

      Multiple matching templates

      If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

      Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.

      Composing aliases, mappings, and settings

      When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
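
      For example, a minimal sketch of a composable template (indices is assumed to be an ElasticsearchIndicesAsyncClient; the template name, index pattern, and field name are placeholders):

       indices.putIndexTemplate(t -> t
               .name("my-template")
               .indexPatterns("logs-*")
               .template(tpl -> tpl
                   .settings(s -> s.numberOfShards("1"))
                   .mappings(m -> m.properties("message", p -> p.text(txt -> txt)))))
           .thenAccept(resp ->
               System.out.println("template acknowledged: " + resp.acknowledged()));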

      Parameters:
      fn - a function that initializes a builder to create the PutIndexTemplateRequest
      See Also:
    • putMapping

      Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.

      Add multi-fields to an existing field

      Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.

      Change supported mapping parameters for an existing field

      The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.

      Change the mapping of an existing field

      Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.

      If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.

      Rename a field

      Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.

      See Also:
    • putMapping

      Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.

      Add multi-fields to an existing field

      Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.

      Change supported mapping parameters for an existing field

      The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.

      Change the mapping of an existing field

      Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.

      If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.

      Rename a field

      Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
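
      For example, a minimal sketch that adds a new keyword field to an existing index (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index and field names are placeholders):

       indices.putMapping(m -> m
               .index("my-index")
               .properties("user_id", p -> p.keyword(k -> k)))
           .thenAccept(resp ->
               System.out.println("mapping acknowledged: " + resp.acknowledged()));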

      Parameters:
      fn - a function that initializes a builder to create the PutMappingRequest
      See Also:
    • putSettings

      Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.

      To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To prevent existing settings from being updated, set the preserve_existing parameter to true.

      NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.

      See Also:
    • putSettings

      Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.

      To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To prevent existing settings from being updated, set the preserve_existing parameter to true.

      NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
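
      For example, a minimal sketch that changes a dynamic setting (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.putSettings(s -> s
               .index("my-index")
               .settings(is -> is.numberOfReplicas("2")))
           .thenAccept(resp ->
               System.out.println("settings acknowledged: " + resp.acknowledged()));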

      Parameters:
      fn - a function that initializes a builder to create the PutIndicesSettingsRequest
      See Also:
    • putSettings

      Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.

      To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To prevent existing settings from being updated, set the preserve_existing parameter to true.

      NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.

      See Also:
    • putTemplate

      Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.

      Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.

      You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

      Indices matching multiple templates

      Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.

      See Also:
    • putTemplate

      Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.

      IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

      Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.

      Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.

      You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

      Indices matching multiple templates

      Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
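
      A minimal sketch of a legacy template request (indices is assumed to be an ElasticsearchIndicesAsyncClient; the template name and index pattern are placeholders):

       indices.putTemplate(t -> t
               .name("legacy-logs-template")
               .indexPatterns("logs-*")
               .order(0))
           .thenAccept(resp ->
               System.out.println("template acknowledged: " + resp.acknowledged()));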

      Parameters:
      fn - a function that initializes a builder to create the PutTemplateRequest
      See Also:
    • recovery

      Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.

      All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.

      Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.

      Recovery automatically occurs during the following processes:

      • When creating an index for the first time.
      • When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
      • Creation of new replica shard copies from the primary.
      • Relocation of a shard copy to a different node in the same cluster.
      • A snapshot restore operation.
      • A clone, shrink, or split operation.

      You can determine the cause of a shard recovery using the recovery or cat recovery APIs.

      The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.

      See Also:
    • recovery

      Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.

      All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.

      Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.

      Recovery automatically occurs during the following processes:

      • When creating an index for the first time.
      • When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
      • Creation of new replica shard copies from the primary.
      • Relocation of a shard copy to a different node in the same cluster.
      • A snapshot restore operation.
      • A clone, shrink, or split operation.

      You can determine the cause of a shard recovery using the recovery or cat recovery APIs.

      The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
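
      For example, a minimal sketch of requesting recovery information for one index (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.recovery(r -> r.index("my-index").activeOnly(true))
           .thenAccept(resp ->
               System.out.println("recovery information received"));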

      Parameters:
      fn - a function that initializes a builder to create the RecoveryRequest
      See Also:
    • recovery

      public CompletableFuture<RecoveryResponse> recovery()
      Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.

      All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.

      Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.

      Recovery automatically occurs during the following processes:

      • When creating an index for the first time.
      • When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
      • Creation of new replica shard copies from the primary.
      • Relocation of a shard copy to a different node in the same cluster.
      • A snapshot restore operation.
      • A clone, shrink, or split operation.

      You can determine the cause of a shard recovery using the recovery or cat recovery APIs.

      The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.

      See Also:
    • refresh

      Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

      By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

      Refresh requests are synchronous and do not return a response until the refresh operation completes.

      Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

      If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.

      See Also:
    • refresh

      Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

      By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

      Refresh requests are synchronous and do not return a response until the refresh operation completes.

      Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

      If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
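
      For example, a minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.refresh(r -> r.index("my-index"))
           .thenAccept(resp -> System.out.println("refresh completed"));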

      Parameters:
      fn - a function that initializes a builder to create the RefreshRequest
      See Also:
    • refresh

      Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

      By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

      Refresh requests are synchronous and do not return a response until the refresh operation completes.

      Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

      If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.

      See Also:
    • reloadSearchAnalyzers

      Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.

      IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.

      You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.

      NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster--including nodes that don't contain a shard replica--before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.

      See Also:
    • reloadSearchAnalyzers

      Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.

      IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.

      You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.

      NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster--including nodes that don't contain a shard replica--before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
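
      A minimal usage sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.reloadSearchAnalyzers(r -> r.index("my-index"))
           .thenAccept(resp ->
               System.out.println("search analyzers reloaded"));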

      Parameters:
      fn - a function that initializes a builder to create the ReloadSearchAnalyzersRequest
      See Also:
    • resolveCluster

      Resolve the cluster.

      Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.

      This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.

      You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.

      For each cluster in the index expression, information is returned about:

      • Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
      • Whether each remote cluster is configured with skip_unavailable as true or false.
      • Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
      • Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
      • Cluster version information, including the Elasticsearch server version.

      For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.

      Note on backwards compatibility

      The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if any errors occur, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.

      Advantages of using this endpoint before a cross-cluster search

      You may want to exclude a cluster or index from a search when:

      • A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
      • A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
      • The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
      • A remote cluster is an older version that does not support the feature you want to use in your search.

      Test availability of remote clusters

      The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.

      You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.

      See Also:
    • resolveCluster

      Resolve the cluster.

      Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.

      This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.

      You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.

      For each cluster in the index expression, information is returned about:

      • Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
      • Whether each remote cluster is configured with skip_unavailable as true or false.
      • Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
      • Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
      • Cluster version information, including the Elasticsearch server version.

      For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.

      Note on backwards compatibility

      The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if any errors occur, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.

      Advantages of using this endpoint before a cross-cluster search

      You may want to exclude a cluster or index from a search when:

      • A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
      • A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
      • The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
      • A remote cluster is an older version that does not support the feature you want to use in your search.

      Test availability of remote clusters

      The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.

      You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
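
      For example, a minimal sketch that checks the local cluster and remote clusters matching cluster* (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index expressions are placeholders):

       indices.resolveCluster(r -> r.name("my-index-*", "cluster*:my-index-*"))
           .thenAccept(resp ->
               System.out.println("resolve cluster response received"));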

      Parameters:
      fn - a function that initializes a builder to create the ResolveClusterRequest
      See Also:
    • resolveCluster

      public CompletableFuture<ResolveClusterResponse> resolveCluster()
      Resolve the cluster.

      Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.

      This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.

      You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.

      For each cluster in the index expression, information is returned about:

      • Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
      • Whether each remote cluster is configured with skip_unavailable as true or false.
      • Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
      • Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
      • Cluster version information, including the Elasticsearch server version.

      For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.

      Note on backwards compatibility

      The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if any errors occur, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.

      Advantages of using this endpoint before a cross-cluster search

      You may want to exclude a cluster or index from a search when:

      • A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
      • A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
      • The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
      • A remote cluster is an older version that does not support the feature you want to use in your search.

      Test availability of remote clusters

      The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.

      You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.

      See Also:
    • resolveIndex

      Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
      See Also:
    • resolveIndex

      Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
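
      A minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the expression is a placeholder):

       indices.resolveIndex(r -> r.name("my-index-*"))
           .thenAccept(resp ->
               System.out.println("resolve index response received"));
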
      Parameters:
      fn - a function that initializes a builder to create the ResolveIndexRequest
      See Also:
    • rollover

      Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.

      The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.

      Roll over a data stream

      If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.

      Roll over an index alias with a write index

      TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.

      If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.

      Roll over an index alias with one index

      If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.

      NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.

      Increment index names for an alias

      When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.

      If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.

      See Also:
    • rollover

      Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.

      The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.

      Roll over a data stream

      If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.

      Roll over an index alias with a write index

      TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.

      If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.

      Roll over an index alias with one index

      If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.

      NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.

      Increment index names for an alias

      When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.

      If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
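
      For example, a minimal sketch that rolls over a target once it is at least seven days old or holds 100,000 documents (indices is assumed to be an ElasticsearchIndicesAsyncClient; the alias or data stream name is a placeholder):

       indices.rollover(r -> r
               .alias("my-data-stream")
               .conditions(c -> c
                   .maxAge(t -> t.time("7d"))
                   .maxDocs(100000L)))
           .thenAccept(resp ->
               System.out.println("rolled over to: " + resp.newIndex()));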

      Parameters:
      fn - a function that initializes a builder to create the RolloverRequest
      See Also:
    • segments

      Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
      See Also:
    • segments

      Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
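
      A minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.segments(s -> s.index("my-index"))
           .thenAccept(resp -> System.out.println("segment information received"));
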
      Parameters:
      fn - a function that initializes a builder to create the SegmentsRequest
      See Also:
    • segments

      public CompletableFuture<SegmentsResponse> segments()
      Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
      See Also:
    • shardStores

      Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.

      The index shard stores API returns the following information:

      • The node on which each replica shard exists.
      • The allocation ID for each replica shard.
      • A unique ID for each replica shard.
      • Any errors encountered while opening the shard index or from an earlier failure.

      By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.

      See Also:
    • shardStores

      Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.

      The index shard stores API returns the following information:

      • The node on which each replica shard exists.
      • The allocation ID for each replica shard.
      • A unique ID for each replica shard.
      • Any errors encountered while opening the shard index or from an earlier failure.

      By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
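
      For example, a minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.shardStores(s -> s.index("my-index"))
           .thenAccept(resp -> System.out.println("shard store information received"));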

      Parameters:
      fn - a function that initializes a builder to create the ShardStoresRequest
      See Also:
    • shardStores

      public CompletableFuture<ShardStoresResponse> shardStores()
      Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.

      The index shard stores API returns the following information:

      • The node on which each replica shard exists.
      • The allocation ID for each replica shard.
      • A unique ID for each replica shard.
      • Any errors encountered while opening the shard index or from an earlier failure.

      By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.

      See Also:
    • shrink

      Shrink an index. Shrink an index into a new index with fewer primary shards.

      Before you can shrink an index:

      • The index must be read-only.
      • A copy of every shard in the index must reside on the same node.
      • The index must have a green health status.

      To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.

      The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.

      The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.

      A shrink operation:

      • Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
      • Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time-consuming process. Also, if you use multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
      • Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.

      IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:

      • The target index must not exist.
      • The source index must have more primary shards than the target index.
      • The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
      • The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
      • The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
      See Also:
    • shrink

      Shrink an index. Shrink an index into a new index with fewer primary shards.

      Before you can shrink an index:

      • The index must be read-only.
      • A copy of every shard in the index must reside on the same node.
      • The index must have a green health status.

      To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.

      The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.

      The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.

      A shrink operation:

      • Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
      • Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time-consuming process. Also, if you use multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
      • Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.

      IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:

      • The target index must not exist.
      • The source index must have more primary shards than the target index.
      • The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
      • The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
      • The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
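
      For example, a minimal sketch of shrinking into a single-shard index (indices is assumed to be an ElasticsearchIndicesAsyncClient, co.elastic.clients.json.JsonData is imported, and the index names are placeholders):

       indices.shrink(s -> s
               .index("my-source-index")
               .target("my-shrunken-index")
               .settings("index.number_of_shards", JsonData.of(1)))
           .thenAccept(resp ->
               System.out.println("shrink acknowledged: " + resp.acknowledged()));
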
      Parameters:
      fn - a function that initializes a builder to create the ShrinkRequest
      See Also:
    • simulateIndexTemplate

      Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
      See Also:
    • simulateIndexTemplate

      Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
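
      A minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the index name is a placeholder):

       indices.simulateIndexTemplate(s -> s.name("my-index-000001"))
           .thenAccept(resp ->
               System.out.println("simulated index configuration received"));
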
      Parameters:
      fn - a function that initializes a builder to create the SimulateIndexTemplateRequest
      See Also:
    • simulateTemplate

      Simulate an index template. Get the index configuration that would be applied by a particular index template.
      See Also:
    • simulateTemplate

      Simulate an index template. Get the index configuration that would be applied by a particular index template.
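
      A minimal sketch (indices is assumed to be an ElasticsearchIndicesAsyncClient; the template name is a placeholder):

       indices.simulateTemplate(s -> s.name("my-template"))
           .thenAccept(resp ->
               System.out.println("simulated template configuration received"));
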
      Parameters:
      fn - a function that initializes a builder to create the SimulateTemplateRequest
      See Also:
    • simulateTemplate

      public CompletableFuture<SimulateTemplateResponse> simulateTemplate()
      Simulate an index template. Get the index configuration that would be applied by a particular index template.
      See Also:
    • split

      public CompletableFuture<SplitResponse> split(SplitRequest request)
      Split an index. Split an index into a new index with more primary shards.
      Before you can split an index:

      • The index must be read-only.

      • The cluster health status must be green.

      You can make an index read-only with the following request, which uses the add index block API:

       PUT /my_source_index/_block/write
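
      With this client, the equivalent write block can be added through the addBlock method. A minimal sketch, assuming an ElasticsearchIndicesAsyncClient named indices; the IndicesBlockOptions enum constant name is taken from the generated indices.add_block package and may differ between client versions:

       indices.addBlock(b -> b
           .index("my_source_index")            // placeholder source index
           .block(IndicesBlockOptions.Write)    // write block, as in the PUT request above
       ).thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));
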
       
       

      The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.

      The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.

      A split operation:

      • Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
      • Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
      • Hashes all documents again, after low-level files are created, to delete documents that belong to a different shard.
      • Recovers the target index as though it were a closed index which had just been re-opened.

      IMPORTANT: Indices can only be split if they satisfy the following requirements:

      • The target index must not exist.
      • The source index must have fewer primary shards than the target index.
      • The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
      • The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
      See Also:
    • split

      Split an index. Split an index into a new index with more primary shards.
      Before you can split an index:

      • The index must be read-only.

      • The cluster health status must be green.

      You can make an index read-only with the following request, which uses the add index block API:

       PUT /my_source_index/_block/write
       
       

      The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.

      The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.

      A split operation:

      • Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
      • Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
      • Hashes all documents again, after low-level files are created, to delete documents that belong to a different shard.
      • Recovers the target index as though it were a closed index which had just been re-opened.

      IMPORTANT: Indices can only be split if they satisfy the following requirements:

      • The target index must not exist.
      • The source index must have fewer primary shards than the target index.
      • The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
      • The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
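
      As an illustrative sketch (placeholder names; the single-entry settings setter shown here is one of the generated builder variants), assuming an ElasticsearchIndicesAsyncClient named indices:

       CompletableFuture<SplitResponse> split = indices.split(s -> s
           .index("my_source_index")                            // source index (write-blocked, green health)
           .target("my_split_index")                            // target index: must not already exist
           .settings("index.number_of_shards", JsonData.of(4))  // target primary shard count (placeholder value)
       );
       split.thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));
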
      Parameters:
      fn - a function that initializes a builder to create the SplitRequest
      See Also:
    • stats

      Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.

      By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.

      To get shard-level statistics, set the level parameter to shards.

      NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.

      See Also:
    • stats

      Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.

      By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.

      To get shard-level statistics, set the level parameter to shards.

      NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
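
      A hypothetical sketch of requesting shard-level statistics with the lambda builder, assuming an ElasticsearchIndicesAsyncClient named indices:

       indices.stats(s -> s
           .index("my-index")      // placeholder index; omit to cover all indices
           .level(Level.Shards)    // shard-level statistics (co.elastic.clients.elasticsearch._types.Level)
       ).thenAccept(resp -> System.out.println(resp));
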

      Parameters:
      fn - a function that initializes a builder to create the IndicesStatsRequest
      See Also:
    • stats

      Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.

      By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.

      To get shard-level statistics, set the level parameter to shards.

      NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.

      See Also:
    • updateAliases

      Create or update an alias. Adds a data stream or index to an alias.
      See Also:
    • updateAliases

      Create or update an alias. Adds a data stream or index to an alias.
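
      A minimal sketch of adding an index to an alias via an add action (placeholder names), assuming an ElasticsearchIndicesAsyncClient named indices:

       indices.updateAliases(u -> u
           .actions(a -> a.add(add -> add
               .index("my-index")   // placeholder index (or data stream) to add
               .alias("my-alias")   // placeholder alias name
           ))
       ).thenAccept(resp -> System.out.println("acknowledged: " + resp.acknowledged()));
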
      Parameters:
      fn - a function that initializes a builder to create the UpdateAliasesRequest
      See Also:
    • updateAliases

      public CompletableFuture<UpdateAliasesResponse> updateAliases()
      Create or update an alias. Adds a data stream or index to an alias.
      See Also:
    • validateQuery

      Validate a query. Validates a query without running it.
      See Also:
    • validateQuery

      Validate a query. Validates a query without running it.
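
      A minimal sketch that validates a trivial match_all query against a placeholder index, assuming an ElasticsearchIndicesAsyncClient named indices:

       indices.validateQuery(v -> v
           .index("my-index")                // placeholder index
           .query(q -> q.matchAll(m -> m))   // any Query DSL query; match_all used as a trivial example
           .explain(true)                    // include an explanation of why the query is or is not valid
       ).thenAccept(resp -> System.out.println("valid: " + resp.valid()));
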
      Parameters:
      fn - a function that initializes a builder to create the ValidateQueryRequest
      See Also:
    • validateQuery

      public CompletableFuture<ValidateQueryResponse> validateQuery()
      Validate a query. Validates a query without running it.
      See Also: