Class ElasticsearchIndicesClient
- All Implemented Interfaces:
Closeable, AutoCloseable
-
Field Summary
Fields inherited from class co.elastic.clients.ApiClient
transport, transportOptions
-
Constructor Summary
Constructors
ElasticsearchIndicesClient(ElasticsearchTransport transport)
ElasticsearchIndicesClient(ElasticsearchTransport transport, TransportOptions transportOptions)
-
Method Summary
Each method below that takes a request object also has a final overload taking a Function<Request.Builder, ObjectBuilder<Request>> lambda that builds the request inline.
addBlock(AddBlockRequest request) - Add an index block.
analyze(), analyze(AnalyzeRequest request) - Get tokens from text analysis.
cancelMigrateReindex(CancelMigrateReindexRequest request) - Cancel a migration reindex operation.
clearCache(), clearCache(ClearCacheRequest request) - Clear the cache.
clone(CloneIndexRequest request) - Clone an index.
close(CloseIndexRequest request) - Close an index.
create(CreateIndexRequest request) - Create an index.
createDataStream(CreateDataStreamRequest request) - Create a data stream.
createFrom(CreateFromRequest request) - Create an index from a source index.
dataStreamsStats(), dataStreamsStats(DataStreamsStatsRequest request) - Get data stream stats.
delete(DeleteIndexRequest request) - Delete indices.
deleteAlias(DeleteAliasRequest request) - Delete an alias.
deleteDataLifecycle(DeleteDataLifecycleRequest request) - Delete data stream lifecycles.
deleteDataStream(DeleteDataStreamRequest request) - Delete data streams.
deleteIndexTemplate(DeleteIndexTemplateRequest request) - Delete an index template.
deleteTemplate(DeleteTemplateRequest request) - Delete a legacy index template.
diskUsage(DiskUsageRequest request) - Analyze the index disk usage.
downsample(DownsampleRequest request) - Downsample an index.
exists(ExistsRequest request) - Check indices.
existsAlias(ExistsAliasRequest request) - Check aliases.
existsIndexTemplate(ExistsIndexTemplateRequest request) - Check index templates.
existsTemplate(ExistsTemplateRequest request) - Check existence of index templates.
explainDataLifecycle(ExplainDataLifecycleRequest request) - Get the status for a data stream lifecycle.
fieldUsageStats(FieldUsageStatsRequest request) - Get field usage stats.
flush(), flush(FlushRequest request) - Flush data streams or indices.
forcemerge(), forcemerge(ForcemergeRequest request) - Force a merge.
get(GetIndexRequest request) - Get index information.
getAlias(), getAlias(GetAliasRequest request) - Get aliases.
getDataLifecycle(GetDataLifecycleRequest request) - Get data stream lifecycles.
getDataLifecycleStats() - Get data stream lifecycle stats.
getDataStream(), getDataStream(GetDataStreamRequest request) - Get data streams.
getFieldMapping(GetFieldMappingRequest request) - Get mapping definitions.
getIndexTemplate(), getIndexTemplate(GetIndexTemplateRequest request) - Get index templates.
getMapping(), getMapping(GetMappingRequest request) - Get mapping definitions.
getMigrateReindexStatus(GetMigrateReindexStatusRequest request) - Get the migration reindexing status.
getSettings(), getSettings(GetIndicesSettingsRequest request) - Get index settings.
getTemplate(), getTemplate(GetTemplateRequest request) - Get index templates.
migrateReindex(MigrateReindexRequest request) - Reindex legacy backing indices.
migrateToDataStream(MigrateToDataStreamRequest request) - Convert an index alias to a data stream.
modifyDataStream(ModifyDataStreamRequest request) - Update data streams.
open(OpenRequest request) - Open a closed index.
promoteDataStream(PromoteDataStreamRequest request) - Promote a data stream.
putAlias(PutAliasRequest request) - Create or update an alias.
putDataLifecycle(PutDataLifecycleRequest request) - Update data stream lifecycles.
putIndexTemplate(PutIndexTemplateRequest request) - Create or update an index template.
putMapping(PutMappingRequest request) - Update field mappings.
putSettings(), putSettings(PutIndicesSettingsRequest request) - Update index settings.
putTemplate(PutTemplateRequest request) - Create or update an index template.
recovery(), recovery(RecoveryRequest request) - Get index recovery information.
refresh(), refresh(RefreshRequest request) - Refresh an index.
reloadSearchAnalyzers(ReloadSearchAnalyzersRequest request) - Reload search analyzers.
resolveCluster(), resolveCluster(ResolveClusterRequest request) - Resolve the cluster.
resolveIndex(ResolveIndexRequest request) - Resolve indices.
rollover(RolloverRequest request) - Roll over to a new index.
segments(), segments(SegmentsRequest request) - Get index segments.
shardStores(), shardStores(ShardStoresRequest request) - Get index shard stores.
shrink(ShrinkRequest request) - Shrink an index.
simulateIndexTemplate(SimulateIndexTemplateRequest request) - Simulate an index.
simulateTemplate(), simulateTemplate(SimulateTemplateRequest request) - Simulate an index template.
split(SplitRequest request) - Split an index.
stats(), stats(IndicesStatsRequest request) - Get index statistics.
updateAliases(), updateAliases(UpdateAliasesRequest request) - Create or update an alias.
validateQuery(), validateQuery(ValidateQueryRequest request) - Validate a query.
withTransportOptions(TransportOptions transportOptions) - Creates a new client with some request options.
Methods inherited from class co.elastic.clients.ApiClient
_jsonpMapper, _transport, _transportOptions, close, getDeserializer, withTransportOptions
-
Constructor Details
-
ElasticsearchIndicesClient
public ElasticsearchIndicesClient(ElasticsearchTransport transport)
-
ElasticsearchIndicesClient
public ElasticsearchIndicesClient(ElasticsearchTransport transport, @Nullable TransportOptions transportOptions)
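In practice this client is usually obtained from an ElasticsearchClient rather than constructed directly. A minimal setup sketch (host and port are placeholders; later examples on this page assume this client variable):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;

RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);
ElasticsearchIndicesClient indices = client.indices(); // equivalent to new ElasticsearchIndicesClient(transport)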
-
-
Method Details
-
withTransportOptions
public ElasticsearchIndicesClient withTransportOptions(@Nullable TransportOptions transportOptions)
Description copied from class: ApiClient
Creates a new client with some request options.
- Specified by:
withTransportOptions in class ApiClient<ElasticsearchTransport, ElasticsearchIndicesClient>
-
addBlock
public AddBlockResponse addBlock(AddBlockRequest request) throws IOException, ElasticsearchException
Add an index block. Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
- Throws:
IOException
ElasticsearchException
- See Also:
-
addBlock
public final AddBlockResponse addBlock(Function<AddBlockRequest.Builder, ObjectBuilder<AddBlockRequest>> fn) throws IOException, ElasticsearchException
Add an index block. Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
- Parameters:
fn
- a function that initializes a builder to create theAddBlockRequest
- Throws:
IOException
ElasticsearchException
- See Also:
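For illustration, a sketch of the builder-lambda overload; the index name is a placeholder, and the block type enum (IndicesBlockOptions) is assumed from the indices package:

AddBlockResponse resp = client.indices().addBlock(b -> b
    .index("my-index")
    .block(IndicesBlockOptions.Write) // other options: Metadata, Read, ReadOnly
);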
-
analyze
public AnalyzeResponse analyze(AnalyzeRequest request) throws IOException, ElasticsearchException
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens. Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens is generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- Throws:
IOException
ElasticsearchException
- See Also:
-
analyze
public final AnalyzeResponse analyze(Function<AnalyzeRequest.Builder, ObjectBuilder<AnalyzeRequest>> fn) throws IOException, ElasticsearchException
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens. Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens is generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- Parameters:
fn
- a function that initializes a builder to create theAnalyzeRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
analyze
public AnalyzeResponse analyze() throws IOException, ElasticsearchException
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens. Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more than this limit of tokens is generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
- Throws:
IOException
ElasticsearchException
- See Also:
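A small usage sketch of the request-builder overload; the analyzer and text are example values:

AnalyzeResponse resp = client.indices().analyze(a -> a
    .analyzer("standard")
    .text("Quick brown fox")
);
resp.tokens().forEach(t -> System.out.println(t.token()));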
-
cancelMigrateReindex
public CancelMigrateReindexResponse cancelMigrateReindex(CancelMigrateReindexRequest request) throws IOException, ElasticsearchException
Cancel a migration reindex operation. Cancel a migration reindex attempt for a data stream or index.
- Throws:
IOException
ElasticsearchException
- See Also:
-
cancelMigrateReindex
public final CancelMigrateReindexResponse cancelMigrateReindex(Function<CancelMigrateReindexRequest.Builder, ObjectBuilder<CancelMigrateReindexRequest>> fn) throws IOException, ElasticsearchException
Cancel a migration reindex operation. Cancel a migration reindex attempt for a data stream or index.
- Parameters:
fn
- a function that initializes a builder to create theCancelMigrateReindexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
clearCache
public ClearCacheResponse clearCache(ClearCacheRequest request) throws IOException, ElasticsearchException
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache of only specific fields, use the fields parameter.
- Throws:
IOException
ElasticsearchException
- See Also:
-
clearCache
public final ClearCacheResponse clearCache(Function<ClearCacheRequest.Builder, ObjectBuilder<ClearCacheRequest>> fn) throws IOException, ElasticsearchException
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache of only specific fields, use the fields parameter.
- Parameters:
fn
- a function that initializes a builder to create theClearCacheRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
clearCache
public ClearCacheResponse clearCache() throws IOException, ElasticsearchException
Clear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream's backing indices.
By default, the clear cache API clears all caches. To clear only specific caches, use the fielddata, query, or request parameters. To clear the cache of only specific fields, use the fields parameter.
- Throws:
IOException
ElasticsearchException
- See Also:
-
clone
public CloneIndexResponse clone(CloneIndexRequest request) throws IOException, ElasticsearchException
Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.
IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.
Cloning works as follows:
- First, it creates a new target index with the same definition as the source index.
- Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be cloned if they meet the following requirements:
- The index must be marked as read-only and have a cluster health status of green.
- The target index must not exist.
- The source index must have the same number of primary shards as the target index.
- The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.
NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.
Monitor the cloning process
The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.
The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait for active shards
Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
- Throws:
IOException
ElasticsearchException
- See Also:
-
clone
public final CloneIndexResponse clone(Function<CloneIndexRequest.Builder, ObjectBuilder<CloneIndexRequest>> fn) throws IOException, ElasticsearchException
Clone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.
IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.
Cloning works as follows:
- First, it creates a new target index with the same definition as the source index.
- Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be cloned if they meet the following requirements:
- The index must be marked as read-only and have a cluster health status of green.
- The target index must not exist.
- The source index must have the same number of primary shards as the target index.
- The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.
NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.
Monitor the cloning process
The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.
The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait for active shards
Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
- Parameters:
fn
- a function that initializes a builder to create theCloneIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
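As a sketch, a write block can be added with addBlock before cloning, so the source index meets the read-only requirement (index names are placeholders; see the addBlock example above):

// 1. Block writes on the source index
client.indices().addBlock(b -> b.index("my-index").block(IndicesBlockOptions.Write));
// 2. Clone it into a new target index
CloneIndexResponse cloned = client.indices().clone(c -> c
    .index("my-index")
    .target("my-index-clone")
);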
-
close
public CloseIndexResponse close(CloseIndexRequest request) throws IOException, ElasticsearchException
Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
- Throws:
IOException
ElasticsearchException
- See Also:
-
close
public final CloseIndexResponse close(Function<CloseIndexRequest.Builder, ObjectBuilder<CloseIndexRequest>> fn) throws IOException, ElasticsearchException
Close an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behaviour can be turned off using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
- Parameters:
fn
- a function that initializes a builder to create theCloseIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
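A minimal sketch; the index name is a placeholder:

CloseIndexResponse closed = client.indices().close(c -> c.index("my-index"));
// ... later, reopen it with the open API:
OpenResponse reopened = client.indices().open(o -> o.index("my-index"));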
-
create
public CreateIndexResponse create(CreateIndexRequest request) throws IOException, ElasticsearchException
Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
- Throws:
IOException
ElasticsearchException
- See Also:
-
create
public final CreateIndexResponse create(Function<CreateIndexRequest.Builder, ObjectBuilder<CreateIndexRequest>> fn) throws IOException, ElasticsearchException
Create an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
- Parameters:
fn
- a function that initializes a builder to create theCreateIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
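A hedged sketch showing settings, a simple mapping, and an explicit wait-for-active-shards count (all names and values are examples):

CreateIndexResponse created = client.indices().create(c -> c
    .index("my-index")
    .settings(s -> s.numberOfShards("1").numberOfReplicas("1"))
    .mappings(m -> m.properties("title", p -> p.text(t -> t)))
    .waitForActiveShards(w -> w.count(2))
);
boolean fullyStarted = created.acknowledged() && created.shardsAcknowledged();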
-
createDataStream
public CreateDataStreamResponse createDataStream(CreateDataStreamRequest request) throws IOException, ElasticsearchException
Create a data stream. You must have a matching index template with data stream enabled.
- Throws:
IOException
ElasticsearchException
- See Also:
-
createDataStream
public final CreateDataStreamResponse createDataStream(Function<CreateDataStreamRequest.Builder, ObjectBuilder<CreateDataStreamRequest>> fn) throws IOException, ElasticsearchException
Create a data stream. You must have a matching index template with data stream enabled.
- Parameters:
fn
- a function that initializes a builder to create theCreateDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
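A sketch; it assumes a composable index template with a data_stream definition matching this name already exists:

CreateDataStreamResponse ds = client.indices().createDataStream(d -> d.name("logs-app-default"));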
-
createFrom
public CreateFromResponse createFrom(CreateFromRequest request) throws IOException, ElasticsearchException
Create an index from a source index. Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.
- Throws:
IOException
ElasticsearchException
- See Also:
-
createFrom
public final CreateFromResponse createFrom(Function<CreateFromRequest.Builder, ObjectBuilder<CreateFromRequest>> fn) throws IOException, ElasticsearchException
Create an index from a source index. Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.
- Parameters:
fn
- a function that initializes a builder to create theCreateFromRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
dataStreamsStats
public DataStreamsStatsResponse dataStreamsStats(DataStreamsStatsRequest request) throws IOException, ElasticsearchException
Get data stream stats. Get statistics for one or more data streams.
- Throws:
IOException
ElasticsearchException
- See Also:
-
dataStreamsStats
public final DataStreamsStatsResponse dataStreamsStats(Function<DataStreamsStatsRequest.Builder, ObjectBuilder<DataStreamsStatsRequest>> fn) throws IOException, ElasticsearchException
Get data stream stats. Get statistics for one or more data streams.
- Parameters:
fn
- a function that initializes a builder to create theDataStreamsStatsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
dataStreamsStats
public DataStreamsStatsResponse dataStreamsStats() throws IOException, ElasticsearchException
Get data stream stats. Get statistics for one or more data streams.
- Throws:
IOException
ElasticsearchException
- See Also:
-
delete
public DeleteIndexResponse delete(DeleteIndexRequest request) throws IOException, ElasticsearchException
Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.
You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
- Throws:
IOException
ElasticsearchException
- See Also:
-
delete
public final DeleteIndexResponse delete(Function<DeleteIndexRequest.Builder, ObjectBuilder<DeleteIndexRequest>> fn) throws IOException, ElasticsearchException
Delete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.
You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
- Parameters:
fn
- a function that initializes a builder to create theDeleteIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
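A sketch; ignoreUnavailable avoids an error when the index is already gone (the name is a placeholder):

DeleteIndexResponse deleted = client.indices().delete(d -> d
    .index("my-old-index")
    .ignoreUnavailable(true)
);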
-
deleteAlias
public DeleteAliasResponse deleteAlias(DeleteAliasRequest request) throws IOException, ElasticsearchException
Delete an alias. Removes a data stream or index from an alias.
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteAlias
public final DeleteAliasResponse deleteAlias(Function<DeleteAliasRequest.Builder, ObjectBuilder<DeleteAliasRequest>> fn) throws IOException, ElasticsearchException
Delete an alias. Removes a data stream or index from an alias.
- Parameters:
fn
- a function that initializes a builder to create theDeleteAliasRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteDataLifecycle
public DeleteDataLifecycleResponse deleteDataLifecycle(DeleteDataLifecycleRequest request) throws IOException, ElasticsearchException
Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteDataLifecycle
public final DeleteDataLifecycleResponse deleteDataLifecycle(Function<DeleteDataLifecycleRequest.Builder, ObjectBuilder<DeleteDataLifecycleRequest>> fn) throws IOException, ElasticsearchException
Delete data stream lifecycles. Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
- Parameters:
fn
- a function that initializes a builder to create theDeleteDataLifecycleRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteDataStream
public DeleteDataStreamResponse deleteDataStream(DeleteDataStreamRequest request) throws IOException, ElasticsearchException
Delete data streams. Deletes one or more data streams and their backing indices.
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteDataStream
public final DeleteDataStreamResponse deleteDataStream(Function<DeleteDataStreamRequest.Builder, ObjectBuilder<DeleteDataStreamRequest>> fn) throws IOException, ElasticsearchException
Delete data streams. Deletes one or more data streams and their backing indices.
- Parameters:
fn
- a function that initializes a builder to create theDeleteDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteIndexTemplate
public DeleteIndexTemplateResponse deleteIndexTemplate(DeleteIndexTemplateRequest request) throws IOException, ElasticsearchException
Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteIndexTemplate
public final DeleteIndexTemplateResponse deleteIndexTemplate(Function<DeleteIndexTemplateRequest.Builder, ObjectBuilder<DeleteIndexTemplateRequest>> fn) throws IOException, ElasticsearchException
Delete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
- Parameters:
fn
- a function that initializes a builder to create theDeleteIndexTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteTemplate
public DeleteTemplateResponse deleteTemplate(DeleteTemplateRequest request) throws IOException, ElasticsearchException
Delete a legacy index template.
- Throws:
IOException
ElasticsearchException
- See Also:
-
deleteTemplate
public final DeleteTemplateResponse deleteTemplate(Function<DeleteTemplateRequest.Builder, ObjectBuilder<DeleteTemplateRequest>> fn) throws IOException, ElasticsearchException
Delete a legacy index template.
- Parameters:
fn
- a function that initializes a builder to create theDeleteTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
diskUsage
public DiskUsageResponse diskUsage(DiskUsageRequest request) throws IOException, ElasticsearchException
Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.
NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.
- Throws:
IOException
ElasticsearchException
- See Also:
-
diskUsage
public final DiskUsageResponse diskUsage(Function<DiskUsageRequest.Builder, ObjectBuilder<DiskUsageRequest>> fn) throws IOException, ElasticsearchException
Analyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.
NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.
- Parameters:
fn
- a function that initializes a builder to create theDiskUsageRequest
- Throws:
IOException
ElasticsearchException
- See Also:
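A sketch; the underlying REST API only runs when run_expensive_tasks=true, which the request builder is assumed to expose as runExpensiveTasks (the index name is a placeholder):

DiskUsageResponse usage = client.indices().diskUsage(d -> d
    .index("my-index")
    .runExpensiveTasks(true)
);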
-
downsample
public DownsampleResponse downsample(DownsampleRequest request) throws IOException, ElasticsearchException
Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
- Throws:
IOException
ElasticsearchException
- See Also:
-
downsample
public final DownsampleResponse downsample(Function<DownsampleRequest.Builder, ObjectBuilder<DownsampleRequest>> fn) throws IOException, ElasticsearchException
Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
- Parameters:
fn
- a function that initializes a builder to create theDownsampleRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
exists
public BooleanResponse exists(ExistsRequest request) throws IOException, ElasticsearchException
Check indices. Check if one or more indices, index aliases, or data streams exist.
- Throws:
IOException
ElasticsearchException
- See Also:
-
exists
public final BooleanResponse exists(Function<ExistsRequest.Builder, ObjectBuilder<ExistsRequest>> fn) throws IOException, ElasticsearchException
Check indices. Check if one or more indices, index aliases, or data streams exist.
- Parameters:
fn
- a function that initializes a builder to create theExistsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
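The exists-style methods return a BooleanResponse; its value() method carries the result. Sketch:

boolean indexExists = client.indices().exists(e -> e.index("my-index")).value();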
-
existsAlias
public BooleanResponse existsAlias(ExistsAliasRequest request) throws IOException, ElasticsearchException
Check aliases. Check if one or more data stream or index aliases exist.
- Throws:
IOException
ElasticsearchException
- See Also:
-
existsAlias
public final BooleanResponse existsAlias(Function<ExistsAliasRequest.Builder, ObjectBuilder<ExistsAliasRequest>> fn) throws IOException, ElasticsearchException
Check aliases. Check if one or more data stream or index aliases exist.
- Parameters:
fn
- a function that initializes a builder to create theExistsAliasRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
existsIndexTemplate
public BooleanResponse existsIndexTemplate(ExistsIndexTemplateRequest request) throws IOException, ElasticsearchException
Check index templates. Check whether index templates exist.
- Throws:
IOException
ElasticsearchException
- See Also:
-
existsIndexTemplate
public final BooleanResponse existsIndexTemplate(Function<ExistsIndexTemplateRequest.Builder, ObjectBuilder<ExistsIndexTemplateRequest>> fn) throws IOException, ElasticsearchException
Check index templates. Check whether index templates exist.
- Parameters:
fn
- a function that initializes a builder to create theExistsIndexTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
existsTemplate
public BooleanResponse existsTemplate(ExistsTemplateRequest request) throws IOException, ElasticsearchException
Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Throws:
IOException
ElasticsearchException
- See Also:
-
existsTemplate
public final BooleanResponse existsTemplate(Function<ExistsTemplateRequest.Builder, ObjectBuilder<ExistsTemplateRequest>> fn) throws IOException, ElasticsearchException
Check existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Parameters:
fn
- a function that initializes a builder to create theExistsTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
explainDataLifecycle
public ExplainDataLifecycleResponse explainDataLifecycle(ExplainDataLifecycleRequest request) throws IOException, ElasticsearchException
Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
- Throws:
IOException
ElasticsearchException
- See Also:
-
explainDataLifecycle
public final ExplainDataLifecycleResponse explainDataLifecycle(Function<ExplainDataLifecycleRequest.Builder, ObjectBuilder<ExplainDataLifecycleRequest>> fn) throws IOException, ElasticsearchException
Get the status for a data stream lifecycle. Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
- Parameters:
fn
- a function that initializes a builder to create theExplainDataLifecycleRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
fieldUsageStats
public FieldUsageStatsResponse fieldUsageStats(FieldUsageStatsRequest request) throws IOException, ElasticsearchException
Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
- Throws:
IOException
ElasticsearchException
- See Also:
-
fieldUsageStats
public final FieldUsageStatsResponse fieldUsageStats(Function<FieldUsageStatsRequest.Builder, ObjectBuilder<FieldUsageStatsRequest>> fn) throws IOException, ElasticsearchException
Get field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
- Parameters:
fn
- a function that initializes a builder to create theFieldUsageStatsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
flush
public FlushResponse flush(FlushRequest request) throws IOException, ElasticsearchException
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- Throws:
IOException
ElasticsearchException
- See Also:
-
flush
public final FlushResponse flush(Function<FlushRequest.Builder, ObjectBuilder<FlushRequest>> fn) throws IOException, ElasticsearchException
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- Parameters:
fn
- a function that initializes a builder to create theFlushRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
flush
public FlushResponse flush() throws IOException, ElasticsearchException
Flush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
- Throws:
IOException
ElasticsearchException
- See Also:
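A sketch of an explicit flush of one index (rarely needed, as noted above; the name is a placeholder):

FlushResponse flushed = client.indices().flush(f -> f.index("my-index"));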
-
forcemerge
public ForcemergeResponse forcemerge(ForcemergeRequest request) throws IOException, ElasticsearchException
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
Force merge makes the storage for the shard being merged temporarily increase, as it may require free space up to triple its size in case the max_num_segments parameter is set to 1, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- Throws:
IOException
ElasticsearchException
- See Also:
-
forcemerge
public final ForcemergeResponse forcemerge(Function<ForcemergeRequest.Builder, ObjectBuilder<ForcemergeRequest>> fn) throws IOException, ElasticsearchException
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task as the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
Force merge makes the storage for the shard being merged temporarily increase, as it may require free space up to triple its size in case the max_num_segments parameter is set to 1, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- Parameters:
fn
- a function that initializes a builder to create theForcemergeRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
forcemerge
Force a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices. Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless the request contains
wait_for_completion=false
). If the client connection is lost before completion, the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains
wait_for_completion=false
, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task, because the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at
_tasks/<task_id>
. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the
force_merge
threadpool. By default, each node has only a single
force_merge
thread, which means that the shards on that node are force-merged one at a time. If you expand the
force_merge
threadpool on a node, it will force merge its shards in parallel. Force merge temporarily increases the storage used by the shard being merged, as it may require free space of up to triple the shard's size when the
max_num_segments
parameter is set to
1
, to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
- Throws:
IOException
ElasticsearchException
- See Also:
-
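A minimal Java sketch of the builder overload above, assuming an ElasticsearchClient named client has already been constructed; the backing-index name is illustrative:
// Force merge a rolled-over backing index down to a single segment.
ForcemergeResponse response = client.indices().forcemerge(f -> f
    .index(".ds-my-data-stream-2099.03.07-000001")
    .maxNumSegments(1L));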
get
Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.- Throws:
IOException
ElasticsearchException
- See Also:
-
get
public final GetIndexResponse get(Function<GetIndexRequest.Builder, ObjectBuilder<GetIndexRequest>> fn) throws IOException, ElasticsearchException Get index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.- Parameters:
fn
- a function that initializes a builder to create theGetIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
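A minimal sketch, assuming an ElasticsearchClient named client and an illustrative index name:
// Fetch the settings, mappings, and aliases of one index.
GetIndexResponse response = client.indices().get(g -> g.index("my-index-000001"));
IndexState state = response.result().get("my-index-000001");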
getAlias
public GetAliasResponse getAlias(GetAliasRequest request) throws IOException, ElasticsearchException Get aliases. Retrieves information for one or more data stream or index aliases.- Throws:
IOException
ElasticsearchException
- See Also:
-
getAlias
public final GetAliasResponse getAlias(Function<GetAliasRequest.Builder, ObjectBuilder<GetAliasRequest>> fn) throws IOException, ElasticsearchException Get aliases. Retrieves information for one or more data stream or index aliases.- Parameters:
fn
- a function that initializes a builder to create theGetAliasRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getAlias
Get aliases. Retrieves information for one or more data stream or index aliases.- Throws:
IOException
ElasticsearchException
- See Also:
-
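A minimal sketch, assuming an ElasticsearchClient named client; the alias name is illustrative:
// List which indices carry the alias.
GetAliasResponse response = client.indices().getAlias(a -> a.name("my-alias"));
response.result().forEach((index, aliases) ->
    System.out.println(index + " -> " + aliases.aliases().keySet()));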
getDataLifecycle
public GetDataLifecycleResponse getDataLifecycle(GetDataLifecycleRequest request) throws IOException, ElasticsearchException Get data stream lifecycles. Get the data stream lifecycle configuration of one or more data streams.
- Throws:
IOException
ElasticsearchException
- See Also:
-
getDataLifecycle
public final GetDataLifecycleResponse getDataLifecycle(Function<GetDataLifecycleRequest.Builder, ObjectBuilder<GetDataLifecycleRequest>> fn) throws IOException, ElasticsearchException Get data stream lifecycles. Get the data stream lifecycle configuration of one or more data streams.
- Parameters:
fn
- a function that initializes a builder to create theGetDataLifecycleRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getDataLifecycleStats
public GetDataLifecycleStatsResponse getDataLifecycleStats() throws IOException, ElasticsearchException Get data stream lifecycle stats. Get statistics about the data streams that are managed by a data stream lifecycle.- Throws:
IOException
ElasticsearchException
- See Also:
-
getDataStream
public GetDataStreamResponse getDataStream(GetDataStreamRequest request) throws IOException, ElasticsearchException Get data streams. Get information about one or more data streams.
- Throws:
IOException
ElasticsearchException
- See Also:
-
getDataStream
public final GetDataStreamResponse getDataStream(Function<GetDataStreamRequest.Builder, ObjectBuilder<GetDataStreamRequest>> fn) throws IOException, ElasticsearchException Get data streams. Get information about one or more data streams.
- Parameters:
fn
- a function that initializes a builder to create theGetDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getDataStream
Get data streams. Get information about one or more data streams.
- Throws:
IOException
ElasticsearchException
- See Also:
-
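A minimal sketch, assuming an ElasticsearchClient named client; the stream name is illustrative:
// Print each matching data stream and the number of its backing indices.
GetDataStreamResponse response = client.indices().getDataStream(d -> d.name("my-data-stream"));
response.dataStreams().forEach(ds ->
    System.out.println(ds.name() + " has " + ds.indices().size() + " backing indices"));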
getFieldMapping
public GetFieldMappingResponse getFieldMapping(GetFieldMappingRequest request) throws IOException, ElasticsearchException Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices. This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.
- Throws:
IOException
ElasticsearchException
- See Also:
-
getFieldMapping
public final GetFieldMappingResponse getFieldMapping(Function<GetFieldMappingRequest.Builder, ObjectBuilder<GetFieldMappingRequest>> fn) throws IOException, ElasticsearchException Get mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices. This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.
- Parameters:
fn
- a function that initializes a builder to create theGetFieldMappingRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
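A minimal sketch, assuming an ElasticsearchClient named client; the index and field names are illustrative:
// Retrieve the mapping of a single field rather than the whole index mapping.
GetFieldMappingResponse response = client.indices().getFieldMapping(f -> f
    .index("my-index-000001")
    .fields("user.id"));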
getIndexTemplate
public GetIndexTemplateResponse getIndexTemplate(GetIndexTemplateRequest request) throws IOException, ElasticsearchException Get index templates. Get information about one or more index templates.- Throws:
IOException
ElasticsearchException
- See Also:
-
getIndexTemplate
public final GetIndexTemplateResponse getIndexTemplate(Function<GetIndexTemplateRequest.Builder, ObjectBuilder<GetIndexTemplateRequest>> fn) throws IOException, ElasticsearchException Get index templates. Get information about one or more index templates.- Parameters:
fn
- a function that initializes a builder to create theGetIndexTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getIndexTemplate
Get index templates. Get information about one or more index templates.- Throws:
IOException
ElasticsearchException
- See Also:
-
getMapping
public GetMappingResponse getMapping(GetMappingRequest request) throws IOException, ElasticsearchException Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- Throws:
IOException
ElasticsearchException
- See Also:
-
getMapping
public final GetMappingResponse getMapping(Function<GetMappingRequest.Builder, ObjectBuilder<GetMappingRequest>> fn) throws IOException, ElasticsearchException Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- Parameters:
fn
- a function that initializes a builder to create theGetMappingRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getMapping
Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.- Throws:
IOException
ElasticsearchException
- See Also:
-
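A minimal sketch, assuming an ElasticsearchClient named client and an illustrative index name:
// Read back the full mapping of an index.
GetMappingResponse response = client.indices().getMapping(m -> m.index("my-index-000001"));
TypeMapping mapping = response.result().get("my-index-000001").mappings();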
getMigrateReindexStatus
public GetMigrateReindexStatusResponse getMigrateReindexStatus(GetMigrateReindexStatusRequest request) throws IOException, ElasticsearchException Get the migration reindexing status. Get the status of a migration reindex attempt for a data stream or index.
- Throws:
IOException
ElasticsearchException
- See Also:
-
getMigrateReindexStatus
public final GetMigrateReindexStatusResponse getMigrateReindexStatus(Function<GetMigrateReindexStatusRequest.Builder, ObjectBuilder<GetMigrateReindexStatusRequest>> fn) throws IOException, ElasticsearchException Get the migration reindexing status. Get the status of a migration reindex attempt for a data stream or index.
- Parameters:
fn
- a function that initializes a builder to create theGetMigrateReindexStatusRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getSettings
public GetIndicesSettingsResponse getSettings(GetIndicesSettingsRequest request) throws IOException, ElasticsearchException Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- Throws:
IOException
ElasticsearchException
- See Also:
-
getSettings
public final GetIndicesSettingsResponse getSettings(Function<GetIndicesSettingsRequest.Builder, ObjectBuilder<GetIndicesSettingsRequest>> fn) throws IOException, ElasticsearchException Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- Parameters:
fn
- a function that initializes a builder to create theGetIndicesSettingsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getSettings
Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.- Throws:
IOException
ElasticsearchException
- See Also:
-
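A minimal sketch, assuming an ElasticsearchClient named client and an illustrative index name:
// Fetch effective settings, including defaults, for one index.
GetIndicesSettingsResponse response = client.indices().getSettings(s -> s
    .index("my-index-000001")
    .includeDefaults(true));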
getTemplate
public GetTemplateResponse getTemplate(GetTemplateRequest request) throws IOException, ElasticsearchException Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Throws:
IOException
ElasticsearchException
- See Also:
-
getTemplate
public final GetTemplateResponse getTemplate(Function<GetTemplateRequest.Builder, ObjectBuilder<GetTemplateRequest>> fn) throws IOException, ElasticsearchException Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Parameters:
fn
- a function that initializes a builder to create theGetTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
getTemplate
Get index templates. Get information about one or more index templates. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Throws:
IOException
ElasticsearchException
- See Also:
-
migrateReindex
public MigrateReindexResponse migrateReindex(MigrateReindexRequest request) throws IOException, ElasticsearchException Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- Throws:
IOException
ElasticsearchException
- See Also:
-
migrateReindex
public final MigrateReindexResponse migrateReindex(Function<MigrateReindexRequest.Builder, ObjectBuilder<MigrateReindexRequest>> fn) throws IOException, ElasticsearchException Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- Parameters:
fn
- a function that initializes a builder to create theMigrateReindexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
migrateReindex
Reindex legacy backing indices. Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
- Throws:
IOException
ElasticsearchException
- See Also:
-
migrateToDataStream
public MigrateToDataStreamResponse migrateToDataStream(MigrateToDataStreamRequest request) throws IOException, ElasticsearchException Convert an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria: the alias must have a write index; all indices for the alias must have a
@timestamp
field mapping of a
date
or
date_nanos
field type; the alias must not have any filters; the alias must not use custom routing. If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.- Throws:
IOException
ElasticsearchException
- See Also:
-
migrateToDataStream
public final MigrateToDataStreamResponse migrateToDataStream(Function<MigrateToDataStreamRequest.Builder, ObjectBuilder<MigrateToDataStreamRequest>> fn) throws IOException, ElasticsearchException Convert an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria: the alias must have a write index; all indices for the alias must have a
@timestamp
field mapping of a
date
or
date_nanos
field type; the alias must not have any filters; the alias must not use custom routing. If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.- Parameters:
fn
- a function that initializes a builder to create theMigrateToDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
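A minimal sketch, assuming an ElasticsearchClient named client and an illustrative alias that meets the criteria above:
// Convert the write alias and its indices into a data stream of the same name.
MigrateToDataStreamResponse response =
    client.indices().migrateToDataStream(m -> m.name("my-logs-alias"));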
modifyDataStream
public ModifyDataStreamResponse modifyDataStream(ModifyDataStreamRequest request) throws IOException, ElasticsearchException Update data streams. Performs one or more data stream modification actions in a single atomic operation.- Throws:
IOException
ElasticsearchException
- See Also:
-
modifyDataStream
public final ModifyDataStreamResponse modifyDataStream(Function<ModifyDataStreamRequest.Builder, ObjectBuilder<ModifyDataStreamRequest>> fn) throws IOException, ElasticsearchException Update data streams. Performs one or more data stream modification actions in a single atomic operation.- Parameters:
fn
- a function that initializes a builder to create theModifyDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
open
Open a closed index. For data streams, the API opens any closed backing indices. A closed index is blocked for read/write operations and does not allow all the operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Because closed indices do not have to maintain internal data structures for indexing or searching documents, they impose a smaller overhead on the cluster.
When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the
ignore_unavailable=true
parameter. By default, you must explicitly name the indices you are opening or closing. To open or close indices with
_all
,
*
, or other wildcard expressions, change the
action.destructive_requires_name
setting to
false
. This setting can also be changed with the cluster update settings API. Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting
cluster.indices.close.enable
to
false
. Because opening or closing an index allocates its shards, the
wait_for_active_shards
setting on index creation applies to the
_open
and
_close
index actions as well.- Throws:
IOException
ElasticsearchException
- See Also:
-
open
public final OpenResponse open(Function<OpenRequest.Builder, ObjectBuilder<OpenRequest>> fn) throws IOException, ElasticsearchException Open a closed index. For data streams, the API opens any closed backing indices. A closed index is blocked for read/write operations and does not allow all the operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Because closed indices do not have to maintain internal data structures for indexing or searching documents, they impose a smaller overhead on the cluster.
When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the
ignore_unavailable=true
parameter. By default, you must explicitly name the indices you are opening or closing. To open or close indices with
_all
,
*
, or other wildcard expressions, change the
action.destructive_requires_name
setting to
false
. This setting can also be changed with the cluster update settings API. Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting
cluster.indices.close.enable
to
false
. Because opening or closing an index allocates its shards, the
wait_for_active_shards
setting on index creation applies to the
_open
and
_close
index actions as well.- Parameters:
fn
- a function that initializes a builder to create theOpenRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
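A minimal sketch, assuming an ElasticsearchClient named client and an illustrative index name:
// Reopen a closed index and wait until at least one active shard copy per shard.
OpenResponse response = client.indices().open(o -> o
    .index("my-index-000001")
    .waitForActiveShards(w -> w.count(1)));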
promoteDataStream
public PromoteDataStreamResponse promoteDataStream(PromoteDataStreamRequest request) throws IOException, ElasticsearchException Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream. With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.
- Throws:
IOException
ElasticsearchException
- See Also:
-
promoteDataStream
public final PromoteDataStreamResponse promoteDataStream(Function<PromoteDataStreamRequest.Builder, ObjectBuilder<PromoteDataStreamRequest>> fn) throws IOException, ElasticsearchException Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream. With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.
- Parameters:
fn
- a function that initializes a builder to create thePromoteDataStreamRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
putAlias
public PutAliasResponse putAlias(PutAliasRequest request) throws IOException, ElasticsearchException Create or update an alias. Adds a data stream or index to an alias.- Throws:
IOException
ElasticsearchException
- See Also:
-
putAlias
public final PutAliasResponse putAlias(Function<PutAliasRequest.Builder, ObjectBuilder<PutAliasRequest>> fn) throws IOException, ElasticsearchException Create or update an alias. Adds a data stream or index to an alias.- Parameters:
fn
- a function that initializes a builder to create thePutAliasRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
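A minimal sketch, assuming an ElasticsearchClient named client; the index and alias names are illustrative:
// Add the index to the alias and make it the alias's write index.
PutAliasResponse response = client.indices().putAlias(a -> a
    .index("my-index-000001")
    .name("my-alias")
    .isWriteIndex(true));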
putDataLifecycle
public PutDataLifecycleResponse putDataLifecycle(PutDataLifecycleRequest request) throws IOException, ElasticsearchException Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.- Throws:
IOException
ElasticsearchException
- See Also:
-
putDataLifecycle
public final PutDataLifecycleResponse putDataLifecycle(Function<PutDataLifecycleRequest.Builder, ObjectBuilder<PutDataLifecycleRequest>> fn) throws IOException, ElasticsearchException Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.- Parameters:
fn
- a function that initializes a builder to create thePutDataLifecycleRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
putIndexTemplate
public PutIndexTemplateResponse putIndexTemplate(PutIndexTemplateRequest request) throws IOException, ElasticsearchException Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.
You can use C-style
/* */
block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Multiple matching templates
If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.
Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.
Composing aliases, mappings, and settings
When multiple component templates are specified in the
composed_of
field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also to root options like
dynamic_templates
and
meta
. If an earlier component contains a
dynamic_templates
block, then by default new
dynamic_templates
entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.- Throws:
IOException
ElasticsearchException
- See Also:
-
putIndexTemplate
public final PutIndexTemplateResponse putIndexTemplate(Function<PutIndexTemplateRequest.Builder, ObjectBuilder<PutIndexTemplateRequest>> fn) throws IOException, ElasticsearchException Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.
You can use C-style
/* */
block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Multiple matching templates
If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.
Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.
Composing aliases, mappings, and settings
When multiple component templates are specified in the
composed_of
field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also to root options like
dynamic_templates
and
meta
. If an earlier component contains a
dynamic_templates
block, then by default new
dynamic_templates
entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.- Parameters:
fn
- a function that initializes a builder to create thePutIndexTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
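A minimal sketch, assuming an ElasticsearchClient named client; the template name, index pattern, and mapping are illustrative:
// A composable template applied to new indices matching "logs-*".
PutIndexTemplateResponse response = client.indices().putIndexTemplate(t -> t
    .name("my-logs-template")
    .indexPatterns("logs-*")
    .template(tpl -> tpl
        .settings(s -> s.numberOfShards("1"))
        .mappings(m -> m.properties("@timestamp", p -> p.date(d -> d)))));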
putMapping
public PutMappingResponse putMapping(PutMappingRequest request) throws IOException, ElasticsearchException Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.
Add multi-fields to an existing field
Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
Change supported mapping parameters for an existing field
The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the
ignore_above
parameter.
Change the mapping of an existing field
Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.
If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
Rename a field
Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
- Throws:
IOException
ElasticsearchException
- See Also:
-
putMapping
public final PutMappingResponse putMapping(Function<PutMappingRequest.Builder, ObjectBuilder<PutMappingRequest>> fn) throws IOException, ElasticsearchException Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.
Add multi-fields to an existing field
Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
Change supported mapping parameters for an existing field
The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the
ignore_above
parameter.
Change the mapping of an existing field
Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.
If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
Rename a field
Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
- Parameters:
fn
- a function that initializes a builder to create thePutMappingRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
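A minimal sketch, assuming an ElasticsearchClient named client; the index and field names are illustrative:
// Add a new keyword field to an existing index.
PutMappingResponse response = client.indices().putMapping(m -> m
    .index("my-index-000001")
    .properties("user_id", p -> p.keyword(k -> k.ignoreAbove(256))));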
putSettings
public PutIndicesSettingsResponse putSettings(PutIndicesSettingsRequest request) throws IOException, ElasticsearchException Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To keep existing settings from being updated, set the
preserve_existing
parameter to
true
. NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
- Throws:
IOException
ElasticsearchException
- See Also:
-
putSettings
public final PutIndicesSettingsResponse putSettings(Function<PutIndicesSettingsRequest.Builder, ObjectBuilder<PutIndicesSettingsRequest>> fn) throws IOException, ElasticsearchException Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To keep existing settings from being updated, set the
preserve_existing
parameter to
true
. NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
- Parameters:
fn
- a function that initializes a builder to create thePutIndicesSettingsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
putSettings
Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in the index module documentation. To keep existing settings from being updated, set the
preserve_existing
parameter to
true
. NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
- Throws:
IOException
ElasticsearchException
- See Also:
-
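A minimal sketch, assuming an ElasticsearchClient named client; the index name and values are illustrative:
// Dynamically change the replica count and refresh interval of a live index.
PutIndicesSettingsResponse response = client.indices().putSettings(s -> s
    .index("my-index-000001")
    .settings(is -> is
        .numberOfReplicas("2")
        .refreshInterval(r -> r.time("30s"))));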
putTemplate
public PutTemplateResponse putTemplate(PutTemplateRequest request) throws IOException, ElasticsearchException Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style
/* */
block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
- Throws:
IOException
ElasticsearchException
- See Also:
-
putTemplate
public final PutTemplateResponse putTemplate(Function<PutTemplateRequest.Builder, ObjectBuilder<PutTemplateRequest>> fn) throws IOException, ElasticsearchException Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name. IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style
/* */
block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower orders being applied first and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
- Parameters:
fn
- a function that initializes a builder to create thePutTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
recovery
public RecoveryResponse recovery(RecoveryRequest request) throws IOException, ElasticsearchException Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices. All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
- Throws:
IOException
ElasticsearchException
- See Also:
-
recovery
public final RecoveryResponse recovery(Function<RecoveryRequest.Builder, ObjectBuilder<RecoveryRequest>> fn) throws IOException, ElasticsearchException Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices. All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
- Parameters:
fn
- a function that initializes a builder to create theRecoveryRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
recovery
Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices. All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
- Throws:
IOException
ElasticsearchException
- See Also:
-
refresh
Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices. By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the
index.refresh_interval
setting. Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's
refresh=wait_for
query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.- Throws:
IOException
ElasticsearchException
- See Also:
-
refresh
public final RefreshResponse refresh(Function<RefreshRequest.Builder, ObjectBuilder<RefreshRequest>> fn) throws IOException, ElasticsearchException Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices. By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the
index.refresh_interval
setting. Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's
refresh=wait_for
query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.- Parameters:
fn
- a function that initializes a builder to create theRefreshRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
refresh
Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices. By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the
index.refresh_interval
setting. Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's
refresh=wait_for
query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.- Throws:
IOException
ElasticsearchException
- See Also:
-
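A minimal sketch, assuming an ElasticsearchClient named client and an illustrative index name:
// Make recent writes to the index visible to search immediately.
RefreshResponse response = client.indices().refresh(r -> r.index("my-index-000001"));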
reloadSearchAnalyzers
public ReloadSearchAnalyzersResponse reloadSearchAnalyzers(ReloadSearchAnalyzersRequest request) throws IOException, ElasticsearchException Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices. IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
You can use the reload search analyzers API to pick up changes to synonym files used in the
synonym_graph
or
synonym
token filter of a search analyzer. To be eligible, the token filter must have an
updateable
flag of
true
and only be used in search analyzers. NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster--including nodes that don't contain a shard replica--before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
- Throws:
IOException
ElasticsearchException
- See Also:
-
reloadSearchAnalyzers
public final ReloadSearchAnalyzersResponse reloadSearchAnalyzers(Function<ReloadSearchAnalyzersRequest.Builder, ObjectBuilder<ReloadSearchAnalyzersRequest>> fn) throws IOException, ElasticsearchException Reload search analyzers. Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices. IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
You can use the reload search analyzers API to pick up changes to synonym files used in the
synonym_graph
or
synonym
token filter of a search analyzer. To be eligible, the token filter must have an
updateable
flag of
true
and only be used in search analyzers. NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster--including nodes that don't contain a shard replica--before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
- Parameters:
fn
- a function that initializes a builder to create theReloadSearchAnalyzersRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
resolveCluster
public ResolveClusterResponse resolveCluster(ResolveClusterRequest request) throws IOException, ElasticsearchException Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected
to each remote cluster specified in the index expression. Note that this
endpoint actively attempts to contact the remote clusters, unlike the
remote/info
endpoint. - Whether each remote cluster is configured with
skip_unavailable
as
true
or
false
. - Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example,
GET /_resolve/cluster/my-index-*,cluster*:my-index-*
returns information about the local cluster and all remotely configured clusters that start with the alias
cluster*
. Each cluster returns information about whether it has any indices, aliases or data streams that match
my-index-*
.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression
dummy*
to those remote clusters. Thus, if any errors occur, you may see a reference to that index expression even though you didn't request it. If this causes a problem, you can instead include an index expression like
*:*
to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with
skip_unavailable=false
. Running a cross-cluster search under those conditions will cause the entire search to fail. - A cluster has no matching indices, aliases or data streams for the index
expression (or your user does not have permissions to search them). For
example, suppose your index expression is
logs*,remote1:logs*
and the remote1 cluster has no indices, aliases or data streams that match
logs*
. In that case, no results will be returned from that cluster if you include it in a cross-cluster search. - The index expression (combined with any query parameters you specify)
will likely cause an exception to be thrown when you do the search. In these
cases, the "error" field in the
_resolve/cluster
response will be present. (This is also where security/permission errors will be shown.) - A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The
remote/info
endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it. You can use the
_resolve/cluster
API to attempt to reconnect to remote clusters. For example, with
GET _resolve/cluster
or
GET _resolve/cluster/*:*
. The
connected
field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the
remote/info
endpoint to now indicate a connected status.- Throws:
IOException
ElasticsearchException
- See Also:
-
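A minimal sketch, assuming an ElasticsearchClient named client and that the request's name builder carries the same index expression used in the example above:
// Check connectivity and matching indices across remotes before a cross-cluster search.
ResolveClusterResponse response = client.indices().resolveCluster(r -> r
    .name("my-index-*", "cluster*:my-index-*"));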
resolveCluster
public final ResolveClusterResponse resolveCluster(Function<ResolveClusterRequest.Builder, ObjectBuilder<ResolveClusterRequest>> fn) throws IOException, ElasticsearchException Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases, or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster returns no results if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example, with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
- Parameters:
fn
- a function that initializes a builder to create the ResolveClusterRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
resolveCluster
Resolve the cluster. Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases, or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases, or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases, or data streams that match logs*. In that case, that cluster returns no results if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example, with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
- Throws:
IOException
ElasticsearchException
- See Also:
-
resolveIndex
public ResolveIndexResponse resolveIndex(ResolveIndexRequest request) throws IOException, ElasticsearchException Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
- Throws:
IOException
ElasticsearchException
- See Also:
-
resolveIndex
public final ResolveIndexResponse resolveIndex(Function<ResolveIndexRequest.Builder, ObjectBuilder<ResolveIndexRequest>> fn) throws IOException, ElasticsearchException
Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
- Parameters:
fn
- a function that initializes a builder to create the ResolveIndexRequest
- Throws:
IOException
ElasticsearchException
- See Also:
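A minimal usage sketch (assuming an ElasticsearchClient named client; the patterns are illustrative):
ResolveIndexResponse resp = client.indices()
    .resolveIndex(r -> r.name("my-index-*", "remote1:logs-*")); // local and remote patterns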
-
rollover
public RolloverResponse rollover(RolloverRequest request) throws IOException, ElasticsearchException Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
- Throws:
IOException
ElasticsearchException
- See Also:
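For instance, a minimal sketch of rolling over a data stream or alias (assuming an ElasticsearchClient named client; the target name is illustrative):
RolloverResponse resp = client.indices().rollover(r -> r
    .alias("my-data-stream") // rollover target: a data stream or index alias
);
boolean rolled = resp.rolledOver(); // whether a new index was created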
-
rollover
public final RolloverResponse rollover(Function<RolloverRequest.Builder, ObjectBuilder<RolloverRequest>> fn) throws IOException, ElasticsearchException
Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
- Parameters:
fn
- a function that initializes a builder to create the RolloverRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
segments
public SegmentsResponse segments(SegmentsRequest request) throws IOException, ElasticsearchException Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- Throws:
IOException
ElasticsearchException
- See Also:
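A brief usage sketch (assuming an ElasticsearchClient named client; the index name is illustrative):
SegmentsResponse resp = client.indices().segments(s -> s
    .index("my-index") // or a data stream name to inspect its backing indices
);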
-
segments
public final SegmentsResponse segments(Function<SegmentsRequest.Builder, ObjectBuilder<SegmentsRequest>> fn) throws IOException, ElasticsearchException
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- Parameters:
fn
- a function that initializes a builder to create the SegmentsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
segments
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.
- Throws:
IOException
ElasticsearchException
- See Also:
-
shardStores
public ShardStoresResponse shardStores(ShardStoresRequest request) throws IOException, ElasticsearchException Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- Throws:
IOException
ElasticsearchException
- See Also:
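A short usage sketch (assuming an ElasticsearchClient named client; the index name is illustrative):
ShardStoresResponse resp = client.indices().shardStores(s -> s
    .index("my-index") // store information for this index's replica shards
);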
-
shardStores
public final ShardStoresResponse shardStores(Function<ShardStoresRequest.Builder, ObjectBuilder<ShardStoresRequest>> fn) throws IOException, ElasticsearchException
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- Parameters:
fn
- a function that initializes a builder to create the ShardStoresRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
shardStores
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
- Throws:
IOException
ElasticsearchException
- See Also:
-
shrink
Shrink an index. Shrink an index into a new index with fewer primary shards.
Before you can shrink an index:
- The index must be read-only.
- A copy of every shard in the index must reside on the same node.
- The index must have a green health status.
To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
A shrink operation:
- Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
- Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also, if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
- Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.
IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:
- The target index must not exist.
- The source index must have more primary shards than the target index.
- The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
- The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
- The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
- Throws:
IOException
ElasticsearchException
- See Also:
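As a sketch, shrinking a prepared (read-only, co-located) source index might look like this (assuming an ElasticsearchClient named client; index names are illustrative):
ShrinkResponse resp = client.indices().shrink(s -> s
    .index("my-source-index")   // source index (must satisfy the requirements above)
    .target("my-target-index")  // target index (must not exist yet)
);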
-
shrink
public final ShrinkResponse shrink(Function<ShrinkRequest.Builder, ObjectBuilder<ShrinkRequest>> fn) throws IOException, ElasticsearchException
Shrink an index. Shrink an index into a new index with fewer primary shards.
Before you can shrink an index:
- The index must be read-only.
- A copy of every shard in the index must reside on the same node.
- The index must have a green health status.
To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
A shrink operation:
- Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
- Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also, if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk, since hard links do not work across disks.
- Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.
IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:
- The target index must not exist.
- The source index must have more primary shards than the target index.
- The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
- The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
- The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
- Parameters:
fn
- a function that initializes a builder to create the ShrinkRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
simulateIndexTemplate
public SimulateIndexTemplateResponse simulateIndexTemplate(SimulateIndexTemplateRequest request) throws IOException, ElasticsearchException Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
- Throws:
IOException
ElasticsearchException
- See Also:
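A usage sketch (assuming an ElasticsearchClient named client; the index name is illustrative):
SimulateIndexTemplateResponse resp = client.indices()
    .simulateIndexTemplate(s -> s.name("my-new-index")); // the index to simulate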
-
simulateIndexTemplate
public final SimulateIndexTemplateResponse simulateIndexTemplate(Function<SimulateIndexTemplateRequest.Builder, ObjectBuilder<SimulateIndexTemplateRequest>> fn) throws IOException, ElasticsearchException
Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
- Parameters:
fn
- a function that initializes a builder to create the SimulateIndexTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
simulateTemplate
public SimulateTemplateResponse simulateTemplate(SimulateTemplateRequest request) throws IOException, ElasticsearchException Simulate an index template. Get the index configuration that would be applied by a particular index template.
- Throws:
IOException
ElasticsearchException
- See Also:
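A usage sketch (assuming an ElasticsearchClient named client; the template name is illustrative):
SimulateTemplateResponse resp = client.indices()
    .simulateTemplate(s -> s.name("my-index-template")); // the index template to simulate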
-
simulateTemplate
public final SimulateTemplateResponse simulateTemplate(Function<SimulateTemplateRequest.Builder, ObjectBuilder<SimulateTemplateRequest>> fn) throws IOException, ElasticsearchException
Simulate an index template. Get the index configuration that would be applied by a particular index template.
- Parameters:
fn
- a function that initializes a builder to create the SimulateTemplateRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
simulateTemplate
Simulate an index template. Get the index configuration that would be applied by a particular index template.
- Throws:
IOException
ElasticsearchException
- See Also:
-
split
Split an index. Split an index into a new index with more primary shards.
Before you can split an index:
- The index must be read-only.
- The cluster health status must be green.
You can make an index read-only with the following request using the add index block API:
PUT /my_source_index/_block/write
The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
A split operation:
- Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
- Recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be split if they satisfy the following requirements:
- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
- Throws:
IOException
ElasticsearchException
- See Also:
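As a sketch, splitting the read-only source index from the example above into a target with more primary shards (assuming an ElasticsearchClient named client; names are illustrative):
SplitResponse resp = client.indices().split(s -> s
    .index("my_source_index")   // the read-only source index
    .target("my_target_index")  // the target index (must not exist yet)
);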
-
-
split
public final SplitResponse split(Function<SplitRequest.Builder, ObjectBuilder<SplitRequest>> fn) throws IOException, ElasticsearchException
Split an index. Split an index into a new index with more primary shards.
Before you can split an index:
- The index must be read-only.
- The cluster health status must be green.
You can make an index read-only with the following request using the add index block API:
PUT /my_source_index/_block/write
The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
A split operation:
- Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
- Recovers the target index as though it were a closed index which had just been re-opened.
IMPORTANT: Indices can only be split if they satisfy the following requirements:
- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
- Parameters:
fn
- a function that initializes a builder to create the SplitRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
-
stats
public IndicesStatsResponse stats(IndicesStatsRequest request) throws IOException, ElasticsearchException Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- Throws:
IOException
ElasticsearchException
- See Also:
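A usage sketch (assuming an ElasticsearchClient named client; the Level enum value used for shard-level statistics is an assumption here, as is the index name):
IndicesStatsResponse resp = client.indices().stats(s -> s
    .index("my-index")
    .level(Level.Shards) // request shard-level statistics
);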
-
stats
public final IndicesStatsResponse stats(Function<IndicesStatsRequest.Builder, ObjectBuilder<IndicesStatsRequest>> fn) throws IOException, ElasticsearchException
Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- Parameters:
fn
- a function that initializes a builder to create the IndicesStatsRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
stats
Get index statistics. For data streams, the API retrieves statistics for the stream's backing indices.
By default, the returned statistics are index-level with primaries and total aggregations. primaries are the values for only the primary shards. total are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level parameter to shards.
NOTE: When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
- Throws:
IOException
ElasticsearchException
- See Also:
-
updateAliases
public UpdateAliasesResponse updateAliases(UpdateAliasesRequest request) throws IOException, ElasticsearchException Create or update an alias. Adds a data stream or index to an alias.
- Throws:
IOException
ElasticsearchException
- See Also:
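A usage sketch adding an index to an alias via the actions builder (assuming an ElasticsearchClient named client; names are illustrative):
UpdateAliasesResponse resp = client.indices().updateAliases(u -> u
    .actions(a -> a.add(add -> add
        .index("my-index")
        .alias("my-alias")
    ))
);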
-
updateAliases
public final UpdateAliasesResponse updateAliases(Function<UpdateAliasesRequest.Builder, ObjectBuilder<UpdateAliasesRequest>> fn) throws IOException, ElasticsearchException
Create or update an alias. Adds a data stream or index to an alias.
- Parameters:
fn
- a function that initializes a builder to create the UpdateAliasesRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
updateAliases
Create or update an alias. Adds a data stream or index to an alias.
- Throws:
IOException
ElasticsearchException
- See Also:
-
validateQuery
public ValidateQueryResponse validateQuery(ValidateQueryRequest request) throws IOException, ElasticsearchException Validate a query. Validates a query without running it.
- Throws:
IOException
ElasticsearchException
- See Also:
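A usage sketch validating a match query without executing it (assuming an ElasticsearchClient named client; field and value are illustrative):
ValidateQueryResponse resp = client.indices().validateQuery(v -> v
    .index("my-index")
    .query(q -> q.match(m -> m.field("user.id").query("kimchy")))
);
boolean ok = resp.valid(); // true if the query parsed successfully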
-
validateQuery
public final ValidateQueryResponse validateQuery(Function<ValidateQueryRequest.Builder, ObjectBuilder<ValidateQueryRequest>> fn) throws IOException, ElasticsearchException
Validate a query. Validates a query without running it.
- Parameters:
fn
- a function that initializes a builder to create the ValidateQueryRequest
- Throws:
IOException
ElasticsearchException
- See Also:
-
validateQuery
Validate a query. Validates a query without running it.
- Throws:
IOException
ElasticsearchException
- See Also:
-