public interface DocumentStore
Modifier and Type | Method and Description |
---|---|
`long` | `copyIndex(Class<?> beanType, String newIndex)`<br>Copy the index to a new index. |
`long` | `copyIndex(Class<?> beanType, String newIndex, long sinceEpochMillis)`<br>Copy entries from an index to a new index, limited to documents that have been modified since the sinceEpochMillis time. |
`long` | `copyIndex(Query<?> query, String newIndex, int bulkBatchSize)`<br>Copy from a source index to a new index, taking only the documents matching the given query. |
`void` | `createIndex(String indexName, String alias)`<br>Create an index given a mapping file as a resource on the classpath (similar to DDL create table). |
`void` | `dropIndex(String indexName)`<br>Drop the index from the document store (similar to DDL drop table). |
`<T> T` | `find(DocQueryRequest<T> request)`<br>Return the bean by fetching its content from the document store. |
`<T> void` | `findEach(DocQueryRequest<T> query, java.util.function.Consumer<T> consumer)`<br>Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion. |
`void` | `findEach(String indexNameType, String rawQuery, java.util.function.Consumer<RawDoc> consumer)`<br>Find each, processing raw documents. |
`<T> void` | `findEachWhile(DocQueryRequest<T> query, java.util.function.Predicate<T> consumer)`<br>Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion. |
`void` | `findEachWhile(String indexNameType, String rawQuery, java.util.function.Predicate<RawDoc> consumer)`<br>Find each, processing raw documents and stopping when the predicate returns false. |
`<T> List<T>` | `findList(DocQueryRequest<T> request)`<br>Execute the find list query. |
`<T> PagedList<T>` | `findPagedList(DocQueryRequest<T> request)`<br>Execute the query against the document store returning the paged list. |
`void` | `indexAll(Class<?> beanType)`<br>Update the document store for all beans of this type. |
`<T> void` | `indexByQuery(Query<T> query)`<br>Update the associated document store using the result of the query. |
`<T> void` | `indexByQuery(Query<T> query, int bulkBatchSize)`<br>Update the associated document store index using the result of the query, additionally specifying a bulkBatchSize to use for sending the messages to ElasticSearch. |
`void` | `indexSettings(String indexName, Map<String,Object> settings)`<br>Modify the settings on an index. |
`long` | `process(List<DocStoreQueueEntry> queueEntries)`<br>Process the queue entries, sending updates to the document store or queuing them for later processing. |
<T> void indexByQuery(Query<T> query)
This will execute the query against the database, creating a document for each bean graph and sending it to the document store.
Note that the select and fetch paths of the query are set for you to match the
document structure needed based on @DocStore
and @DocStoreEmbedded,
so this query requires only the predicates.
This query will be executed using findEach, so it is safe to use a query that will fetch a lot of beans. The default bulkBatchSize is used.
query
- The query that selects objects to send to the document store.

<T> void indexByQuery(Query<T> query, int bulkBatchSize)
query
- The query that selects objects to send to the document store.
bulkBatchSize
- The batch size to use when bulk sending to the document store.

void indexAll(Class<?> beanType)
This is the same as indexByQuery where the query has no predicates and so fetches all rows.
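As a sketch of both forms (the Product entity, since value, and server variable are illustrative, assuming Product is mapped with @DocStore):

```java
DocumentStore documentStore = server.docStore();

// only predicates are needed; select/fetch paths are
// derived automatically from @DocStore / @DocStoreEmbedded
Query<Product> query = server.find(Product.class)
  .where()
    .gt("whenModified", new Timestamp(since))
  .query();

// send matching documents to the doc store, 1000 per bulk request
documentStore.indexByQuery(query, 1000);

// or re-index every row of the type
documentStore.indexAll(Product.class);
```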
@Nullable <T> T find(DocQueryRequest<T> request)
Typically this is called indirectly by findOne() on the query.
Customer customer =
server.find(Customer.class)
.setUseDocStore(true)
.setId(42)
.findOne();
<T> List<T> findList(DocQueryRequest<T> request)
Typically this is called indirectly by findList() on the query that has setUseDocStore(true).
List<Customer> newCustomers =
server.find(Customer.class)
.setUseDocStore(true)
.where().eq("status", Customer.Status.NEW)
.findList();
<T> PagedList<T> findPagedList(DocQueryRequest<T> request)
The query should have firstRow
or maxRows
set prior to calling this method.
Typically this is called indirectly by findPagedList() on the query that has setUseDocStore(true).
PagedList<Customer> newCustomers =
server.find(Customer.class)
.setUseDocStore(true)
.where().eq("status", Customer.Status.NEW)
.setMaxRows(50)
.findPagedList();
<T> void findEach(DocQueryRequest<T> query, java.util.function.Consumer<T> consumer)
For example, with the ElasticSearch doc store this uses SCROLL.
Typically this is called indirectly by findEach() on the query that has setUseDocStore(true).
server.find(Order.class)
.setUseDocStore(true)
.where()... // perhaps add predicates
.findEach((Order order) -> {
// process the bean ...
});
<T> void findEachWhile(DocQueryRequest<T> query, java.util.function.Predicate<T> consumer)
Unlike findEach() this provides the opportunity to stop iterating through the large query.
For example, with the ElasticSearch doc store this uses SCROLL.
Typically this is called indirectly by findEachWhile() on the query that has setUseDocStore(true).
server.find(Order.class)
.setUseDocStore(true)
.where()... // perhaps add predicates
.findEachWhile(new Predicate<Order>() {
@Override
public boolean test(Order bean) {
// process the bean ...
// return true to continue, false to stop
return true;
}
});
void findEach(String indexNameType, String rawQuery, java.util.function.Consumer<RawDoc> consumer)
indexNameType
- The full index name and type
rawQuery
- The query to execute
consumer
- Consumer to process each document

void findEachWhile(String indexNameType, String rawQuery, java.util.function.Predicate<RawDoc> consumer)
indexNameType
- The full index name and type
rawQuery
- The query to execute
consumer
- Consumer to process each document until false is returned

long process(List<DocStoreQueueEntry> queueEntries) throws IOException
Throws:
IOException
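The raw-document variants above can be used like this (a sketch; the index name and query JSON are illustrative and assume an ElasticSearch-style document store):

```java
DocumentStore documentStore = server.docStore();

// match-all query against the "product" index, processing each raw document
String rawQuery = "{\"query\":{\"match_all\":{}}}";

documentStore.findEach("product", rawQuery, rawDoc -> {
  // process each RawDoc here
});

// or stop early by returning false from a predicate
documentStore.findEachWhile("product", rawQuery, rawDoc -> {
  // process the RawDoc ...
  // return true to continue, false to stop
  return true;
});
```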
void dropIndex(String indexName)
DocumentStore documentStore = server.docStore();
documentStore.dropIndex("product_copy");
void createIndex(String indexName, String alias)
DocumentStore documentStore = server.docStore();
// uses product_copy.mapping.json resource
// ... to define mappings for the index
documentStore.createIndex("product_copy", null);
indexName
- the name of the new index
alias
- the alias of the index

void indexSettings(String indexName, Map<String,Object> settings)
For example, this can be used to set the ElasticSearch refresh_interval on an index before a bulk update.
// refresh_interval -1 ... disable refresh while bulk loading
Map<String,Object> settings = new LinkedHashMap<>();
settings.put("refresh_interval", "-1");
documentStore.indexSettings("product", settings);
// refresh_interval 1s ... restore after bulk loading
Map<String,Object> settings = new LinkedHashMap<>();
settings.put("refresh_interval", "1s");
documentStore.indexSettings("product", settings);
indexName
- the name of the index to update settings on
settings
- the settings to set on the index

long copyIndex(Class<?> beanType, String newIndex)
This copy process does not use the database but instead will copy from the source index to a destination index.
long copyCount = documentStore.copyIndex(Product.class, "product_copy");
beanType
- The bean type of the source index
newIndex
- The name of the index to copy to

long copyIndex(Class<?> beanType, String newIndex, long sinceEpochMillis)
To support this the document needs to have a @WhenModified
property.
long copyCount = documentStore.copyIndex(Product.class, "product_copy", sinceMillis);
beanType
- The bean type of the source index
newIndex
- The name of the index to copy to
sinceEpochMillis
- Only copy documents modified since this time

long copyIndex(Query<?> query, String newIndex, int bulkBatchSize)
// predicates to select the source documents to copy
Query<Product> query = server.find(Product.class)
.where()
.ge("whenModified", new Timestamp(since))
.ge("name", "A")
.lt("name", "D")
.query();
// copy from the source index to "product_copy" index
long copyCount = documentStore.copyIndex(query, "product_copy", 1000);
query
- The query to select the source documents to copy
newIndex
- The target index to copy the documents to
bulkBatchSize
- The ElasticSearch bulk batch size; if 0, the default is used.

Copyright © 2018. All rights reserved.