Package org.apache.lucene.search
Table Of Contents
- Search Basics
- The Query Classes
- Scoring: Introduction
- Scoring: Basics
- Changing the Scoring
- Appendix: Search Algorithm
Search Basics
Lucene offers a wide variety of Query
implementations, most
of which are in this package or the queries
module. These implementations can be combined in a wide variety of ways to provide complex
querying capabilities along with information about where matches took place in the document
collection. The Query Classes section below highlights some of the more
important Query classes. For details on implementing your own Query class, see Custom Queries -- Expert Level below.
To perform a search, applications usually call IndexSearcher.search(Query,int).
Once a Query has been created and submitted to the IndexSearcher, the scoring process begins. After some infrastructure setup, control finally passes to the Weight implementation and its Scorer or BulkScorer instances. See the Algorithm section for more notes on the process.
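For instance, a minimal search might look like the following sketch (the index path and the "body" field are illustrative placeholders, not part of any fixed API):

Directory dir = FSDirectory.open(Paths.get("/path/to/index"));
try (IndexReader reader = DirectoryReader.open(dir)) {
  IndexSearcher searcher = new IndexSearcher(reader);
  Query query = new TermQuery(new Term("body", "lucene"));
  TopDocs topDocs = searcher.search(query, 10); // return the top 10 hits
  for (ScoreDoc hit : topDocs.scoreDocs) {
    System.out.println(hit.doc + " scored " + hit.score);
  }
}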
Query Classes
TermQuery
Of the various implementations of Query, the TermQuery is the easiest to understand and the most often used in applications. A TermQuery matches all the documents that contain the specified Term, which is a word that occurs in a certain Field. Thus, a TermQuery identifies and scores all Documents that have a Field with the specified string in it. Constructing a TermQuery is as simple as:

TermQuery tq = new TermQuery(new Term("fieldName", "term"));

In this example, the Query identifies all Documents that have the Field named "fieldName" containing the word "term".
BooleanQuery
Things start to get interesting when one combines multiple TermQuery instances into a BooleanQuery. A BooleanQuery contains multiple BooleanClauses, where each clause contains a sub-query (Query instance) and an operator (from BooleanClause.Occur) describing how that sub-query is combined with the other clauses:
- SHOULD — Use this operator when a clause can occur in the result set, but is not required. If a query is made up of all SHOULD clauses, then every document in the result set matches at least one of these clauses.
- MUST — Use this operator when a clause is required to occur in the result set and should contribute to the score. Every document in the result set will match all such clauses.
- FILTER — Use this operator when a clause is required to occur in the result set but should not contribute to the score. Every document in the result set will match all such clauses.
- MUST NOT — Use this operator when a clause must not occur in the result set. No document in the result set will match any such clauses.
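For example, the following sketch (the "body" field and terms are illustrative placeholders) combines three TermQuerys with different operators:

Query apache = new TermQuery(new Term("body", "apache"));
Query lucene = new TermQuery(new Term("body", "lucene"));
Query spam = new TermQuery(new Term("body", "spam"));
BooleanQuery bq = new BooleanQuery.Builder()
    .add(apache, BooleanClause.Occur.MUST)     // must match and contributes to the score
    .add(lucene, BooleanClause.Occur.SHOULD)   // optional, but boosts the score when it matches
    .add(spam, BooleanClause.Occur.MUST_NOT)   // must not match
    .build();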
A BooleanQuery is built from BooleanClause instances. If too many clauses are added, a TooManyClauses exception will be thrown during searching. This most often occurs when a Query is rewritten into a BooleanQuery with many TermQuery clauses, for example by WildcardQuery. The default setting for the maximum number of clauses is 1024, but this can be changed via the static method IndexSearcher.setMaxClauseCount(int).
Phrases
Another common search is to find documents containing certain phrases. This is handled in different ways:
- PhraseQuery — Matches a sequence of Terms. PhraseQuery uses a slop factor to determine how many positions may occur between any two terms in the phrase and still be considered a match. The slop is 0 by default, meaning the phrase must match exactly.
- MultiPhraseQuery — A more general form of PhraseQuery that accepts multiple Terms for a position in the phrase. For example, this can be used to perform phrase queries that also incorporate synonyms.
- Interval queries in the Queries module
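As a sketch (the "body" field is an illustrative placeholder), the following phrase query matches documents where "apache" and "lucene" appear within one position of each other:

PhraseQuery pq = new PhraseQuery.Builder()
    .add(new Term("body", "apache"))
    .add(new Term("body", "lucene"))
    .setSlop(1)
    .build();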
PointRangeQuery
The PointRangeQuery matches all documents whose values fall within a numeric range. For PointRangeQuery to work, you must index the values using one of the numeric point fields (IntPoint, LongPoint, FloatPoint, or DoublePoint).
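For example, assuming a hypothetical "year" field indexed with IntPoint at index time, a range query over it is built through the field's factory method:

Query range = IntPoint.newRangeQuery("year", 2000, 2010); // inclusive bounds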
PrefixQuery, WildcardQuery, RegexpQuery
While the PrefixQuery has a different implementation, it is essentially a special case of the WildcardQuery. The PrefixQuery allows an application to identify all documents with terms that begin with a certain string. The WildcardQuery generalizes this by allowing for the use of * (matches 0 or more characters) and ? (matches exactly one character) wildcards. Note that the WildcardQuery can be quite slow. Also note that WildcardQuery terms should not start with * or ?, as these are extremely slow. Some QueryParsers may not allow this by default, but provide a setAllowLeadingWildcard method to remove that protection. The RegexpQuery is even more general than WildcardQuery, allowing an application to identify all documents with terms that match a regular expression pattern.
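For example (the "title" field is an illustrative placeholder), all three of the following match terms that begin with "luc":

Query prefix = new PrefixQuery(new Term("title", "luc"));
Query wildcard = new WildcardQuery(new Term("title", "luc*"));
Query regexp = new RegexpQuery(new Term("title", "luc.*"));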
FuzzyQuery
A FuzzyQuery
matches documents that contain terms
similar to the specified term. Similarity is determined using Levenshtein distance. This type of
query can be useful when accounting for spelling variations in the collection.
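For instance, assuming a hypothetical "body" field, the following matches terms within two edits of "lucene":

Query fuzzy = new FuzzyQuery(new Term("body", "lucene"), 2);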
Scoring — Introduction
Lucene scoring is the heart of why we all love Lucene. It is blazingly fast and it hides
almost all of the complexity from the user. In a nutshell, it works. At least, that is, until it
doesn't work, or doesn't work as one would expect it to work. Then we are left digging into
Lucene internals or asking for help on java-user@lucene.apache.org
to figure out why
a document with five of our query terms scores lower than a different document with only one of
the query terms.
While this document won't answer your specific scoring issues, it will, hopefully, point you to the places that can help you figure out the what and why of Lucene scoring.
Lucene scoring supports a number of pluggable information retrieval models, including:
- Vector Space Model (VSM)
- Probabilistic models such as Okapi BM25 and DFR
- Language models
These models can be plugged in via the Similarity API, and offer extension hooks and parameters for tuning. In general, Lucene first finds the documents that need to be scored based on boolean logic in the Query specification, and then ranks this subset of matching documents via the retrieval model. For some valuable references on VSM and IR in general, refer to the Lucene Wiki IR references.
The rest of this document will cover Scoring basics and explain how to change your Similarity. Next, it will cover ways you can customize the Lucene internals in Custom Queries -- Expert Level, which gives details on implementing your own Query class and related functionality. Finally, we will finish up with some reference material in the Appendix.
Scoring — Basics
Scoring is very much dependent on the way documents are indexed, so it is important to understand indexing (see the Lucene overview before continuing with this section). Be sure to use IndexSearcher.explain(Query, doc) to understand how the score for a certain matching document was computed.
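For example, a sketch of asking for an explanation of the top hit (query and searcher as in the earlier examples):

TopDocs top = searcher.search(query, 1);
Explanation explanation = searcher.explain(query, top.scoreDocs[0].doc);
System.out.println(explanation); // breaks the score down into its components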
Generally, the Query determines which documents match (a binary decision), while the Similarity determines how to assign scores to the matching documents.
Fields and Documents
In Lucene, the objects we are scoring are Documents. A Document is a collection of Fields. Each Field has semantics about how it is created and stored (tokenized, stored, etc). It is important to note that Lucene scoring works on Fields and then combines the results to return Documents. This is important because two Documents with the exact same content, but one having the content in two Fields and the other in one Field, may return different scores for the same query due to length normalization.
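A sketch of this situation (field names and text are illustrative): both documents below index the same words, but they are split across fields differently, so a query against "body" sees different field lengths.

Document one = new Document();
one.add(new TextField("body", "lucene in action lucene for search", Field.Store.NO));

Document two = new Document();
two.add(new TextField("title", "lucene in action", Field.Store.NO));
two.add(new TextField("body", "lucene for search", Field.Store.NO));
// A TermQuery on "body" sees a shorter field in the second document, so length
// normalization can produce different scores even though the overall content is the same.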
Score Boosting
Lucene allows influencing the score contribution of various parts of the query by wrapping them with BoostQuery.
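For example (field names are illustrative), the following makes matches on "title" count twice as much as matches on "body":

Query title = new BoostQuery(new TermQuery(new Term("title", "lucene")), 2.0f);
Query body = new TermQuery(new Term("body", "lucene"));
Query boosted = new BooleanQuery.Builder()
    .add(title, BooleanClause.Occur.SHOULD)
    .add(body, BooleanClause.Occur.SHOULD)
    .build();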
Changing Scoring — Similarity
Changing the scoring formula
Changing Similarity is an easy way to influence scoring. This is done at index-time with IndexWriterConfig.setSimilarity(Similarity) and at query-time with IndexSearcher.setSimilarity(Similarity). Be sure to use the same Similarity at query-time as at index-time (so that norms are encoded/decoded correctly); Lucene makes no effort to verify this.
You can influence scoring by configuring a different built-in Similarity implementation, by tweaking its parameters, or by subclassing it to override behavior. Some implementations also offer a modular API which you can extend by plugging in a different component (e.g. a term frequency normalizer).
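As a sketch, the built-in BM25Similarity can be configured with non-default parameters and registered at both index-time and query-time (the analyzer and reader are assumed to exist elsewhere):

Similarity similarity = new BM25Similarity(1.2f, 0.5f); // k1 = 1.2, b = 0.5

IndexWriterConfig config = new IndexWriterConfig(analyzer);
config.setSimilarity(similarity);   // index-time, so norms are encoded consistently

IndexSearcher searcher = new IndexSearcher(reader);
searcher.setSimilarity(similarity); // query-time, so scores use the same model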
Finally, you can extend the low-level Similarity directly to implement a new retrieval model. See the org.apache.lucene.search.similarities package documentation for information on the built-in scoring models available and on extending or changing Similarity.
Integrating field values into the score
While similarities help score a document relative to a query, it is also common for documents to hold features that measure the quality of a match. Such features are best integrated into the score by indexing a FeatureField with the document at index-time, and then combining the similarity score and the feature score using a linear combination. For instance, the query below matches the same documents as originalQuery and computes scores as similarityScore + 0.7 * featureScore:
Query originalQuery = new BooleanQuery.Builder()
    .add(new TermQuery(new Term("body", "apache")), Occur.SHOULD)
    .add(new TermQuery(new Term("body", "lucene")), Occur.SHOULD)
    .build();
Query featureQuery = FeatureField.newSaturationQuery("features", "pagerank");
Query query = new BooleanQuery.Builder()
    .add(originalQuery, Occur.MUST)
    .add(new BoostQuery(featureQuery, 0.7f), Occur.SHOULD)
    .build();
A less efficient yet more flexible way of modifying scores is to index scoring features into doc-value fields and then combine them with the similarity score using a FunctionScoreQuery from the queries module. For instance, the example below shows how to compute scores as similarityScore * Math.log(popularity) using the expressions module, assuming that values for the popularity field have been set in a NumericDocValuesField at index time:
// compile an expression:
Expression expr = JavascriptCompiler.compile("_score * ln(popularity)");

// SimpleBindings just maps variables to DoubleValuesSource instances
SimpleBindings bindings = new SimpleBindings();
bindings.add("_score", DoubleValuesSource.SCORES);
bindings.add("popularity", DoubleValuesSource.fromIntField("popularity"));

// create a query that matches based on 'originalQuery' but
// scores using expr
Query query = new FunctionScoreQuery(
    originalQuery,
    expr.getDoubleValuesSource(bindings));
Custom Queries — Expert Level
Custom queries are an expert level task, so tread carefully and be prepared to share your code if you want help.
With the warning out of the way, it is possible to change a lot more than just the Similarity when it comes to matching and scoring in Lucene. Lucene's search is a complex mechanism that is grounded by a few main classes:
- Query — The abstract object representation of the user's information need.
- Weight — A specialization of a Query for a given index. This typically associates a Query object with index statistics that are later used to compute document scores.
- Scorer — The core class of the scoring process: for a given segment, scorers return iterators over matches and give a way to compute the score of these matches.
- BulkScorer — An abstract class that scores a range of documents. A default implementation simply iterates through the hits from Scorer, but some queries such as BooleanQuery have more efficient implementations.
The Query Class
In some sense, the Query class is where it all begins. Without a Query, there would be nothing to score. Furthermore, the Query class is the catalyst for the other scoring classes as it is often responsible for creating them or coordinating the functionality between them. The Query class has several methods that are important for derived classes:
- createWeight(IndexSearcher searcher, ScoreMode scoreMode, float boost) — A Weight is the internal representation of the Query, so each Query implementation must provide an implementation of Weight. See the subsection on The Weight Interface below for details on implementing the Weight interface.
- rewrite(IndexSearcher searcher) — Rewrites queries into primitive queries. Primitive queries are: TermQuery, BooleanQuery, and other queries that implement createWeight(IndexSearcher searcher, ScoreMode scoreMode, float boost).
The Weight Interface
The Weight interface provides an internal representation of the Query so that it can be reused. Any IndexSearcher-dependent state should be stored in the Weight implementation, not in the Query class. The interface defines the following main methods:
- scorer() — Construct a new Scorer for this Weight. See The Scorer Class below for help defining a Scorer. As the name implies, the Scorer is responsible for doing the actual scoring of documents given the Query.
- explain(LeafReaderContext context, int doc) — Provide a means for explaining why a given document was scored the way it was. Typically a weight such as TermWeight that scores via a Similarity will make use of the Similarity's implementation: SimScorer#explain(Explanation freq, long norm).
- matches(LeafReaderContext context, int doc) — Give information about positions and offsets of matches. This is typically useful to implement highlighting.
The Scorer Class
The Scorer abstract class provides common scoring functionality for all Scorer implementations and is the heart of the Lucene scoring process. The Scorer defines the following methods which must be implemented:
- iterator() — Return a DocIdSetIterator that can iterate over all documents that match this Query.
- docID() — Returns the id of the Document that contains the match.
- score() — Return the score of the current document. This value can be determined in any appropriate way for an application. For instance, the TermScorer simply defers to the configured Similarity: SimScorer.score(float freq, long norm).
- getChildren() — Returns any child subscorers underneath this scorer. This allows for users to navigate the scorer hierarchy and receive more fine-grained details on the scoring process.
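To make the relationship between Query, Weight and Scorer concrete, the following sketch walks the matches of each segment by hand; this is purely illustrative (most applications simply call IndexSearcher.search), and assumes a searcher and query as in the earlier examples:

Query rewritten = searcher.rewrite(query);
Weight weight = searcher.createWeight(rewritten, ScoreMode.COMPLETE, 1f);
for (LeafReaderContext context : searcher.getIndexReader().leaves()) {
  Scorer scorer = weight.scorer(context);
  if (scorer == null) {
    continue; // no matches in this segment
  }
  DocIdSetIterator it = scorer.iterator();
  for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
    float score = scorer.score(); // per-segment doc id 'doc' and its score
  }
}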
The BulkScorer Class
The BulkScorer scores a range of documents. There is only one abstract method:
- score(LeafCollector,Bits,int,int) — Score all documents up to but not including the specified max document.
Why would I want to add my own Query?
In a nutshell, you want to add your own custom Query implementation when you think that Lucene's existing implementations aren't appropriate for the task that you want to do. You might be doing some cutting-edge research, or you may need more information back out of Lucene (similar to Doug adding SpanQuery functionality).
Appendix: Search Algorithm
This section is mostly notes on stepping through the Scoring process and serves as fertilizer for the earlier sections.
In the typical search application, a Query is passed to the IndexSearcher, beginning the scoring process. Once inside the IndexSearcher, a Collector is used for the scoring and sorting of the search results. These important objects are involved in a search:
- The Weight object of the Query. The Weight object is an internal representation of the Query that allows the Query to be reused by the IndexSearcher.
- The IndexSearcher that initiated the call.
- A Sort object for specifying how to sort the results if the standard score-based sort method is not desired.
Assuming we are not sorting (since sorting doesn't affect the raw Lucene score), we call one of the search methods of the IndexSearcher, passing in the Weight object created by IndexSearcher.createWeight(Query,ScoreMode,float) and the number of results we want. This method returns a TopDocs object, which is an internal collection of search results. The IndexSearcher creates a TopScoreDocCollector and passes it along with the Weight to another expert search method (for more on the Collector mechanism, see IndexSearcher). The TopScoreDocCollector uses a PriorityQueue to collect the top results for the search.
At last, we are actually going to score some documents. The score method takes in the Collector (most likely the TopScoreDocCollector or TopFieldCollector) and does its business. Of course, here is where things get involved. The Scorer that is returned by the Weight object depends on what type of Query was submitted. In most real-world applications with multiple query terms, the Scorer is going to be a BooleanScorer2 created from BooleanWeight (see the section on custom queries for info on changing this).
Assuming a BooleanScorer2, we get an internal Scorer based on the required, optional and prohibited parts of the query. Using this internal Scorer, the BooleanScorer2 then proceeds into a while loop based on the DocIdSetIterator.nextDoc() method. The nextDoc() method advances to the next document matching the query. This is an abstract method in the DocIdSetIterator class and is thus overridden by all derived implementations. If you have a simple OR query, your internal Scorer is most likely a DisjunctionSumScorer, which essentially combines the scorers from the sub-scorers of the OR'd terms.