FilteringTokenFilter.incrementToken()
.
AttributeSource
shall be stored
in the sink.
SpanQuery
.
true
if this collector does not
require the matching docIDs to be delivered in int sort
order (smallest to largest) to Collector.collect(int)
.
IndexSearcher
, if it
has not yet been closed
FilteredTermEnum.setEnum(org.apache.lucene.index.TermEnum)
BytesRef
BytesRef
with a pre-calculated hash code.
IndexWriter.getAnalyzer()
.
CompoundFileWriter.addFile(String)
, only for files that are found in an
external Directory
.
IndexWriter.addIndexes(Directory...)
instead
IndexReader.ReaderClosedListener
.
TeeSinkTokenFilter.SinkTokenStream
created by another TeeSinkTokenFilter
to this one.
RamUsageEstimator.NUM_BYTES_OBJECT_ALIGNMENT
.
TermVectorEntry
.
String
to this character sequence.
StringBuilder
to this character sequence.
CharTermAttribute
to this character sequence.
BytesRef
List
view.
Set
view.
AttributeSource
.AttributeSource
or AttributeImpl
.AttributeImpl
s,
and methods to add and get them.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY
.
AttributeSource.AttributeFactory
for creating new Attribute
instances.
AttributeImpl
s.CharFilter
.n
bits.
name
in Directory
d
, as written by the BitVector.write(org.apache.lucene.store.Directory, java.lang.String)
method.
BooleanQuery.getMaxClauseCount()
clauses.char[]
buffer size)
for encoding int
values.
char[]
buffer size)
for encoding long
values.
IndexInput
.IndexOutput
.ByteBlockPool.Allocator
that never recycles.ByteBlockPool.Allocator
that never recycles, but
tracks how much total RAM is in use.FieldCache
using getBytes()
and makes those values
available as other numeric types, casting as needed.Outputs
implementation where each output
is a sequence of bytes.BytesRef.EMPTY_BYTES
capacity
.
BytesRefHash
is a special purpose hash-map like data-structure
optimized for BytesRef
instances.BytesRefHash
with a ByteBlockPool
using a
ByteBlockPool.DirectAllocator
.
BytesRefHash
BytesRefHash
BytesRefHash.BytesStartArray
that tracks
memory allocation using a private AtomicLong
instance.BytesRef
exceeds the BytesRefHash
limit of
ByteBlockPool.BYTE_BLOCK_SIZE
-2.BytesRefHash.BytesStartArray
that tracks all
memory allocation using a shared AtomicLong
instance.BytesRef
iteration.ByteBlockPool
for the given ord
AtomicLong
reference holding the number of bytes used by this
BytesRefHash.BytesStartArray
.
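The entries above describe BytesRefHash as a hash-map-like structure keyed by BytesRef values, backed by a ByteBlockPool. A minimal usage sketch follows; it is illustrative only, the class name is hypothetical, and the return-value convention of add() (a non-negative ord for a new entry, a negative value for a duplicate) is an assumption rather than a statement from this index.

    import org.apache.lucene.util.BytesRef;
    import org.apache.lucene.util.BytesRefHash;

    public class BytesRefHashExample {
      public static void main(String[] args) {
        // Default constructor: ByteBlockPool with a DirectAllocator, per the entry above.
        BytesRefHash hash = new BytesRefHash();
        int first  = hash.add(new BytesRef("lucene"));  // new entry (assumed: ord >= 0)
        int second = hash.add(new BytesRef("lucene"));  // duplicate (assumed: negative)
        System.out.println("ords: " + first + ", " + second + "; size=" + hash.size());
      }
    }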
FieldCacheSource
, already knowing that cache and field are equal.
FieldCacheSource
, without the hash-codes of the field
and the cache (those are taken care of elsewhere).
CachingWrapperFilter.DeletesMode.RECACHE
.
CharacterUtils
provides a unified interface to Character-related
operations to implement backwards compatible character operations based on a
Version
instance.CharacterUtils.fill(CharacterBuffer, Reader)
.CharArraySet.CharArraySet(Version, int, boolean)
instead
CharArraySet.CharArraySet(Version, Collection, boolean)
instead
char[]
instances.Charset
instance to prevent repeated
Charset.forName(String)
lookups
CharsRef
initialized with an empty zero-length array
CharsRef
initialized with an array of the given
capacity
CharsRef
initialized with the given array, offset and
length
CharsRef
initialized with the given String's character
array
CharStream.correctOffset(int)
functionality over Reader
.CharTokenizer
instance
CharTokenizer
instance
CharTokenizer
instance
CharTokenizer.CharTokenizer(Version, Reader)
instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource, Reader)
instead. This will be
removed in Lucene 4.0.
CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be
removed in Lucene 4.0.
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.Status
instance detailing
the state of the index.
CheckIndex.checkIndex()
detailing the health and status of the index.IllegalStateException
if it is.
ClassicTokenizer
with ClassicFilter
, LowerCaseFilter
and StopFilter
, using a list of
English stop words.ClassicAnalyzer.STOP_WORDS_SET
).
ClassicAnalyzer.ClassicAnalyzer(Version, Reader)
instead.
ClassicTokenizer
.ClassicTokenizer
.
AttributeSource
.
AttributeSource.AttributeFactory
bit
to zero.
BytesRefHash.BytesStartArray
and returns the cleared instance.
BytesRef
which maps to the given BytesRef
AttributeImpl.clear()
on each Attribute implementation.
IndexReader.clone()
instead.
MultiReader.clone()
instead.
ParallelReader.clone()
instead.
AttributeImpl
instances returned in a new
AttributeSource
instance.
acquiring
.
SearcherLifetimeManager.release(org.apache.lucene.search.IndexSearcher)
after they are
done.
CharSequence
.
KeywordTokenizer
with CollationKeyFilter
.CollationKey
, and then
encodes the CollationKey with IndexableBinaryStringTools
, to allow
it to be stored as an index term.Collector.collect(int)
on the decorated Collector
unless the allowed time has passed, in which case it throws an exception.
TwoPhaseCommit.commit()
, but takes additional commit data to be included
with the commit.
i
and j
of your data.
NoMergePolicy
which indicates the index uses compound
files.
state.getBoost()*lengthNorm(numTerms)
, where
numTerms
is FieldInvertState.getLength()
if DefaultSimilarity.setDiscountOverlaps(boolean)
is false, else it's FieldInvertState.getLength()
- FieldInvertState.getNumOverlap()
.
FieldInvertState
).
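A minimal sketch of the norm computation described above: the boost is multiplied by lengthNorm of the term count, optionally discounting overlapping tokens. The NormSketch class and its 1/sqrt lengthNorm helper are for illustration only, not the shipped DefaultSimilarity source.

    import org.apache.lucene.index.FieldInvertState;

    public class NormSketch {
      // Illustrative lengthNorm; the real value depends on the Similarity in use.
      static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
      }

      static float computeNorm(FieldInvertState state, boolean discountOverlaps) {
        // numTerms per the description: getLength(), minus getNumOverlap() when discounting.
        final int numTerms = discountOverlaps
            ? state.getLength() - state.getNumOverlap()
            : state.getLength();
        return state.getBoost() * lengthNorm(numTerms);
      }
    }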
MergeScheduler
that runs each merge using a
separate thread.MultiTermQuery.ConstantScoreAutoRewrite
, with ConstantScoreAutoRewrite.setTermCountCutoff(int)
set to
ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF
and ConstantScoreAutoRewrite.setDocCountPercent(double)
set to
ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT
.
MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE
except
scores are not computed.
ScoringRewrite.SCORING_BOOLEAN_QUERY_REWRITE
except
scores are not computed.
TeeSinkTokenFilter
passes all tokens to the added sinks
when it is itself consumed.
len
chars of text
starting at off
are in the set
CharSequence
is in the set
len
chars of text
starting at off
are in the CharArrayMap.keySet
CharSequence
is in the CharArrayMap.keySet
overlap / maxOverlap
.
CharArrayMap
.
CharArraySet.copy(Version, Set)
instead.
CharArraySet
.
Directory
to under the new
file name dest.
Directory.copy(Directory, String, String)
for every file that
needs copying. You can use the following code:
    IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
    for (String file : src.listAll()) {
      if (filter.accept(null, file)) {
        src.copy(dest, file, file);
      }
    }
BytesRef
at the current positions (
ByteBlockPool.byteUpto
across buffer boundaries
numBytes
bytes to the given IndexOutput
.
BytesRef
CharsRef
referenced content into this instance.
AttributeSource
to the given target AttributeSource
.
CachingCollector
which does not wrap another collector.
CachingCollector
that wraps the given collector and
caches documents and scores up to the specified RAM threshold.
CachingCollector
that wraps the given collector and
caches documents and scores up to the specified max docs threshold.
TopFieldCollector
from the given
arguments.
TopScoreDocCollector
given the number of hits to
collect and whether documents are scored in order by the input
Scorer
to TopScoreDocCollector.setScorer(Scorer)
.
TopScoreDocCollector
given the number of hits to
collect, the bottom of the previous page, and whether documents are scored in order by the input
Scorer
to TopScoreDocCollector.setScorer(Scorer)
.
AttributeImpl
for the supplied Attribute
interface class.
ReusableAnalyzerBase.TokenStreamComponents
instance for this analyzer.
ReusableAnalyzerBase.TokenStreamComponents
used to tokenize all the text in the provided Reader
.
Query
.
Query
.
Weight
implementations.
Subclasses of Searcher should use Searcher.createNormalizedWeight(org.apache.lucene.search.Query)
, instead.
ValueSourceQuery
scores.
CustomScoreQuery.getCustomScoreProvider(org.apache.lucene.index.IndexReader)
, if you want
to modify the custom score calculation of a CustomScoreQuery
.IndexReader
.
ValueSourceQuery
.
ValueSourceQuery
.
DateTools
or
NumericField
instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.decodeNormValue(byte)
instead.
other
other
other
AttributeImpl
s using the
class name of the supplied Attribute
interface class by appending Impl
to it.
Byte.toString(byte)
Double.toString(double)
Float.toString(float)
Integer.toString(int)
Long.toString(long)
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS
instead
IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS
instead.
IndexWriterConfig
IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB
instead.
IndexWriterConfig.setReaderPooling(boolean)
.
Short.toString(short)
IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL
instead.
IndexWriter.deleteDocuments(Term)
instead
IndexWriter.deleteDocuments(Term)
instead
term
.
IndexWriterConfig.DISABLE_AUTO_FLUSH
instead
i
.
Document
at the n
th position.
t
.
Searchable
's docFreq() in its own thread and waits for each search to complete, then merges
the results back together.
term
.
term
.
DocIdSetIterator.NO_MORE_DOCS
if DocIdSetIterator.nextDoc()
or
DocIdSetIterator.advance(int)
were not called yet.
n
th
Document
in this index.
Document
at the n
th position.
IndexWriter.deleteDocuments(Term)
instead
IndexWriter.merge(org.apache.lucene.index.MergePolicy.OneMerge)
null
.
IndexReader.doOpenIfChanged()
instead
null
.
null
.
MultiReader.doOpenIfChanged()
instead.
ParallelReader.doOpenIfChanged()
instead.
double
value to a sortable signed long
.
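An illustrative sketch of the usual IEEE-754 bit trick behind such a conversion (the library method is NumericUtils.doubleToSortableLong); the class name is hypothetical and this is not presented as the library source.

    public class SortableDoubleSketch {
      public static long doubleToSortableLong(double val) {
        long bits = Double.doubleToLongBits(val);
        if (bits < 0) {
          // Negative doubles: flip all bits except the sign so that the
          // signed long ordering matches the natural double ordering.
          bits ^= 0x7fffffffffffffffL;
        }
        return bits;
      }
    }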
ReentrantLock
to disable lockingUAX29URLEmailTokenizer.TOKEN_TYPES
instead
DocIdSet
instance for easy use, e.g.
TermPositionVector
that stores only position information.
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0
Similarity.encodeNormValue(float)
instead.
end()
on the
input TokenStream.
NOTE: Be sure to call super.end()
first when overriding this method.
TokenStream.incrementToken()
returned false
(using the new TokenStream
API).
AlreadyClosedException
if this IndexWriter has been
closed.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
ValueSourceQuery.equals(Object)
.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
o
is equal to this.
\
.
RamUsageEstimator.sizeOf(Object)
method.
TwoPhaseCommit.prepareCommit()
all objects and only if all succeed,
it proceeds with TwoPhaseCommit.commit()
.
doc
scored against
query
.
doc
scored against
weight
.
doc
scored against
weight
.
doc
scored against
query
.
Scorer
,
but it is needed by SpanWeight to build an explanation.
instead
instead
Field
.FieldCache
).FieldCache
.Filter
that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.TopFieldCollector
.FieldCache.getBytes(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getDoubles(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getFloats(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getInts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getLongs(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldCache.getShorts(org.apache.lucene.index.IndexReader, java.lang.String)
and sorts by ascending valueFieldComparator
for custom field sorting.FieldInfo
s (accessible by number or by name).SpanQuery
objects participate in composite
single-field SpanQueries by 'lying' about their search field.null
as its
detail message.
FileFilter
, the FieldSelector allows one to make decisions about
what Fields get loaded on a Document
by IndexReader.document(int,org.apache.lucene.document.FieldSelector)
TermVectorEntry
s
This is not thread-safe.Filter
that accepts all documents that have one or more values in a
given field.FieldValueFilter
FieldValueFilter
CharacterUtils.CharacterBuffer
with characters read from the given
reader Reader
.
BytesRef
.
PagedBytes
starting at start with a
given length.
PagedBytes
starting at start, the
length is read as 1 or 2 byte vInt prefix.
FilterIndexReader
contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.TermDocs
implementations.TermEnum
implementations.TermPositions
implementations.CachingWrapperFilter
if you wish to cache
Filter
s.MergePolicy.MergeSpecification
if so.
ChecksumIndexOutput.prepareCommit()
FixedBitSet.getBits()
) long[], accessed with an int index,
implementing Bits and DocIdSet.CheckIndex.checkIndex()
.
Tokenizer
chain,
eg from one TokenFilter to another one.FieldCache
using getFloats()
and makes those values
available as other numeric types, casting as needed.float
value to a sortable signed int
.
numBytes
.
IndexWriter.forceMerge(int)
, except you can
specify whether the call should block until
all merging completes.
IndexWriter.forceMergeDeletes()
, except you can
specify whether the call should block until the
operation completes.
minimumSimilarity
to term
.
FuzzyQuery(term, minimumSimilarity, prefixLength, Integer.MAX_VALUE)
.
FuzzyQuery(term, minimumSimilarity, 0, Integer.MAX_VALUE)
.
FuzzyQuery(term, 0.5f, 0, Integer.MAX_VALUE)
.
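A hedged usage sketch of the constructor shorthands listed above; the field name, term text and similarity value are made up for illustration.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;

    public class FuzzyQueryExample {
      public static void main(String[] args) {
        Term term = new Term("title", "lucene");
        // Two-argument form; documented above as equivalent to passing
        // prefixLength = 0 and Integer.MAX_VALUE explicitly.
        FuzzyQuery shortForm = new FuzzyQuery(term, 0.6f);
        FuzzyQuery longForm  = new FuzzyQuery(term, 0.6f, 0, Integer.MAX_VALUE);
        System.out.println(shortForm + " / " + longForm);
      }
    }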
reader
which share a prefix of
length prefixLength
with term
and which have a fuzzy similarity >
minSimilarity
.
len
chars of text
starting at off
CharSequence
true
if bit
is one and
false
if it is zero.
BytesRef
with the bytes for the given ord.
SetOnce.set(Object)
.
bit
to true, and
returns true if bit was already set
Float.NaN
if this
DocValues instance does not contain any value.
null
for numeric fields
Document.setBoost(float)
.
field
as a single byte and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as bytes and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as bytes and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexReader.getIndexCommit()
on an open IndexReader, and then call
IndexCommit.getUserData()
.
IndexReader.getIndexCommit()
and then call
IndexCommit.getUserData()
.
FieldComparator
to use for
sorting.
BytesRef
Comparator used to sort terms provided by the
iterator.
FieldComparatorSource
used for
custom sorting
IndexWriterConfig
, cloned
from the IndexWriterConfig
passed to
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
.
IndexReader.getVersion()
on an opened IndexReader.
CustomScoreProvider
that calculates the custom scores
for the given IndexReader
.
null
if not yet set.
IndexableBinaryStringTools.getDecodedLength(char[], int, int)
instead. This
method will be removed in Lucene 4.0
InputStream
in a reader using a CharsetDecoder
.
File
using a CharsetDecoder
.
CharsetDecoder
.
IndexWriterConfig.getDefaultWriteLockTimeout()
instead
Directory
for the index.
Directory
of the index that hit
the exception.
PayloadProcessorProvider.getReaderProcessor(org.apache.lucene.index.IndexReader)
instead. You can still select by Directory
,
if you retrieve the underlying directory from IndexReader.directory()
.
DocIdSet
enumerating the documents that should be
permitted in search results.
field
and returns a bit set at the size of
reader.maxDoc()
, with turned on bits for each docid that
does have a value for this field.
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as doubles and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as doubles and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
StopFilter.StopFilter(Version, TokenStream, Set)
instead
IndexableBinaryStringTools.getEncodedLength(byte[], int, int)
instead. This
method will be removed in Lucene 4.0
Document.getFieldable(java.lang.String)
instead and cast depending on
data type.
Fieldable
name.
Fieldable
s with the given name.
FieldInfos
describing all fields in
this reader.
QueryParser.getFieldQuery(String,String,boolean)
instead.
QueryParser.getFieldQuery(String,String,boolean)
.
Document.getFieldable(java.lang.String)
instead and cast depending on
data type.
FSDirectory.getDirectory()
instead.
null
if a query is wrapped.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as floats and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
Counter
TimeLimitingCollector.TimerThread
.
baseClass
in which this method is overridden/implemented
in the inheritance path between baseClass
and the given subclass subclazz
.
IndexCommit
as specified in
IndexWriterConfig.setIndexCommit(IndexCommit)
or the default, null
which specifies to open the latest index commit point.
IndexDeletionPolicy
specified in
IndexWriterConfig.setIndexDeletionPolicy(IndexDeletionPolicy)
or the default
KeepOnlyLastCommitDeletionPolicy
IndexReader
this searches.
FieldCache.setInfoStream(PrintStream)
CharacterUtils
implementation according to the given
Version
instance.
field
as integers and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as integers and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as integers and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
CharacterUtils.CharacterBuffer.getOffset()
field
as longs and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as longs and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as longs and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriterConfig.getMaxBufferedDeleteTerms()
instead
IndexWriterConfig.getMaxBufferedDocs()
instead.
LimitTokenCountAnalyzer
to limit number of tokens.
ConcurrentMergeScheduler.setMaxMergeCount(int)
.
LogMergePolicy.getMaxMergeDocs()
directly.
LogByteSizeMergePolicy.getMaxMergeMBForForcedMerge()
Float.NaN
if this
DocValues instance does not contain any value.
IndexWriterConfig.getMergedSegmentWarmer()
instead.
LogMergePolicy.getMergeFactor()
directly.
IndexWriterConfig.getMergePolicy()
instead
IndexWriterConfig.getMergeScheduler()
instead
MergeScheduler
that was set by
IndexWriterConfig.setMergeScheduler(MergeScheduler)
MergePolicy
to avoid
selecting merges for segments already being merged.
Float.NaN
if this
DocValues instance does not contain any value.
MergeScheduler
calls this method
to retrieve the next merge requested by the
MergePolicy
Number
, null
if not yet initialized.
positionIncrement == 0
.
Analyzer.getPositionIncrementGap(java.lang.String)
, except for
Token offsets instead.
PositionBasedTermVectorMapper.TVPositionInfo.getTerms()
) of TermVectorOffsetInfo objects.
AbstractField.getIndexOptions()
instead.
IndexWriterConfig.OpenMode
set by IndexWriterConfig.setOpenMode(OpenMode)
.
null
for T
is String
}
FieldCache
parser that fits to the given sort type.
PayloadProcessorProvider
that is used during segment
merges to process payloads.
QueryParser.getWildcardQuery(java.lang.String, java.lang.String)
).
PayloadProcessorProvider.ReaderPayloadProcessor
for the given term.
null
if a filter is wrapped.
IndexWriterConfig.getRAMBufferSizeMB()
instead.
IndexWriterConfig.setRAMBufferSizeMB(double)
if enabled.
DataInput.readBytes(byte[], int, int)
.
IndexReader.open(IndexWriter,boolean)
instead.
IndexReader.open(IndexWriter,boolean)
instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int)
. You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int)
and use
IndexWriter.getReader()
.
IndexWriter.getReader()
has not been called.
PayloadProcessorProvider.ReaderPayloadProcessor
for the given Directory
,
through which PayloadProcessorProvider.ReaderPayloadProcessor
s can be obtained for each
Term
, or null
if none should be used.
IndexWriterConfig.getReaderTermsIndexDivisor()
instead.
TieredMergePolicy.setReclaimDeletesWeight(double)
.
Searchable
s this searches.
segments_N
) associated
with this commit point.
PriorityQueue.initialize(int)
to fill the queue, so
that the code which uses that queue can always assume it's full and only
change the top without attempting to insert any new object.PriorityQueue.lessThan(T, T)
should always favor the
non-sentinel values).field
as shorts and returns an array
of size reader.maxDoc()
of the value each document
has in the given field.
field
as shorts and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
field
as shorts and returns an array of
size reader.maxDoc()
of the value each document has in the
given field.
IndexWriterConfig.getSimilarity()
instead
Similarity
implementation used by this
IndexWriter.
SnapshotDeletionPolicy.SnapshotDeletionPolicy(IndexDeletionPolicy, Map)
in order to
initialize snapshots at construction.
field
and returns
an array of them in natural order, along with an array telling
which element in the term array each document uses.
field
and returns an array
of size reader.maxDoc()
containing the value each document
has in the given field.
TermFreqVector
.
IndexWriterConfig.getTermIndexInterval()
MultiTermQuery
s FilteredTermEnum
IndexWriter.commit(Map)
TokenStream
LogMergePolicy.getUseCompoundFile()
IndexWriter.commit(Map)
for this commit.
true
, if the unmap workaround is enabled.
IndexCommit.getGeneration()
instead
IndexWriterConfig.getWriteLockTimeout()
BytesRefHash.BytesStartArray
PackedInts.Mutable
, but grows the
bit count of the underlying packed ints on-demand.UAX29URLEmailTokenizer.TOKEN_TYPES
instead
ValueSourceQuery.hashCode()
.
o
is equal to this.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
size
in human-readable units (GB, MB, KB or bytes).
size
in human-readable units (GB, MB, KB or bytes).
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
log(numDocs/(docFreq+1)) + 1
.
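The quoted formula, log(numDocs / (docFreq + 1)) + 1, written out as a small Java helper for illustration only (the class name is hypothetical).

    public class IdfSketch {
      public static float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
      }
    }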
Similarity.idfExplain(Term,Searcher,int)
by passing
searcher.docFreq(term)
as the docFreq.
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the lower endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
true
if the upper endpoint is inclusive
IndexWriter
) use this method to advance the stream to
the next token.
.f
+ a number and
from .s
+ a number.
IndexDeletionPolicy
or IndexReader
.index commits
.indexOf(int)
but searches for a number of terms
at the same time.
true
if an index exists at the specified directory.
matchesExtension
), as well as generating file names from a segment name,
generation and extension (
fileNameFromGeneration
,
segmentFileName
).Directory
.IndexInput.toString()
.
getTerms
at which the term with the specified
term
appears.
IndexSearcher.IndexSearcher(IndexReader)
instead.
IndexSearcher.IndexSearcher(IndexReader)
instead.
IndexWriter
using the given
matchVersion
.
IndexWriter
using the given
matchVersion
.
IndexWriter
using the given
config.
IndexWriter
creates and maintains an index.IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
IndexWriter.IndexWriter(Directory, IndexWriterConfig)
instead
conf
.
IndexWriter.getReader()
has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.LimitTokenCountAnalyzer
instead.IndexWriter
.Version
as well as the default Analyzer
.
IndexWriter
:
IndexWriterConfig.OpenMode.CREATE
- creates a new index or overwrites an existing one.DataInput
wrapping a plain InputStream
.Comparator
.
Comparator
.
List
using the Comparator
.
List
in natural order.
NoMergeScheduler
Lock
is stateless.
FieldCache
using getInts()
and makes those values
available as other numeric types, casting as needed.Outputs
implementation where each output
is a sequence of ints.shift
bits.
CachingWrapperFilter
, if this DocIdSet
should be cached without copying it into a BitSet.
Similarity.coord(int,int)
is disabled in
scoring for this query instance.
IndexFileNames.STORE_INDEX_EXTENSIONS
).
true
iff the current token is a keyword, otherwise
false
true
iff the current token is a keyword, otherwise
false
true
iff the index in the named directory is
currently locked.
ASCIIFoldingFilter
which covers a superset of Latin 1.
This class is included for use with existing
indexes and will be removed in a future release (possibly Lucene 4.0).IndexReader.getSequentialSubReaders()
instead.
baseClass
and the given subclass subclazz
.
true
if no changes have occurred since this searcher
ie.
true
if no changes have occurred since this searcher
ie.
SEPARATE_NORMS_EXTENSION + "[0-9]+"
.
RamUsageEstimator
.
IndexReader.getTermFreqVector(int,String)
.
IndexReader.getTermFreqVector(int,String)
.
CharTokenizer.isTokenChar(int)
instead. This method will be
removed in Lucene 4.0.
Character.isLetter(int)
.
Character.isWhitespace(int)
.
CharArraySet.CharArraySetIterator
depending on the version used:
if matchVersion
≥ 3.1, it returns char[]
instances in this set.
if matchVersion
is 3.0 or older, it returns newly
allocated Strings, so this method violates the Set interface.
Iterator
of contained segments in order.
DocIdSetIterator
to access the set.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
IndexDeletionPolicy
implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.CharArraySet
view on the map's keys.
KeywordAttribute
.KeywordAttribute
.
KeywordAttribute
.
IndexWriter.commit(Map)
LengthFilter.LengthFilter(boolean, TokenStream, int, int)
instead.
fieldName
matching
less than or equal to upperTerm
.
AttributeSource
.
AttributeSource.AttributeFactory
.
LetterTokenizer.LetterTokenizer(Version, Reader)
instead. This
will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader)
instead.
This will be removed in Lucene 4.0.
LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
IndexWriter.DEFAULT_MAX_FIELD_LENGTH
Lock.obtain(long)
to try
forever to obtain the lock.
Lock.obtain(long)
waits, in milliseconds,
in between attempts to acquire the lock.
write.lock
could not be acquired.write.lock
could not be released.VerifyingLockFactory
.LogMergePolicy
that measures size of a
segment as the total byte size of the segment's files.LogMergePolicy
that measures size of a
segment as the number of documents (not taking deletions
into account).MergePolicy
that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.shift
bits.
LowerCaseFilter.LowerCaseFilter(Version, TokenStream)
instead.
AttributeSource
.
AttributeSource.AttributeFactory
.
LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader)
instead. This will be
removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0.
LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
SimpleAnalyzer
.
Lock
.
StopFilter.makeStopSet(Version, String...)
instead
StopFilter.makeStopSet(Version, List)
instead
StopFilter.makeStopSet(Version, String[], boolean)
instead;
StopFilter.makeStopSet(Version, List, boolean)
instead
map
.
FieldSelector
based on a Map of field names to FieldSelectorResult
sCharFilter
that applies the mappings
contained in a NormalizeCharMap
to the character
stream, correcting the resulting changes to the
offsets.CharStream
.
Reader
.
ReferenceManager.acquire()
will
return refreshed instances.
ReferenceManager.maybeRefresh()
.
IndexWriter.getNextMerge()
.
Sort
.
IndexWriter
uses an instance
implementing this interface to execute the merges
selected by a MergePolicy
.Comparator
.
Comparator
.
List
using the Comparator
.
List
in natural order.
SorterTemplate.insertionSort(int,int)
.
ConcurrentMergeScheduler.verbose()
was
called and returned true.
Directory
implementation that uses
mmap for reading, and FSDirectory.FSIndexOutput
for writing.NativeFSLockFactory
.
fieldName
matching
greater than or equal to lowerTerm
.
Collector
which allows running a search with several
Collector
s.MultiPhraseQuery.add(Term[])
.TermPositions
for multiple Term
s as
a single TermPositions
.MultipleTermPositions
instance.
Query
that matches documents
containing a subset of terms provided by a FilteredTermEnum
enumeration.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, but the scores
are only computed as the boost.size
terms.
BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.size
terms.
MultiTermQuery
, that exposes its
functionality as a Filter
.MultiTermQuery
as a Filter.
CustomScoreQuery.toString(String)
.
ThreadFactory
implementation that accepts the name prefix
of the created threads as a constructor argument.NamedThreadFactory
instance
LockFactory
using native OS file
locks.NearSpansOrdered
, but for the unordered case.true
iff this filter is negated, otherwise false
FieldCache.getBytes(IndexReader,String)
.
FieldCache.getBytes(IndexReader,String,FieldCache.ByteParser)
.
CharacterUtils.CharacterBuffer
and allocates a char[]
of the given bufferSize.
WeakIdentityMap
based on a ConcurrentHashMap
.
FieldCache.getDoubles(IndexReader,String)
.
FieldCache.getDoubles(IndexReader,String,FieldCache.DoubleParser)
.
NumericRangeFilter
, that filters a double
range using the given precisionStep
.
NumericRangeFilter
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a double
range using the given precisionStep
.
NumericRangeQuery
, that queries a double
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getFloats(IndexReader,String)
.
FieldCache.getFloats(IndexReader,String,FieldCache.FloatParser)
.
NumericRangeFilter
, that filters a float
range using the given precisionStep
.
NumericRangeFilter
, that queries a float
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a float
range using the given precisionStep
.
NumericRangeQuery
, that queries a float
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
WeakIdentityMap
based on a non-synchronized HashMap
.
FieldCache.getInts(IndexReader,String)
.
FieldCache.getInts(IndexReader,String,FieldCache.IntParser)
.
NumericRangeFilter
, that filters an int
range using the given precisionStep
.
NumericRangeFilter
, that queries an int
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries an int
range using the given precisionStep
.
NumericRangeQuery
, that queries an int
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
FieldCache.getLongs(IndexReader,String)
.
FieldCache.getLongs(IndexReader,String,FieldCache.LongParser)
.
NumericRangeFilter
, that filters a long
range using the given precisionStep
.
NumericRangeFilter
, that queries a long
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
NumericRangeQuery
, that queries a long
range using the given precisionStep
.
NumericRangeQuery
, that queries a long
range using the default precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
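A hedged usage sketch of the NumericRangeQuery factory methods listed above; the field name, bounds and the explicit precisionStep value of 6 are made up for illustration.

    import org.apache.lucene.search.NumericRangeQuery;

    public class NumericRangeExample {
      public static void main(String[] args) {
        // Explicit precisionStep.
        NumericRangeQuery<Long> explicitStep =
            NumericRangeQuery.newLongRange("timestamp", 6, 0L, 1000L, true, true);
        // Default precisionStep (NumericUtils.PRECISION_STEP_DEFAULT, i.e. 4).
        NumericRangeQuery<Long> defaultStep =
            NumericRangeQuery.newLongRange("timestamp", 0L, 1000L, true, true);
        System.out.println(explicitStep + " / " + defaultStep);
      }
    }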
RAMFile
for storing data.
FieldCache.getShorts(IndexReader,String)
.
FieldCache.getShorts(IndexReader,String,FieldCache.ShortParser)
.
TeeSinkTokenFilter.SinkTokenStream
that receives all tokens consumed by this stream.
TeeSinkTokenFilter.SinkTokenStream
that receives all tokens consumed by this stream
that pass the supplied filter.
FieldCache.getStringIndex(org.apache.lucene.index.IndexReader, java.lang.String)
.
Thread
TopDocs
instance containing the given results.
BytesRef
in the iterator.
DocIdSetIterator.NO_MORE_DOCS
if there are no more docs in the
set.PhraseQuery
which is optimized for n-gram phrase query.FSDirectory
implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.NativeFSLockFactory
.
NoMergePolicy
which indicates the index does not use
compound files.
DocIdSetIterator.nextDoc()
, DocIdSetIterator.advance(int)
and
DocIdSetIterator.docID()
it means there are no more docs in the iterator.
IndexDeletionPolicy
which keeps all index commits around, never
deleting them.LockFactory
to disable locking entirely.MergePolicy
which never returns merges to execute (hence it's
name).MergeScheduler
which never executes any merges.Outputs
implementation; use this if
you just want to build an FSA.CharTokenizer.normalize(int)
instead. This method will be
removed in Lucene 4.0.
Character.toLowerCase(int)
.
MappingCharFilter
.RAMDirectory
around any provided delegate directory, to
be used during NRT search.NRTManager.NRTManager(TrackingIndexWriter,SearcherFactory)
,
but you can also specify whether each reopened searcher must
apply deletes.
NRTManager
.NumericUtils
instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery
.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).UAX29URLEmailTokenizer.TOKEN_TYPES
instead
NumericUtils
, e.g.
NumericUtils
, e.g.
NumericUtils.intToPrefixCoded(int)
, e.g.
NumericUtils.longToPrefixCoded(long)
, e.g.
Field
that enables indexing
of numeric values for efficient range filtering and
sorting.precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
.
precisionStep
.
NumericField
.Filter
that only accepts numeric values within
a specified range.Query
that matches numeric values within a
specified range.TokenStream
for indexing numeric values that can be used by NumericRangeQuery
or NumericRangeFilter
.precisionStep
NumericUtils.PRECISION_STEP_DEFAULT
(4).
precisionStep
.
precisionStep
using the given AttributeSource
.
precisionStep
using the given
AttributeSource.AttributeFactory
.
NumericUtils.splitIntRange(org.apache.lucene.util.NumericUtils.IntRangeBuilder, int, int, int)
.NumericUtils.splitLongRange(org.apache.lucene.util.NumericUtils.LongRangeBuilder, int, long, long)
.IndexReader.open(Directory)
instead
IndexWriter
.
IndexCommit
.
IndexReader.open(IndexCommit)
instead
IndexReader.open(Directory)
instead
IndexReader.open(Directory,int)
instead
IndexReader.open(IndexCommit)
instead
IndexReader.open(IndexCommit,int)
instead
IndexCommit
and termInfosIndexDivisor.
FSDirectory.open(File)
, but allows you to
also specify a custom LockFactory
.
DocIdSetIterator
.IndexReader.openIfChanged(IndexReader)
instead
readOnly=true
reader; else, return null.
IndexWriter
versus what the provided reader is
searching, then open and return a new read-only
IndexReader searching both committed and uncommitted
changes from the writer; else, return null (though, the
current implementation never returns null).
Fieldcache
using getStringIndex().Version.LUCENE_31
Version.LUCENE_34
Attribute
s for text analysis.CollationKeyFilter
converts each token into its binary CollationKey
using the
provided Collator
, and then encodes the CollationKey
as a String using
IndexableBinaryStringTools
, to allow it to be
stored as an index term.Document
for indexing and searching.DataOutput
wrapping a plain OutputStream
.Outputs
implementation, holding two other outputs.IndexSearcher
, instead.Searchable
which searches searchables with the default
executor service (a cached thread pool).
Searchable
which searches searchables with the specified ExecutorService.
Query
.
CheckIndex.checkIndex(List)
) was called with non-null
argument).
PayloadProcessorProvider.PayloadProcessor.processPayload(byte[], int, int)
.
SpanNearQuery
except that it factors
in the value of the payloads located at each of the positions where the
TermSpans
occurs.PayloadProcessorProvider.ReaderPayloadProcessor
to be used for a Directory
.PayloadProcessorProvider.ReaderPayloadProcessor
instead.PayloadProcessorProvider.ReaderPayloadProcessor
for a given Term
which allows
processing the payloads of different terms differently.SpanTermQuery
except that it factors
in the value of the payload located at each of the positions where the
Term
occurs.SnapshotDeletionPolicy
which adds a persistence layer so that
snapshots can be maintained across the life of an application.PersistentSnapshotDeletionPolicy
wraps another
IndexDeletionPolicy
to enable flexible snapshotting.
TokenStream
, used in phrase
searching.PositionLengthAttribute
.Outputs
implementation where each output
is a non-negative long value.Collector
implementation which wraps another
Collector
and makes sure only documents with
scores > 0 are collected.NumericField
, NumericTokenStream
,
NumericRangeQuery
, and NumericRangeFilter
as default
prefix
.
TwoPhaseCommit.commit()
, but takes additional commit data to be included
with the commit.
PayloadFunction
to score the payloads, but
can be overridden to do other things.
SearcherLifetimeManager.Pruner
to prune entries.
1/sqrt(sumOfSquaredWeights)
.
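The quoted normalization value, 1 / sqrt(sumOfSquaredWeights), written out as a small Java helper for illustration only (the class name is hypothetical).

    public class QueryNormSketch {
      public static float queryNorm(float sumOfSquaredWeights) {
        return (float) (1.0 / Math.sqrt(sumOfSquaredWeights));
      }
    }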
query
.
Comparator
.
Comparator
.
List
using the Comparator
.
List
in natural order.
Directory
implementation.Directory
.
RAMDirectory
instance from a different
Directory
implementation.
IndexInput
implementation.IndexOutput
implementation.RamUsageEstimator.sizeOf(Object)
method that has no intern checking, too.
RamUsageEstimator.sizeOf(Object)
method.
asList().subList(first, last)
instead.
term
, including deleted documents (which
are normally filtered out).
SegmentInfos.getVersion()
.
IndexReader
s.null
for numeric fields
follow
arc and read the first arc of its target;
this changes the provided arc
(2nd arg) in-place and returns
it.
follow
arc and reads the last
arc of its target; this changes the provided
arc
(2nd arg) in-place and returns it.
Directory
.
ByteBlockPool.Allocator
implementation that recycles unused byte
blocks in a buffer and reuses them in subsequent calls to
RecyclingByteBlockAllocator.getByteBlock()
.RecyclingByteBlockAllocator
RecyclingByteBlockAllocator
with a
DummyConcurrentLock
instance.
RecyclingByteBlockAllocator
with a block size of
ByteBlockPool.BYTE_BLOCK_SIZE
, upper buffered docs limit of
RecyclingByteBlockAllocator.DEFAULT_BUFFERED_BLOCKS
(64) and a
DummyConcurrentLock
instance.
AttributeImpl
/AttributeSource
passing the class name of the Attribute
, a key and the actual value.
AttributeImpl.reflectWith(AttributeReflector)
method:
iff prependAttClass=true
: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false
: "key=value,key=value"
AttributeSource.reflectWith(AttributeReflector)
method:
iff prependAttClass=true
: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false
: "key=value,key=value"
AttributeReflector
.
AttributeReflector
.
Token.clear()
,
CharTermAttributeImpl.copyBuffer(char[], int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
,
Token.setType(java.lang.String)
Token.clear()
,
CharTermAttributeImpl.copyBuffer(char[], int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
Token.clear()
,
CharTermAttributeImpl.append(CharSequence)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear()
,
CharTermAttributeImpl.append(CharSequence, int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
Token.clear()
,
CharTermAttributeImpl.append(CharSequence)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
Token.clear()
,
CharTermAttributeImpl.append(CharSequence, int, int)
,
Token.setStartOffset(int)
,
Token.setEndOffset(int)
Token.setType(java.lang.String)
on Token.DEFAULT_TYPE
BytesRefHash
after a previous BytesRefHash.clear()
call.
ReferenceManager.acquire()
.
SearcherLifetimeManager.acquire(long)
.
IndexReader.ReaderClosedListener
.
NRTManager.addWaitingListener(org.apache.lucene.search.NRTManager.WaitingListener)
.
IndexReader.openIfChanged(IndexReader)
instead
IndexReader.openIfChanged(IndexReader)
instead
IndexReader.openIfChanged(IndexReader,IndexCommit)
instead
IndexReader.openIfChanged(IndexReader,IndexWriter,boolean)
instead
TeeSinkTokenFilter.SinkTokenStream.reset()
.
TokenStream
reuse.ReusableAnalyzerBase.TokenStreamComponents
instance.
ReusableAnalyzerBase.TokenStreamComponents
instance.
ReusableAnalyzerBase.createComponents(String, Reader)
to obtain an
instance of ReusableAnalyzerBase.TokenStreamComponents
.
FieldCache
using getStringIndex()
and reverses the order.MultiTermQuery.getEnum(IndexReader)
.
IndexWriter
without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).
Lock.With.doBody()
while lock is obtained.
Scorer
which wraps another scorer and caches the score of the
current document.TopDocs
.Scorer.Scorer(Weight)
instead.
Scorer.Scorer(Weight)
instead.
Scorer
which scores documents in/out-of order according
to scoreDocsInOrder
.
BooleanClause.Occur.SHOULD
clause in a
BooleanQuery, and keeps the scores as computed by the
query.
BooleanClause.Occur.SHOULD
clause in a
BooleanQuery, and keeps the scores as computed by the
query.
BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.
n
hits for query
.
n
hits for query
, applying filter
if non-null.
IndexSearcher.search(Weight, Filter, int, Sort)
, but you choose
whether or not the fields in the returned FieldDoc
instances should
be set by specifying fillFields.
Searchable
in its own thread and waits for each search to complete, then merges
the results back together.
n
hits for query
, applying filter
if non-null.
n
hits for query
.
n
hits for query
where all results are after a previous
result (after
).
n
hits for query
, applying filter
if non-null,
where all results are after a previous result (after
).
SearcherManager
and NRTManager
to
create new IndexSearchers.SearcherLifetimeManager.prune(org.apache.lucene.search.SearcherLifetimeManager.Pruner)
.IndexSearcher
instances across multiple
threads, while periodically reopening.IndexWriter
.
Directory
.
TermEnum
.
CheckIndex.Status.SegmentInfoStatus
instances, detailing status of each segment.
MergeScheduler
that simply does each merge
sequentially, using the current thread.bit
to one.
true
to allow leading wildcard characters.
Field
names to load and the Set of Field
names to load lazily.
TimeLimitingCollector.setBaseline(long)
using Counter.get()
on the clock passed to the constructor.
b
.
IndexSearcher.search(Query,Filter,int,Sort)
).
IndexWriterConfig.setDefaultWriteLockTimeout(long)
instead
double
value.
double
value.
true
, this TokenFilter will preserve
positions of the incoming tokens (ie, accumulate and
set position increments of the removed tokens).
true
to enable position increments in result query.
float
value.
float
value.
IndexDeletionPolicy
implementation to be
specified.
IndexWriter
to use by this merge policy.
FieldCacheSanityChecker
.
int
value.
int
value.
true
.
true
.
long
value.
long
value.
IndexWriterConfig.setMaxBufferedDeleteTerms(int)
instead.
IndexWriterConfig.setMaxBufferedDocs(int)
instead.
LimitTokenCountAnalyzer
instead. Note that the
behavior slightly changed - the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields though.
LogMergePolicy.setMaxMergeDocs(int)
directly.
LogByteSizeMergePolicy.setMaxMergeMBForForcedMerge(double)
IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
instead.
LogMergePolicy.setMergeFactor(int)
directly.
IndexWriterConfig.setMergePolicy(MergePolicy)
instead.
MergePolicy
is invoked whenever there are changes to the
segments in the index.
IndexWriterConfig.setMergeScheduler(MergeScheduler)
instead
MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT
when creating a PrefixQuery, WildcardQuery or RangeQuery.
AbstractField.setIndexOptions(FieldInfo.IndexOptions)
instead.
SetOnce.set(Object)
.
SetOnce.set(Object)
is called more than once.IndexWriterConfig.OpenMode
of the index.
PayloadProcessorProvider
to use when merging payloads.
i
as pivot value
PagedBytes.PagedBytesDataInput.getPosition()
.
IndexWriterConfig.setRAMBufferSizeMB(double)
instead.
DataInput.readBytes(byte[], int, int)
.
IndexWriter.getReader()
.
IndexWriterConfig.setReaderTermsIndexDivisor(int)
instead.
IndexWriter.getReader()
.
Collector.collect(int)
.
IndexWriterConfig.setSimilarity(Similarity)
instead
Similarity
implementation used by this IndexWriter.
IndexWriterConfig.setTermIndexInterval(int)
LogMergePolicy.setUseCompoundFile(boolean)
.
IndexInput
, that is
mentioned in the bug report.
IndexWriterConfig.setWriteLockTimeout(long)
instead
TopDocs.merge(org.apache.lucene.search.Sort, int, org.apache.lucene.search.TopDocs[])
FieldCache
using getShorts()
and makes those values
available as other numeric types, casting as needed.Similarity
or DefaultSimilarity
instead.Similarity
that delegates all methods to another.
Analyzer
that filters LetterTokenizer
with LowerCaseFilter
SimpleAnalyzer
SimpleAnalyzer.SimpleAnalyzer(Version)
instead
FSDirectory
using java.io.RandomAccessFile.NativeFSLockFactory
.
LockFactory
using File.createNewFile()
.LockFactory
for a single in-process instance,
meaning all locking will take place through this one instance.SingleTermEnum
.
BytesRef
values in this BytesRefHash
.
includeDocStores
is true), or the size of all files except the store
files otherwise.
TermDocs.skipTo(int)
.
1 / (distance + 1)
.
IndexDeletionPolicy
that wraps around any other
IndexDeletionPolicy
and adds the ability to hold and later release
snapshots of an index.SnapshotDeletionPolicy
wraps another IndexDeletionPolicy
to
enable flexible snapshotting.
int
back to a float
.
long
back to a double
.
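A sketch of the inverse of the bit trick shown earlier for the double-to-sortable-long direction (the library method is NumericUtils.sortableLongToDouble); the class name is hypothetical and this is illustrative only.

    public class SortableLongSketch {
      public static double sortableLongToDouble(long val) {
        if (val < 0) {
          // Undo the flip applied to negative values in the forward conversion.
          val ^= 0x7fffffffffffffffL;
        }
        return Double.longBitsToDouble(val);
      }
    }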
TermVectorEntry
s.FieldCache.Parser
.
FieldCache.Parser
.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
match
whose end
position is less than or equal to end
.
MultiTermQuery
as a SpanQuery
,
so it can be nested within other SpanQuery classes.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.size
terms.
include
which
have no overlap with spans from exclude
.
YES
, rejected NO
,
or rejected and enumeration should advance to the next document NO_AND_ADVANCE
.SpanPositionCheckQuery.getMatch()
lies between a start and end positionquery
.
IndexReader
tries to make changes to the index (via IndexReader.deleteDocument(int)
, IndexReader.undeleteAll()
or IndexReader.setNorm(int, java.lang.String, byte)
)
but changes have already been committed to the index
since this reader was instantiated.StandardTokenizer
with StandardFilter
, LowerCaseFilter
and StopFilter
, using a list of
English stop words.StandardAnalyzer.STOP_WORDS_SET
).
StandardAnalyzer.StandardAnalyzer(Version, Reader)
instead.
StandardTokenizer
.StandardFilter.StandardFilter(Version, TokenStream)
instead.
StandardTokenizer
.
AttributeSource
.
AttributeSource.AttributeFactory
LetterTokenizer
with LowerCaseFilter
and StopFilter
.StopAnalyzer.ENGLISH_STOP_WORDS_SET
.
StopFilter.StopFilter(Version, TokenStream, Set)
instead
StopFilter.StopFilter(Version, TokenStream, Set)
instead
StopFilter.StopFilter(Version, TokenStream, Set)
instead
NumberTools.longToString(long)
CharArraySet.iterator()
, which returns char[]
instances.
timeToString
or
dateToString
back to a time, represented as a
Date object.
NumberTools.longToString(long)
back to a
long.
timeToString
or
dateToString
back to a time, represented as the
number of milliseconds since January 1, 1970, 00:00:00 GMT.
n
within its
sub-index.
n
in the
array used to construct this searcher/reader.
n
in the array
used to construct this searcher.
i
and j
in your data
Directory.sync(Collection)
instead.
For easy migration you can change your code to call
sync(Collections.singleton(name))
AttributeSource
states to store in the sink.CharTermAttribute
instead.term
.
TermDocs
enumerator.
term
.
TermPositions
enumerator.
TermFreqVector
to provide additional information about
positions in which each of the terms is found.t
.
collator
parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined.
lowerTerm
but less/equal than upperTerm
.
lowerTerm
but less/equal than upperTerm
.
lowerTerm
but less/equal than upperTerm
.
TermVectorEntry
s first by frequency and then by
the term (case-sensitive)IndexReader.getTermFreqVector(int,String)
.TermPositionVector
's
offset information.sqrt(freq)
.
TimeLimitingCollector
is used to timeout search requests that
take longer than the maximum allowed search time limit.Collector
with a specified timeout.
FST
to a GraphViz's dot
language description
for visualization.
Token
as instance for the basic attributes
and for all other attributes calls the given delegate factory.Token
as instance for the basic attributes
and for all other attributes calls the given delegate factory.
Token
as implementation for the basic
attributes and return the default impl (with "Impl" appended) for all other
attributes.
Tokenizer.Tokenizer(Reader)
instead.
Tokenizer.Tokenizer(AttributeSource.AttributeFactory, Reader)
instead.
Tokenizer.Tokenizer(AttributeSource, Reader)
instead.
ReusableAnalyzerBase.createComponents(String, Reader)
to obtain an
instance of ReusableAnalyzerBase.TokenStreamComponents
and returns the sink of the
components.
TokenStream
enumerates the sequence of tokens, either from
Field
s of a Document
or from query text.Attribute
instances.
NumericTokenStream
for indexing the numeric value.
Searcher.search(Query,Filter,int)
and Searcher.search(Query,int)
.TopDocs
output.Collector
that sorts by SortField
using
FieldComparator
s.Searcher.search(Query,Filter,int,Sort)
.Collector
implementation that collects the top-scoring hits,
returning them as a TopDocs
.size
terms.
CharSequence
interface.
field
assumed to be the
default field and omitted.
Object.toString()
.true
iff the refCount was
successfully incremented, otherwise false
.
TwoPhaseCommitTool.execute(TwoPhaseCommit...)
when an
object fails to commit().TwoPhaseCommitTool.execute(TwoPhaseCommit...)
when an
object fails to prepareCommit().TwoPhaseCommit
, which delegates all calls to the
wrapped object, passing the specified commitData.UAX29URLEmailTokenizer
with StandardFilter
,
LowerCaseFilter
and
StopFilter
, using a list of
English stop words.UAX29URLEmailAnalyzer.STOP_WORDS_SET
).
UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader)
instead.
UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader)
instead.
UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource, Reader)
instead.
UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead.
AttributeSource
.
AttributeSource.AttributeFactory
Integer.MAX_VALUE
.
true
, if this platform supports unmapping mmapped files.
CharArrayMap
.
CharArraySet
.
term
and then adding the new
document.
term
and then adding the new
document.
MergePolicy
is used for upgrading all existing segments of
an index when calling IndexWriter.forceMerge(int)
.MergePolicy
and intercept forceMerge requests to
only upgrade segments written with previous Lucene versions.
Outputs
implementation where each output
is one or two non-negative long values.UAX29URLEmailTokenizer.TOKEN_TYPES
instead
UnicodeUtil.UTF8toUTF16(byte[], int, int, CharsRef)
Util.shortestPaths(org.apache.lucene.util.fst.FST, org.apache.lucene.util.fst.FST.Arc, java.util.Comparator, int)
.ValueSource
.LockFactory
that wraps another LockFactory
and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).baseClass
and method declaration.
Scorer
subclasses should implement this method if the subclass
itself contains multiple scorers to support gathering details for
sub-scorers via Scorer.ScorerVisitor
WeakHashMap
and
IdentityHashMap
.Weight
implementations.
Subclasses of Query
should use Query.createWeight(org.apache.lucene.search.Searcher)
, instead.
WhitespaceTokenizer
.WhitespaceAnalyzer
WhitespaceAnalyzer.WhitespaceAnalyzer(Version)
instead
AttributeSource
.
AttributeSource.AttributeFactory
.
WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader)
instead. This will
be removed in Lucene 4.0.
WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0.
WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0.
WildcardTermEnum
.
UAX29URLEmailTokenizer.TOKEN_TYPES
instead
Collector
s with a MultiCollector
.
name
in Directory
d
, in a format that can be read by the constructor BitVector.BitVector(Directory, String)
.
IndexWriterConfig.WRITE_LOCK_TIMEOUT
instead
DataOutput.writeString(java.lang.String)
DataOutput.writeString(java.lang.String)
DataOutput