|
Deprecated Classes |
org.apache.lucene.search.payloads.BoostingTermQuery
See PayloadTermQuery |
org.apache.lucene.search.ConstantScoreRangeQuery
Use TermRangeQuery for term ranges or
NumericRangeQuery for numeric ranges instead.
This class will be removed in Lucene 3.0. |
org.apache.lucene.document.DateField
If you build a new index, use DateTools or
NumericField instead.
This class is included for use with existing
indices and will be removed in a future release. |
org.apache.lucene.search.FieldSortedHitQueue
see FieldValueHitQueue |
org.apache.lucene.store.FSDirectory.FSIndexInput
Use SimpleFSDirectory.SimpleFSIndexInput instead |
org.apache.lucene.store.FSDirectory.FSIndexInput.Descriptor
|
org.apache.lucene.store.FSDirectory.FSIndexOutput
Use SimpleFSDirectory.SimpleFSIndexOutput instead |
org.apache.lucene.search.Hit
Use TopScoreDocCollector and TopDocs instead. Hits will be removed in Lucene 3.0. |
org.apache.lucene.search.HitCollector
Please use Collector instead. |
org.apache.lucene.search.HitCollectorWrapper
Please migrate custom HitCollectors to the new Collector
class. This class will be removed when HitCollector is
removed. |
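A minimal sketch of what that migration typically looks like, assuming the Lucene 2.9 Collector API; the class name and the hit counter here are illustrative, not part of Lucene:

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Hypothetical migration of a hit-counting HitCollector.
public class CountingCollector extends Collector {
  private Scorer scorer;
  private int docBase;
  private int count;
  private float maxScore = Float.NEGATIVE_INFINITY;

  public void setScorer(Scorer scorer) {
    this.scorer = scorer;            // scores are now pulled on demand
  }

  public void setNextReader(IndexReader reader, int docBase) {
    this.docBase = docBase;          // doc IDs are per-segment from here on;
  }                                  // docBase + doc is the old top-level ID

  public void collect(int doc) throws IOException {
    maxScore = Math.max(maxScore, scorer.score());
    count++;
  }

  public boolean acceptsDocsOutOfOrder() {
    return true;                     // we don't rely on increasing doc order
  }

  public int getCount() { return count; }
}
```

Pass an instance to Searcher.search(Query, Collector) and read the count afterwards.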
org.apache.lucene.search.HitIterator
Use TopScoreDocCollector and TopDocs instead. Hits will be removed in Lucene 3.0. |
org.apache.lucene.search.Hits
see Searcher.search(Query, int), Searcher.search(Query, Filter, int)
and Searcher.search(Query, Filter, int, Sort):
TopDocs topDocs = searcher.search(query, numHits);
ScoreDoc[] hits = topDocs.scoreDocs;
for (int i = 0; i < hits.length; i++) {
  int docId = hits[i].doc;
  Document d = searcher.doc(docId);
  // do something with the current hit
}
|
org.apache.lucene.index.IndexModifier
Please use IndexWriter instead. |
org.apache.lucene.analysis.ISOLatin1AccentFilter
in favor of ASCIIFoldingFilter which covers a superset
of Latin 1. This class will be removed in Lucene 3.0. |
org.apache.lucene.search.function.MultiValueSource
This class is temporary, to ease the
migration to segment-based searching. Please change your
code to not pass composite readers to these APIs. |
org.apache.lucene.document.NumberTools
For new indexes use NumericUtils instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery.
This class is included for use with existing
indices and will be removed in a future release. |
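On the indexing side, a sketch of the NumericField replacement, assuming the Lucene 2.9 API; the field name and value are illustrative:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;

public class NumericIndexingSketch {
  static Document buildDoc(long price) {
    Document doc = new Document();
    // Old: doc.add(new Field("price", NumberTools.longToString(price),
    //                        Field.Store.YES, Field.Index.NOT_ANALYZED));
    // New: NumericField indexes a sortable, prefix-encoded binary form
    // that NumericRangeQuery/NumericRangeFilter can search efficiently.
    doc.add(new NumericField("price", Field.Store.YES, true)
        .setLongValue(price));
    return doc;
  }
}
```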
org.apache.lucene.search.QueryFilter
use a CachingWrapperFilter with a QueryWrapperFilter |
org.apache.lucene.search.RangeFilter
Use TermRangeFilter for term ranges or
NumericRangeFilter for numeric ranges instead.
This class will be removed in Lucene 3.0. |
org.apache.lucene.search.RangeQuery
Use TermRangeQuery for term ranges or
NumericRangeQuery for numeric ranges instead.
This class will be removed in Lucene 3.0. |
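A sketch of both replacements, assuming the Lucene 2.9 API; the field names and bounds are illustrative:

```java
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermRangeQuery;

public class RangeQuerySketch {
  static Query authorRange() {
    // Old: new RangeQuery(new Term("author", "a"), new Term("author", "m"), true)
    return new TermRangeQuery("author", "a", "m", true, true);
  }

  static Query priceRange() {
    // Only for fields indexed with NumericField, not plain-text numbers.
    return NumericRangeQuery.newLongRange("price",
        Long.valueOf(10L), Long.valueOf(100L), true, true);
  }
}
```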
org.apache.lucene.analysis.SinkTokenizer
Use TeeSinkTokenFilter instead |
org.apache.lucene.search.SortComparator
Please use FieldComparatorSource instead. |
org.apache.lucene.analysis.TeeTokenFilter
Use TeeSinkTokenFilter instead |
org.apache.lucene.search.TimeLimitedCollector
Use TimeLimitingCollector instead, which extends the new
Collector. This class will be removed in 3.0. |
org.apache.lucene.search.TopDocCollector
Please use TopScoreDocCollector
instead, which has better performance. |
org.apache.lucene.search.TopFieldDocCollector
Please use TopFieldCollector instead. |
|
Deprecated Methods |
org.apache.lucene.index.IndexWriter.abort()
Please use IndexWriter.rollback() instead. |
org.apache.lucene.queryParser.QueryParser.addClause(Vector, int, int, Query)
use QueryParser.addClause(List, int, int, Query) instead. |
org.apache.lucene.index.IndexWriter.addIndexes(Directory[])
Use IndexWriter.addIndexesNoOptimize(org.apache.lucene.store.Directory[]) instead,
then separately call IndexWriter.optimize() afterwards if
you need to. |
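A sketch of the replacement call sequence; the writer and dirs variables are assumed to exist:

```java
// Old, deprecated form did an implicit optimize():
//   writer.addIndexes(dirs);
// New form merges without optimizing; optimize separately only if needed:
writer.addIndexesNoOptimize(dirs);
writer.optimize();   // optional and usually expensive
```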
org.apache.lucene.util.PriorityQueue.adjustTop()
use PriorityQueue.updateTop() which returns the new top element and
saves an additional call to PriorityQueue.top(). |
org.apache.lucene.document.Field.binaryValue()
This method must allocate a new byte[] if
the AbstractField.getBinaryOffset() is non-zero
or AbstractField.getBinaryLength() is not the
full length of the byte[]. Please use AbstractField.getBinaryValue() instead, which simply
returns the byte[]. |
org.apache.lucene.search.MultiTermQueryWrapperFilter.bits(IndexReader)
Use MultiTermQueryWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.Filter.bits(IndexReader)
Use Filter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.QueryWrapperFilter.bits(IndexReader)
Use QueryWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.CachingSpanFilter.bits(IndexReader)
Use CachingSpanFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.search.CachingWrapperFilter.bits(IndexReader)
Use CachingWrapperFilter.getDocIdSet(IndexReader) instead. |
org.apache.lucene.index.CheckIndex.check(Directory, boolean)
Please instantiate a CheckIndex and then use CheckIndex.checkIndex() instead |
org.apache.lucene.index.CheckIndex.check(Directory, boolean, List)
Please instantiate a CheckIndex and then use CheckIndex.checkIndex(List) instead |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.doc()
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.docID() instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.doc()
use ConstantScoreQuery.ConstantScorer.docID() instead. |
org.apache.lucene.search.ScoreCachingWrappingScorer.doc()
use ScoreCachingWrappingScorer.docID() instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.doc()
use FilteredDocIdSetIterator.docID() instead. |
org.apache.lucene.search.DocIdSetIterator.doc()
use DocIdSetIterator.docID() instead. |
org.apache.lucene.search.spans.SpanScorer.doc()
use SpanScorer.docID() instead. |
org.apache.lucene.util.OpenBitSetIterator.doc()
use OpenBitSetIterator.docID() instead. |
org.apache.lucene.index.IndexWriter.docCount()
Please use IndexWriter.maxDoc() (same as this
method) or IndexWriter.numDocs() (also takes deletions
into account) instead. |
org.apache.lucene.index.FilterIndexReader.doCommit()
|
org.apache.lucene.index.IndexReader.doCommit()
Please implement IndexReader.doCommit(Map) instead. |
org.apache.lucene.index.MultiReader.doCommit()
|
org.apache.lucene.index.SegmentReader.doCommit()
|
org.apache.lucene.index.ParallelReader.doCommit()
|
org.apache.lucene.search.Scorer.explain(int)
Please use IndexSearcher.explain(org.apache.lucene.search.Weight, int)
or Weight.explain(org.apache.lucene.index.IndexReader, int) instead. |
org.apache.lucene.document.Document.fields()
use Document.getFields() instead |
org.apache.lucene.index.IndexWriter.flush()
please call IndexWriter.commit() instead |
org.apache.lucene.index.SegmentReader.get(SegmentInfo)
|
org.apache.lucene.search.BooleanQuery.getAllowDocsOutOfOrder()
this is not needed anymore, as
Weight.scoresDocsOutOfOrder() is used. |
org.apache.lucene.search.FieldCache.getAuto(IndexReader, String)
Please specify the exact type instead; in particular,
guessing does not work with the new
NumericField type. |
org.apache.lucene.search.SpanFilterResult.getBits()
Use SpanFilterResult.getDocIdSet() |
org.apache.lucene.queryParser.QueryParser.getBooleanQuery(Vector)
use QueryParser.getBooleanQuery(List) instead |
org.apache.lucene.queryParser.QueryParser.getBooleanQuery(Vector, boolean)
use QueryParser.getBooleanQuery(List, boolean) instead |
org.apache.lucene.queryParser.CharStream.getColumn()
|
org.apache.lucene.index.IndexReader.getCurrentVersion(File)
Use IndexReader.getCurrentVersion(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.getCurrentVersion(String)
Use IndexReader.getCurrentVersion(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.search.FieldCache.getCustom(IndexReader, String, SortComparator)
Please implement FieldComparatorSource directly, instead. |
org.apache.lucene.analysis.standard.StandardAnalyzer.getDefaultReplaceInvalidAcronym()
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.store.FSDirectory.getDirectory(File)
Use FSDirectory.open(File) |
org.apache.lucene.store.FSDirectory.getDirectory(File, boolean)
Use IndexWriter's create flag, instead, to
create a new index. |
org.apache.lucene.store.FSDirectory.getDirectory(File, LockFactory)
Use FSDirectory.open(File, LockFactory) |
org.apache.lucene.store.FSDirectory.getDirectory(String)
Use FSDirectory.open(File) |
org.apache.lucene.store.FSDirectory.getDirectory(String, boolean)
Use IndexWriter's create flag, instead, to
create a new index. |
org.apache.lucene.store.FSDirectory.getDirectory(String, LockFactory)
Use FSDirectory.open(File, LockFactory) |
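A sketch of the replacement, assuming the Lucene 2.9 API; the path is illustrative:

```java
import java.io.File;
import java.io.IOException;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class OpenDirectorySketch {
  static Directory open() throws IOException {
    // Old: FSDirectory.getDirectory("/path/to/index")
    // open() picks the best FSDirectory implementation for the
    // current platform (e.g. NIOFSDirectory on non-Windows JVMs).
    return FSDirectory.open(new File("/path/to/index"));
  }
}
```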
org.apache.lucene.index.IndexReader.getDisableFakeNorms()
This currently defaults to false (to remain
back-compatible), but in 3.0 it will be hardwired to
true, meaning the norms() methods will return null for
fields that had disabled norms. |
org.apache.lucene.store.FSDirectory.getDisableLocks()
Use a constructor that takes a LockFactory and
supply NoLockFactory.getNoLockFactory(). |
org.apache.lucene.search.ExtendedFieldCache.getDoubles(IndexReader, String, ExtendedFieldCache.DoubleParser)
Will be removed in 3.0; this is for binary compatibility only |
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsDefault()
Please specify this when you create the StopFilter |
org.apache.lucene.search.SortField.getFactory()
use SortField.getComparatorSource() |
org.apache.lucene.index.IndexReader.getFieldCacheKey()
|
org.apache.lucene.queryParser.CharStream.getLine()
|
org.apache.lucene.search.ExtendedFieldCache.getLongs(IndexReader, String, ExtendedFieldCache.LongParser)
Will be removed in 3.0; this is for binary compatibility only |
org.apache.lucene.index.IndexWriter.getMaxSyncPauseSeconds()
This will be removed in 3.0, when
autoCommit=true is removed from IndexWriter. |
org.apache.lucene.document.Fieldable.getOmitTf()
Renamed to AbstractField.getOmitTermFreqAndPositions() |
org.apache.lucene.document.AbstractField.getOmitTf()
Renamed to AbstractField.getOmitTermFreqAndPositions() |
org.apache.lucene.analysis.TokenStream.getOnlyUseNewAPI()
This setting will no longer be needed in Lucene 3.0 as
the old API will be removed. |
org.apache.lucene.search.MultiTermQuery.getTerm()
check subclasses for possible term access; getTerm() does not
make sense for all MultiTermQuerys and will be removed. |
org.apache.lucene.search.spans.SpanNotQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanOrQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.FieldMaskingSpanQuery.getTerms()
use FieldMaskingSpanQuery.extractTerms(Set) instead. |
org.apache.lucene.search.spans.SpanTermQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanFirstQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.spans.SpanNearQuery.getTerms()
use extractTerms instead |
org.apache.lucene.search.SortField.getUseLegacySearch()
will be removed in Lucene 3.0. |
org.apache.lucene.queryParser.QueryParser.getUseOldRangeQuery()
Please use QueryParser.getMultiTermRewriteMethod() instead. |
org.apache.lucene.search.BooleanQuery.getUseScorer14()
Use BooleanQuery.getAllowDocsOutOfOrder() instead. |
org.apache.lucene.search.Similarity.idf(Collection, Searcher)
see Similarity.idfExplain(Collection, Searcher) |
org.apache.lucene.search.Similarity.idf(Term, Searcher)
see Similarity.idfExplain(Term, Searcher) |
org.apache.lucene.index.IndexReader.indexExists(File)
Use IndexReader.indexExists(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.indexExists(String)
Use IndexReader.indexExists(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.util.PriorityQueue.insert(Object)
use PriorityQueue.insertWithOverflow(Object) instead, which
encourages object reuse. |
org.apache.lucene.index.IndexReader.isLocked(Directory)
Please use IndexWriter.isLocked(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexWriter.isLocked(String)
Use IndexWriter.isLocked(Directory) |
org.apache.lucene.index.IndexReader.isLocked(String)
Use IndexReader.isLocked(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.analysis.standard.StandardAnalyzer.isReplaceInvalidAcronym()
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.index.IndexReader.lastModified(File)
Use IndexReader.lastModified(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.lastModified(String)
Use IndexReader.lastModified(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.store.Directory.list()
For some Directory implementations (FSDirectory, and its subclasses), this method
silently filters its results to include only index
files. Please use Directory.listAll() instead, which
does no filtering. |
org.apache.lucene.analysis.KeywordTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CharTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.TokenStream.next()
The returned Token is a "full private copy" (not re-used across
calls to TokenStream.next()) but will be slower than calling
TokenStream.next(Token) or using the new TokenStream.incrementToken()
method with the new AttributeSource API. |
org.apache.lucene.analysis.CachingTokenFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ISOLatin1AccentFilter.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.standard.StandardTokenizer.next()
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.next()
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.nextDoc() instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.next()
use ConstantScoreQuery.ConstantScorer.nextDoc() instead. |
org.apache.lucene.search.ScoreCachingWrappingScorer.next()
use ScoreCachingWrappingScorer.nextDoc() instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.next()
use FilteredDocIdSetIterator.nextDoc() instead. |
org.apache.lucene.search.DocIdSetIterator.next()
use DocIdSetIterator.nextDoc() instead. This will be removed in 3.0 |
org.apache.lucene.search.spans.SpanScorer.next()
use SpanScorer.nextDoc() instead. |
org.apache.lucene.util.OpenBitSetIterator.next()
use OpenBitSetIterator.nextDoc() instead. |
org.apache.lucene.analysis.KeywordTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.CharTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.TokenStream.next(Token)
The new TokenStream.incrementToken() and AttributeSource
APIs should be used instead. |
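A sketch of the new consumption loop, assuming the 2.9 attribute API; the cast is needed because 2.9 still targets Java 1.4:

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class IncrementTokenSketch {
  static void dump() throws IOException {
    TokenStream ts = new WhitespaceTokenizer(new StringReader("hello world"));
    // Attributes replace the per-call Token object.
    TermAttribute termAtt = (TermAttribute) ts.addAttribute(TermAttribute.class);
    while (ts.incrementToken()) {        // replaces next()/next(Token)
      System.out.println(termAtt.term());
    }
    ts.close();
  }
}
```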
org.apache.lucene.analysis.CachingTokenFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.ISOLatin1AccentFilter.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.analysis.standard.StandardTokenizer.next(Token)
Will be removed in Lucene 3.0. This method is final, as it should
not be overridden. Delegates to the backwards compatibility layer. |
org.apache.lucene.index.IndexReader.open(Directory)
Use IndexReader.open(Directory, boolean) instead
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy)
Use IndexReader.open(Directory, IndexDeletionPolicy, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(File)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(File, boolean)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(IndexCommit)
Use IndexReader.open(IndexCommit, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy)
Use IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(String)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.index.IndexReader.open(String, boolean)
Use IndexReader.open(Directory, boolean) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String[], String[], Analyzer)
Use MultiFieldQueryParser.parse(Version,String[],String[],Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String[], String[], BooleanClause.Occur[], Analyzer)
Use MultiFieldQueryParser.parse(Version, String[], String[], BooleanClause.Occur[], Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser.parse(String, String[], BooleanClause.Occur[], Analyzer)
Use MultiFieldQueryParser.parse(Version, String, String[], BooleanClause.Occur[], Analyzer) instead |
org.apache.lucene.util.PriorityQueue.put(Object)
use PriorityQueue.add(Object) which returns the new top object,
saving an additional call to PriorityQueue.top(). |
org.apache.lucene.store.IndexInput.readChars(char[], int, int)
please use readString or readBytes
instead, and construct the string
from those UTF-8 bytes |
org.apache.lucene.store.Directory.renameFile(String, String)
|
org.apache.lucene.store.FSDirectory.renameFile(String, String)
|
org.apache.lucene.store.RAMDirectory.renameFile(String, String)
|
org.apache.lucene.analysis.standard.StandardAnalyzer.reusableTokenStream(String, Reader)
Use StandardAnalyzer.tokenStream(java.lang.String, java.io.Reader) instead |
org.apache.lucene.search.Scorer.score(HitCollector)
use Scorer.score(Collector) instead. |
org.apache.lucene.search.Scorer.score(HitCollector, int)
use Scorer.score(Collector, int, int) instead. |
org.apache.lucene.search.Similarity.scorePayload(String, byte[], int, int)
See Similarity.scorePayload(int, String, int, int, byte[], int, int) |
org.apache.lucene.search.Searcher.search(Query)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter, HitCollector)
use Searcher.search(Query, Filter, Collector) instead. |
org.apache.lucene.search.Searcher.search(Query, Filter, Sort)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int, Sort) instead. |
org.apache.lucene.search.Searcher.search(Query, HitCollector)
use Searcher.search(Query, Collector) instead. |
org.apache.lucene.search.Searcher.search(Query, Sort)
Hits will be removed in Lucene 3.0. Use
Searcher.search(Query, Filter, int, Sort) instead. |
org.apache.lucene.search.Searchable.search(Weight, Filter, HitCollector)
use Searchable.search(Weight, Filter, Collector) instead. |
org.apache.lucene.search.Searcher.search(Weight, Filter, HitCollector)
use Searcher.search(Weight, Filter, Collector) instead. |
org.apache.lucene.search.BooleanQuery.setAllowDocsOutOfOrder(boolean)
this is not needed anymore, as
Weight.scoresDocsOutOfOrder() is used. |
org.apache.lucene.analysis.standard.StandardAnalyzer.setDefaultReplaceInvalidAcronym(boolean)
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.index.IndexReader.setDisableFakeNorms(boolean)
This currently defaults to false (to remain
back-compatible), but in 3.0 it will be hardwired to
true, meaning the norms() methods will return null for
fields that had disabled norms. |
org.apache.lucene.store.FSDirectory.setDisableLocks(boolean)
Use FSDirectory.open(File, LockFactory) or a constructor
that takes a LockFactory and supply
NoLockFactory.getNoLockFactory(). This setting does not work
with FSDirectory.open(File); only the deprecated getDirectory
methods respect this setting. |
org.apache.lucene.analysis.StopFilter.setEnablePositionIncrementsDefault(boolean)
Please specify this when you create the StopFilter |
org.apache.lucene.index.IndexWriter.setMaxSyncPauseSeconds(double)
This will be removed in 3.0, when
autoCommit=true is removed from IndexWriter. |
org.apache.lucene.document.Fieldable.setOmitTf(boolean)
Renamed to AbstractField.setOmitTermFreqAndPositions(boolean) |
org.apache.lucene.document.AbstractField.setOmitTf(boolean)
Renamed to AbstractField.setOmitTermFreqAndPositions(boolean) |
org.apache.lucene.analysis.TokenStream.setOnlyUseNewAPI(boolean)
This setting will no longer be needed in Lucene 3.0 as the old
API will be removed. |
org.apache.lucene.analysis.Analyzer.setOverridesTokenStreamMethod(Class)
This is only present to preserve
back-compat of classes that subclass a core analyzer
and override tokenStream but not reusableTokenStream |
org.apache.lucene.analysis.standard.StandardAnalyzer.setReplaceInvalidAcronym(boolean)
This will be removed (hardwired to true) in 3.0 |
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.search.Sort.setSort(String)
Please specify the type explicitly by
first creating a SortField and then use Sort.setSort(SortField) |
org.apache.lucene.search.Sort.setSort(String[])
Please specify the type explicitly by
first creating SortFields and then use Sort.setSort(SortField[]) |
org.apache.lucene.search.Sort.setSort(String, boolean)
Please specify the type explicitly by
first creating a SortField and then use Sort.setSort(SortField) |
org.apache.lucene.index.IndexReader.setTermInfosIndexDivisor(int)
Please use IndexReader.open(Directory, IndexDeletionPolicy, boolean, int) to specify the required TermInfos index divisor instead. |
org.apache.lucene.analysis.Token.setTermText(String)
use Token.setTermBuffer(char[], int, int) or
Token.setTermBuffer(String) or
Token.setTermBuffer(String, int, int). |
org.apache.lucene.search.SortField.setUseLegacySearch(boolean)
will be removed in Lucene 3.0. |
org.apache.lucene.queryParser.QueryParser.setUseOldRangeQuery(boolean)
Please use QueryParser.setMultiTermRewriteMethod(org.apache.lucene.search.MultiTermQuery.RewriteMethod) instead. |
org.apache.lucene.search.BooleanQuery.setUseScorer14(boolean)
Use BooleanQuery.setAllowDocsOutOfOrder(boolean) instead. |
org.apache.lucene.document.Field.setValue(TokenStream)
use Field.setTokenStream(org.apache.lucene.analysis.TokenStream) |
org.apache.lucene.store.IndexInput.skipChars(int)
this method operates on strings in the old "modified UTF-8"
encoding |
org.apache.lucene.search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.skipTo(int)
use FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.FieldCacheTermsFilterDocIdSetIterator.advance(int) instead. |
org.apache.lucene.search.ConstantScoreQuery.ConstantScorer.skipTo(int)
use ConstantScoreQuery.ConstantScorer.advance(int) instead. |
org.apache.lucene.search.ScoreCachingWrappingScorer.skipTo(int)
use ScoreCachingWrappingScorer.advance(int) instead. |
org.apache.lucene.search.FilteredDocIdSetIterator.skipTo(int)
use FilteredDocIdSetIterator.advance(int) instead. |
org.apache.lucene.search.DocIdSetIterator.skipTo(int)
use DocIdSetIterator.advance(int) instead. This will be removed in 3.0 |
org.apache.lucene.search.spans.SpanScorer.skipTo(int)
use SpanScorer.advance(int) instead. |
org.apache.lucene.util.OpenBitSetIterator.skipTo(int)
use OpenBitSetIterator.advance(int) instead. |
org.apache.lucene.index.TermEnum.skipTo(Term)
This method is not performant and will be removed in Lucene 3.0.
Use IndexReader.terms(Term) to create a new TermEnum positioned at a
given term. |
org.apache.lucene.analysis.Token.termText()
This method now has a performance penalty
because the text is stored internally in a char[]. If
possible, use Token.termBuffer() and Token.termLength() directly instead. If you really need a
String, use Token.term() |
org.apache.lucene.index.IndexReader.unlock(Directory)
Please use IndexWriter.unlock(Directory) instead.
This method will be removed in the 3.0 release. |
org.apache.lucene.store.IndexOutput.writeChars(char[], int, int)
please pre-convert to UTF-8 bytes instead, or use IndexOutput.writeString(java.lang.String) |
org.apache.lucene.store.IndexOutput.writeChars(String, int, int)
please pre-convert to UTF-8 bytes
instead, or use IndexOutput.writeString(java.lang.String) |
|
Deprecated Constructors |
org.apache.lucene.store.FSDirectory.FSIndexInput.Descriptor(File, String)
|
org.apache.lucene.store.FSDirectory.FSIndexInput(File)
|
org.apache.lucene.store.FSDirectory.FSIndexInput(File, int)
|
org.apache.lucene.store.FSDirectory.FSIndexOutput(File)
|
org.apache.lucene.store.FSDirectory()
|
org.apache.lucene.index.IndexReader(Directory)
use IndexReader() |
org.apache.lucene.search.IndexSearcher(Directory)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
org.apache.lucene.search.IndexSearcher(String)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
org.apache.lucene.search.IndexSearcher(String, boolean)
Use IndexSearcher.IndexSearcher(Directory, boolean) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
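A sketch of the replacement pattern, assuming the Lucene 2.9 API; the path and analyzer choice are illustrative:

```java
import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class WriterSketch {
  static void write() throws IOException {
    Directory dir = FSDirectory.open(new File("/path/to/index"));
    IndexWriter writer = new IndexWriter(dir,
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);
    // With autoCommit gone, changes stay invisible to readers until commit():
    // ... add or delete documents here ...
    writer.commit();
    writer.close();
  }
}
```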
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, boolean, IndexDeletionPolicy)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(Directory, boolean, Analyzer, IndexDeletionPolicy)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,IndexDeletionPolicy,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(File, Analyzer, boolean, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory,
Analyzer, boolean, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(File, Analyzer, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory,
Analyzer, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(String, Analyzer)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(String, Analyzer, boolean)
This constructor will be removed in the 3.0 release.
Use IndexWriter.IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)
instead, and call IndexWriter.commit() when needed. |
org.apache.lucene.index.IndexWriter(String, Analyzer, boolean, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory, Analyzer,
boolean, MaxFieldLength) |
org.apache.lucene.index.IndexWriter(String, Analyzer, IndexWriter.MaxFieldLength)
Use IndexWriter.IndexWriter(Directory, Analyzer, MaxFieldLength) |
org.apache.lucene.index.MergePolicy.MergeException(String)
Use MergePolicy.MergeException(String,Directory) instead |
org.apache.lucene.index.MergePolicy.MergeException(Throwable)
Use MergePolicy.MergeException(Throwable,Directory) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser(String[], Analyzer)
Please use MultiFieldQueryParser.MultiFieldQueryParser(Version, String[], Analyzer) instead |
org.apache.lucene.queryParser.MultiFieldQueryParser(String[], Analyzer, Map)
Please use MultiFieldQueryParser.MultiFieldQueryParser(Version, String[], Analyzer, Map) instead |
org.apache.lucene.search.MultiTermQuery(Term)
check subclasses for possible term access; the Term does not
make sense for all MultiTermQuerys and will be removed. |
org.apache.lucene.store.NIOFSDirectory.NIOFSIndexInput(File, int)
Please use ctor taking chunkSize |
org.apache.lucene.queryParser.QueryParser(String, Analyzer)
Use QueryParser.QueryParser(Version, String, Analyzer) instead |
org.apache.lucene.store.RAMDirectory(File)
Use RAMDirectory.RAMDirectory(Directory) instead |
org.apache.lucene.store.RAMDirectory(String)
Use RAMDirectory.RAMDirectory(Directory) instead |
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File)
Please use ctor taking chunkSize |
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File, int)
Please use ctor taking chunkSize |
org.apache.lucene.search.Sort(String)
Please specify the type explicitly by
first creating a SortField and then use Sort.Sort(SortField) |
org.apache.lucene.search.Sort(String[])
Please specify the type explicitly by
first creating SortFields and then use Sort.Sort(SortField[]) |
org.apache.lucene.search.Sort(String, boolean)
Please specify the type explicitly by
first creating a SortField and then use Sort.Sort(SortField) |
org.apache.lucene.search.SortField(String)
Please specify the exact type instead. |
org.apache.lucene.search.SortField(String, boolean)
Please specify the exact type instead. |
org.apache.lucene.search.SortField(String, SortComparatorSource)
use SortField(String field, FieldComparatorSource comparator) |
org.apache.lucene.search.SortField(String, SortComparatorSource, boolean)
use SortField (String field, FieldComparatorSource comparator, boolean reverse) |
org.apache.lucene.search.SpanFilterResult(BitSet, List)
Use SpanFilterResult.SpanFilterResult(DocIdSet, List) instead |
org.apache.lucene.analysis.standard.StandardAnalyzer()
Use StandardAnalyzer.StandardAnalyzer(Version) instead. |
org.apache.lucene.analysis.standard.StandardAnalyzer(boolean)
To be removed in 3.x, when true will become the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(File)
Use StandardAnalyzer.StandardAnalyzer(Version, File)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(File, boolean)
To be removed in 3.x, when true will become the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(Reader)
Use StandardAnalyzer.StandardAnalyzer(Version, Reader)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(Reader, boolean)
To be removed in 3.x, when true will become the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(Set)
Use StandardAnalyzer.StandardAnalyzer(Version, Set)
instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(Set, boolean)
To be removed in 3.x, when true will become the only valid value |
org.apache.lucene.analysis.standard.StandardAnalyzer(String[])
Use StandardAnalyzer.StandardAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.standard.StandardAnalyzer(String[], boolean)
To be removed in 3.x, when true will become the only valid value |
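All of the StandardAnalyzer replacements follow one pattern: prepend a Version argument, which controls version-dependent behavior such as stop-word handling, and pass custom stop words as a Set rather than a String[]. A sketch, assuming Version.LUCENE_29 and a hypothetical stop-word set:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;

public class StandardAnalyzerMigration {
    public static void main(String[] args) {
        // Deprecated: new StandardAnalyzer()
        StandardAnalyzer defaultStops = new StandardAnalyzer(Version.LUCENE_29);

        // Deprecated: new StandardAnalyzer(new String[] {"the", "a"})
        Set<String> stopWords =
                new HashSet<String>(Arrays.asList("the", "a")); // hypothetical stop words
        StandardAnalyzer customStops =
                new StandardAnalyzer(Version.LUCENE_29, stopWords);
    }
}
```

The StandardTokenizer entries below follow the same pattern, with the Version argument prepended and the boolean dropped.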
org.apache.lucene.analysis.standard.StandardTokenizer(AttributeSource.AttributeFactory, Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(AttributeSource, Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, AttributeSource, Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(Reader)
Use StandardTokenizer.StandardTokenizer(Version,
Reader) instead |
org.apache.lucene.analysis.standard.StandardTokenizer(Reader, boolean)
Use StandardTokenizer.StandardTokenizer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer()
Use StopAnalyzer.StopAnalyzer(Version) instead |
org.apache.lucene.analysis.StopAnalyzer(boolean)
Use StopAnalyzer.StopAnalyzer(Version) instead |
org.apache.lucene.analysis.StopAnalyzer(File)
Use StopAnalyzer.StopAnalyzer(Version, File) instead |
org.apache.lucene.analysis.StopAnalyzer(File, boolean)
Use StopAnalyzer.StopAnalyzer(Version, File) instead |
org.apache.lucene.analysis.StopAnalyzer(Reader)
Use StopAnalyzer.StopAnalyzer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer(Reader, boolean)
Use StopAnalyzer.StopAnalyzer(Version, Reader) instead |
org.apache.lucene.analysis.StopAnalyzer(Set)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(Set, boolean)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(String[])
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopAnalyzer(String[], boolean)
Use StopAnalyzer.StopAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, String[])
Use StopFilter.StopFilter(boolean, TokenStream, Set) instead. |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, String[], boolean)
Use StopFilter.StopFilter(boolean, TokenStream, Set, boolean) instead. |
org.apache.lucene.analysis.StopFilter(TokenStream, Set)
Use StopFilter.StopFilter(boolean, TokenStream, Set) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, Set, boolean)
Use StopFilter.StopFilter(boolean, TokenStream, Set, boolean) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, String[])
Use StopFilter.StopFilter(boolean, TokenStream, String[]) instead |
org.apache.lucene.analysis.StopFilter(TokenStream, String[], boolean)
Use StopFilter.StopFilter(boolean, TokenStream, String[], boolean) instead |
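A minimal migration sketch for StopFilter and StopAnalyzer: pass the enablePositionIncrements flag explicitly and supply stop words as a Set rather than a String[]. The input text below is hypothetical:

```java
import java.io.StringReader;
import java.util.Set;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class StopFilterMigration {
    public static void main(String[] args) {
        TokenStream input =
                new WhitespaceTokenizer(new StringReader("the quick brown fox"));
        // A Set of stop words replaces the deprecated String[] variants.
        Set stopWords = StopAnalyzer.ENGLISH_STOP_WORDS_SET;
        // Deprecated: new StopFilter(input, new String[] {"the"})
        // The leading boolean controls whether position increments are
        // preserved across removed stop words (relevant for phrase queries).
        TokenStream filtered = new StopFilter(true, input, stopWords);
    }
}
```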