com.atlassian.confluence.search.didyoumean.lucene.tokenizers
Class BodyNGramTokenizer
java.lang.Object
org.apache.lucene.analysis.TokenStream
org.apache.lucene.analysis.Tokenizer
com.atlassian.confluence.search.didyoumean.lucene.tokenizers.BodyNGramTokenizer
public class BodyNGramTokenizer
extends org.apache.lucene.analysis.Tokenizer

Adaptation of NGramTokenizer that returns all n-grams excluding the edge n-grams.
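As a rough sketch of where this fits, a tokenizer like this is normally exposed to indexing through an Analyzer. The example below targets the pre-3.0 Lucene API implied by the next() signature on this page; BodyNGramAnalyzer is a hypothetical name used for illustration, not part of the Confluence API.

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import com.atlassian.confluence.search.didyoumean.lucene.tokenizers.BodyNGramTokenizer;

    // Hypothetical analyzer: every field value handed to it is tokenized
    // into body n-grams using the default minimum and maximum sizes.
    public class BodyNGramAnalyzer extends Analyzer {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            return new BodyNGramTokenizer(reader);
        }
    }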
Fields inherited from class org.apache.lucene.analysis.Tokenizer
    input

Method Summary
    org.apache.lucene.analysis.Token next()
        Returns the next token in the stream, or null at EOS.

Methods inherited from class org.apache.lucene.analysis.Tokenizer
    close

Methods inherited from class org.apache.lucene.analysis.TokenStream
    reset

Methods inherited from class java.lang.Object
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
DEFAULT_MIN_NGRAM_SIZE
public static final int DEFAULT_MIN_NGRAM_SIZE
- See Also:
- Constant Field Values
DEFAULT_MAX_NGRAM_SIZE
public static final int DEFAULT_MAX_NGRAM_SIZE
- See Also:
- Constant Field Values
BodyNGramTokenizer
public BodyNGramTokenizer(Reader input,
int minGram,
int maxGram)
- Creates a BodyNGramTokenizer with the given minimum and maximum n-gram sizes.
- Parameters:
    input - Reader holding the input to be tokenized
    minGram - the smallest n-gram to generate
    maxGram - the largest n-gram to generate
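A minimal construction sketch; the input text and the sizes 2 and 4 are arbitrary illustrative values, not defaults documented here.

    import java.io.Reader;
    import java.io.StringReader;

    // Emit every n-gram between 2 and 4 characters long from the sample
    // input (example sizes only).
    Reader body = new StringReader("confluence");
    BodyNGramTokenizer tokenizer = new BodyNGramTokenizer(body, 2, 4);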
BodyNGramTokenizer
public BodyNGramTokenizer(Reader input)
- Creates a BodyNGramTokenizer with the default minimum and maximum n-gram sizes.
- Parameters:
    input - Reader holding the input to be tokenized
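The page does not say so explicitly, but this constructor presumably falls back to DEFAULT_MIN_NGRAM_SIZE and DEFAULT_MAX_NGRAM_SIZE, i.e. the two calls below should behave identically.

    import java.io.StringReader;

    // Assumption: the single-argument constructor applies the DEFAULT_* sizes.
    BodyNGramTokenizer defaults = new BodyNGramTokenizer(new StringReader("body text"));
    BodyNGramTokenizer explicit = new BodyNGramTokenizer(new StringReader("body text"),
            BodyNGramTokenizer.DEFAULT_MIN_NGRAM_SIZE,
            BodyNGramTokenizer.DEFAULT_MAX_NGRAM_SIZE);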
next
public final org.apache.lucene.analysis.Token next()
throws IOException
- Returns the next token in the stream, or null at EOS.
- Specified by:
    next in class org.apache.lucene.analysis.TokenStream
- Throws:
    IOException
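For illustration, a caller would typically drain the stream until next() returns null. This sketch assumes the pre-3.0 Token API, where Token.termText() returns the token's text; printAllTokens is a hypothetical helper, not part of this class.

    import java.io.IOException;
    import org.apache.lucene.analysis.Token;

    // Reads tokens until EOS (null) and closes the tokenizer afterwards;
    // close() is inherited from Tokenizer.
    static void printAllTokens(BodyNGramTokenizer tokenizer) throws IOException {
        try {
            Token token;
            while ((token = tokenizer.next()) != null) {
                System.out.println(token.termText());
            }
        } finally {
            tokenizer.close();
        }
    }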