Package com.johnsnowlabs.nlp.annotators

package annotators

Type Members

  1. class ChunkTokenizer extends Tokenizer

  2. class ChunkTokenizerModel extends TokenizerModel

  3. class Chunker extends AnnotatorModel[Chunker] with HasSimpleAnnotate[Chunker]

    This annotator matches a pattern of part-of-speech tags in order to return meaningful phrases from a document.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/ChunkerTestSpec.scala for reference on how to use this API.
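
    The POS-pattern idea can be sketched in plain Scala (a conceptual illustration, not the Spark NLP implementation): a pattern such as <DT>?<JJ>*<NN> is rewritten into a regex over an encoded tag sequence, and each match is mapped back to the tokens it covers.

```scala
// Conceptual sketch of POS-pattern chunking; `chunk` and its inputs are
// hypothetical names, not part of the Spark NLP API.
def chunk(tokens: Seq[(String, String)], pattern: String): Seq[String] = {
  // Encode the tag sequence, e.g. Seq(("dog","NN")) => "<NN>".
  val tagString = tokens.map { case (_, tag) => s"<$tag>" }.mkString
  // "<DT>?<JJ>*<NN>" becomes the regex "(?:<DT>)?(?:<JJ>)*(?:<NN>)".
  val regex = pattern.replace("<", "(?:<").replace(">", ">)").r
  regex.findAllMatchIn(tagString).map { m =>
    // Map the matched tag span back to the covered tokens.
    val before = tagString.take(m.start).count(_ == '<')
    val inside = m.matched.count(_ == '<')
    tokens.slice(before, before + inside).map(_._1).mkString(" ")
  }.toSeq
}
```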

  4. class DateMatcher extends AnnotatorModel[DateMatcher] with HasSimpleAnnotate[DateMatcher] with DateMatcherUtils

    Matches standard date formats into a provided format. Reads different forms of date and time expressions and converts them to the provided date format. Extracts only ONE date per sentence; use it together with a sentence detector for more matches.

    Reads the following kinds of dates:

    1978-01-28, 1984/04/02,1/02/1980, 2/28/79, The 31st of April in the year 2008, "Fri, 21 Nov 1997" , "Jan 21, ‘97" , Sun, Nov 21, jan 1st, next thursday, last wednesday, today, tomorrow, yesterday, next week, next month, next year, day after, the day before, 0600h, 06:00 hours, 6pm, 5:30 a.m., at 5, 12:59, 23:59, 1988/11/23 6pm, next week at 7.30, 5 am tomorrow

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/DateMatcherTestSpec.scala for further reference on how to use this API
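
    The normalization step can be sketched in plain Scala (a hypothetical illustration, not the DateMatcher implementation): try a list of candidate input patterns and reformat the first one that parses.

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.util.Locale
import scala.util.Try

// Hypothetical sketch: a handful of candidate input patterns; the real
// annotator recognizes many more forms, including relative dates.
val inputFormats = Seq("yyyy-MM-dd", "yyyy/MM/dd", "M/d/yyyy", "MMM d, yyyy")

def normalizeDate(text: String, outFormat: String = "yyyy/MM/dd"): Option[String] =
  inputFormats.view
    .flatMap(f => Try(LocalDate.parse(text.trim, DateTimeFormatter.ofPattern(f, Locale.ENGLISH))).toOption)
    .headOption
    .map(_.format(DateTimeFormatter.ofPattern(outFormat)))
```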

  5. trait DateMatcherUtils extends Params

  6. class DocumentNormalizer extends AnnotatorModel[DocumentNormalizer] with HasSimpleAnnotate[DocumentNormalizer]

    Annotator which normalizes raw text from tagged text, e.g. scraped web pages or XML documents, from document type columns into Sentence. Removes all dirty characters from text following one or more input regex patterns. Can remove unwanted characters according to a specific policy and can apply lowercase normalization.

    See the DocumentNormalizer test class for examples of usage.
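
    A minimal plain-Scala sketch of the idea (hypothetical, not the DocumentNormalizer API): strip tag-like patterns with regexes, collapse whitespace, and optionally lowercase.

```scala
// Hypothetical sketch of document normalization.
def normalizeDocument(text: String,
                      patterns: Seq[String] = Seq("<[^>]*>"),
                      lowercase: Boolean = true): String = {
  // Remove every occurrence of each cleanup pattern, then tidy whitespace.
  val stripped  = patterns.foldLeft(text)((t, p) => t.replaceAll(p, ""))
  val collapsed = stripped.replaceAll("\\s+", " ").trim
  if (lowercase) collapsed.toLowerCase else collapsed
}
```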

  7. class Lemmatizer extends AnnotatorApproach[LemmatizerModel]

    Class to find standardized lemmas from words. Uses a user-provided or default dictionary.

    Retrieves lemmas out of words with the objective of returning a base dictionary word, i.e. the significant part of a word.

    lemmaDict: A dictionary of predefined lemmas must be provided.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/LemmatizerTestSpec.scala for examples of how to use this API
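
    The dictionary lookup can be sketched in plain Scala (hypothetical names and made-up dictionary entries, not the Spark NLP API):

```scala
// Hypothetical sketch: lemmatization as a dictionary lookup; unknown
// tokens pass through unchanged. The dictionary entries are made up.
val lemmaDict = Map("went" -> "go", "better" -> "good", "feet" -> "foot")

def lemmatize(tokens: Seq[String]): Seq[String] =
  tokens.map(t => lemmaDict.getOrElse(t.toLowerCase, t))
```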

  8. class LemmatizerModel extends AnnotatorModel[LemmatizerModel] with HasSimpleAnnotate[LemmatizerModel]

    Class to find standardized lemmas from words. Uses a user-provided or default dictionary.

    Retrieves lemmas out of words with the objective of returning a base dictionary word, i.e. the significant part of a word.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/LemmatizerTestSpec.scala for examples of how to use this API

  9. class MultiDateMatcher extends AnnotatorModel[MultiDateMatcher] with HasSimpleAnnotate[MultiDateMatcher] with DateMatcherUtils

    Matches standard date formats into a provided format

  10. class NGramGenerator extends AnnotatorModel[NGramGenerator] with HasSimpleAnnotate[NGramGenerator]

    A feature transformer that converts the input array of strings (annotatorType TOKEN) into an array of n-grams (annotatorType CHUNK). Null values in the input array are ignored. It returns an array of n-grams where each n-gram is represented by a space-separated string of words.

    When the input is empty, an empty array is returned. When the input array length is less than n (number of elements per n-gram), no n-grams are returned.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/NGramGeneratorTestSpec.scala for reference on how to use this API.
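
    The behavior described above can be sketched in a few lines of plain Scala (a conceptual illustration, not the annotator itself):

```scala
// Sketch of n-gram generation: each n-gram is the space-joined string of
// n consecutive tokens.
def ngrams(tokens: Seq[String], n: Int): Seq[String] =
  if (tokens.length < n) Seq.empty // fewer tokens than n: no n-grams
  else tokens.sliding(n).map(_.mkString(" ")).toSeq
```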

  11. class Normalizer extends AnnotatorApproach[NormalizerModel]

    Annotator that cleans out tokens. Requires stems, hence tokens. Removes all dirty characters from text following a regex pattern and transforms words based on a provided dictionary.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/NormalizerTestSpec.scala for examples on how to use the API
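
    A plain-Scala sketch of the cleanup (hypothetical names and defaults, not the Normalizer API): apply an optional correction dictionary, delete characters matching a "dirty" pattern, and drop tokens left empty.

```scala
// Hypothetical sketch of token cleanup.
def normalizeTokens(tokens: Seq[String],
                    dirtyPattern: String = "[^A-Za-z]",
                    corrections: Map[String, String] = Map.empty): Seq[String] =
  tokens
    .map(t => corrections.getOrElse(t, t).replaceAll(dirtyPattern, ""))
    .filter(_.nonEmpty)
```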

  12. class NormalizerModel extends AnnotatorModel[NormalizerModel] with HasSimpleAnnotate[NormalizerModel]

    Annotator that cleans out tokens. Requires stems, hence tokens.

    Removes all dirty characters from text following a regex pattern and transforms words based on a provided dictionary.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/NormalizerTestSpec.scala for examples on how to use the API

  13. trait ReadablePretrainedLemmatizer extends ParamsAndFeaturesReadable[LemmatizerModel] with HasPretrained[LemmatizerModel]

  14. trait ReadablePretrainedStopWordsCleanerModel extends ParamsAndFeaturesReadable[StopWordsCleaner] with HasPretrained[StopWordsCleaner]

  15. trait ReadablePretrainedTextMatcher extends ParamsAndFeaturesReadable[TextMatcherModel] with HasPretrained[TextMatcherModel]

  16. trait ReadablePretrainedTokenizer extends ParamsAndFeaturesReadable[TokenizerModel] with HasPretrained[TokenizerModel]

  17. class RecursiveTokenizer extends AnnotatorApproach[RecursiveTokenizerModel] with ParamsAndFeaturesWritable

  18. class RecursiveTokenizerModel extends AnnotatorModel[RecursiveTokenizerModel] with HasSimpleAnnotate[RecursiveTokenizerModel] with ParamsAndFeaturesWritable

  19. class RegexMatcher extends AnnotatorApproach[RegexMatcherModel]

    Uses a reference file to match a set of regular expressions and put them inside a provided key. The file must be comma separated.

    Matches regular expressions and maps them to optionally provided values. Rules are provided from an external source file.
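
    The matching step can be sketched in plain Scala (a hypothetical illustration, not the RegexMatcher API): each rule pairs a regex with an identifier, and every match in the text is reported under that identifier.

```scala
// Hypothetical sketch of regex rule matching.
def regexMatch(text: String, rules: Seq[(String, String)]): Seq[(String, String)] =
  rules.flatMap { case (pattern, identifier) =>
    pattern.r.findAllIn(text).map(m => (identifier, m)).toSeq
  }
```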

  20. class RegexMatcherModel extends AnnotatorModel[RegexMatcherModel] with HasSimpleAnnotate[RegexMatcherModel]

    Matches regular expressions and maps them to optionally provided values. Rules are provided from an external source file.

  21. class RegexTokenizer extends AnnotatorModel[RegexTokenizer] with HasSimpleAnnotate[RegexTokenizer]

    A tokenizer that splits text by a regex pattern.

    See also

    RegexTokenizer
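
    A minimal plain-Scala sketch of the idea (hypothetical, not the RegexTokenizer API): split on a gap pattern (whitespace by default) and drop empty pieces.

```scala
// Hypothetical sketch of regex-based tokenization.
def regexTokenize(text: String, pattern: String = "\\s+"): Seq[String] =
  text.split(pattern).toSeq.filter(_.nonEmpty)
```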

  22. class Stemmer extends AnnotatorModel[Stemmer] with HasSimpleAnnotate[Stemmer]

    Hard stemming of words, cutting them down to standard word references. See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/StemmerTestSpec.scala for examples of how to use this API.
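
    A naive suffix-stripping sketch in plain Scala (hypothetical; a real stemmer typically applies ordered rewrite rules, such as the Porter algorithm):

```scala
// Hypothetical sketch: greedily strip the first common English suffix,
// keeping a minimal stem of three characters.
val suffixes = Seq("ingly", "edly", "ing", "ed", "ly", "es", "s")

def stem(word: String): String =
  suffixes
    .find(s => word.endsWith(s) && word.length - s.length >= 3)
    .map(s => word.dropRight(s.length))
    .getOrElse(word)
```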

  23. class StopWordsCleaner extends AnnotatorModel[StopWordsCleaner] with HasSimpleAnnotate[StopWordsCleaner]

    This annotator takes a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Lemmatizer, or Stemmer) and drops all the stop words from the input sequences.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/StopWordsCleanerTestSpec.scala for example of how to use this API.
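
    The behavior can be sketched in plain Scala (a conceptual illustration with a tiny made-up word list, not the annotator's default list):

```scala
// Sketch of stop-word removal: drop tokens found in a stop-word set,
// compared case-insensitively.
val stopWords = Set("a", "an", "the", "is", "of", "and")

def cleanStopWords(tokens: Seq[String]): Seq[String] =
  tokens.filterNot(t => stopWords.contains(t.toLowerCase))
```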

  24. class TextMatcher extends AnnotatorApproach[TextMatcherModel] with ParamsAndFeaturesWritable

    Annotator to match entire phrases (by token) provided in a file against a Document.

    See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/TextMatcherTestSpec.scala for reference on how to use this API
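
    Token-level phrase matching can be sketched in plain Scala (a hypothetical illustration, not the TextMatcher implementation): slide a window over the token stream and report spans equal to an entity phrase.

```scala
// Hypothetical sketch of exact phrase matching over tokens.
def matchPhrases(tokens: Seq[String], phrases: Seq[Seq[String]]): Seq[String] =
  phrases.flatMap { phrase =>
    tokens.sliding(phrase.length).filter(_ == phrase).map(_.mkString(" "))
  }
```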

  25. class TextMatcherModel extends AnnotatorModel[TextMatcherModel] with HasSimpleAnnotate[TextMatcherModel]

    Extracts entities out of provided phrases

  26. class Token2Chunk extends AnnotatorModel[Token2Chunk] with HasSimpleAnnotate[Token2Chunk]

  27. class Tokenizer extends AnnotatorApproach[TokenizerModel]

    Tokenizes raw text in document type columns into TokenizedSentence.

    This class represents a non-fitted tokenizer. Fitting it will cause the internal RuleFactory to construct the rules for tokenizing from the input configuration.

    Identifies tokens with tokenization open standards. A few rules help customize it if the defaults do not fit user needs.

    See the Tokenizer test class for examples of usage.
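
    The kind of token boundaries involved can be sketched with a single regex in plain Scala (a hypothetical illustration; the fitted model builds much richer rules, such as exceptions and infix patterns):

```scala
// Hypothetical sketch: tokens are words, digit runs, or single
// punctuation symbols.
val tokenPattern = "[A-Za-z]+|[0-9]+|[^A-Za-z0-9\\s]".r

def tokenize(sentence: String): Seq[String] =
  tokenPattern.findAllIn(sentence).toSeq
```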

  28. class TokenizerModel extends AnnotatorModel[TokenizerModel] with HasSimpleAnnotate[TokenizerModel]

    Tokenizes raw text into word pieces (tokens). Identifies tokens with tokenization open standards. A few rules help customize it if the defaults do not fit user needs.

    This class represents an already fitted Tokenizer model.

    See the Tokenizer test class for examples of usage.

Value Members

  1. object ChunkTokenizer extends DefaultParamsReadable[ChunkTokenizer] with Serializable
  2. object ChunkTokenizerModel extends ParamsAndFeaturesReadable[ChunkTokenizerModel] with Serializable
  3. object Chunker extends DefaultParamsReadable[Chunker] with Serializable
  4. object DateMatcher extends DefaultParamsReadable[DateMatcher] with Serializable
  5. object DocumentNormalizer extends DefaultParamsReadable[DocumentNormalizer] with Serializable
  6. object EnglishStemmer
  7. object Lemmatizer extends DefaultParamsReadable[Lemmatizer] with Serializable
  8. object LemmatizerModel extends ReadablePretrainedLemmatizer with Serializable
  9. object MultiDateMatcher extends DefaultParamsReadable[MultiDateMatcher] with Serializable
  10. object NGramGenerator extends ParamsAndFeaturesReadable[NGramGenerator] with Serializable
  11. object Normalizer extends DefaultParamsReadable[Normalizer] with Serializable
  12. object NormalizerModel extends ParamsAndFeaturesReadable[NormalizerModel] with Serializable
  13. object RegexMatcher extends DefaultParamsReadable[RegexMatcher] with Serializable
  14. object RegexMatcherModel extends ParamsAndFeaturesReadable[RegexMatcherModel] with Serializable
  15. object Stemmer extends DefaultParamsReadable[Stemmer] with Serializable
  16. object StopWordsCleaner extends ParamsAndFeaturesReadable[StopWordsCleaner] with ReadablePretrainedStopWordsCleanerModel with Serializable
  17. object TextMatcher extends DefaultParamsReadable[TextMatcher] with Serializable
  18. object TextMatcherModel extends ReadablePretrainedTextMatcher with Serializable
  19. object Token2Chunk extends DefaultParamsReadable[Token2Chunk] with Serializable
  20. object Tokenizer extends DefaultParamsReadable[Tokenizer] with Serializable
  21. object TokenizerModel extends ReadablePretrainedTokenizer with Serializable
  22. package btm
  23. package classifier
  24. package common
  25. package keyword
  26. package ld
  27. package ner
  28. package param
  29. package parser
  30. package pos
  31. package sbd
  32. package sda
  33. package sentence_detector_dl
  34. package seq2seq
  35. package spell
  36. package ws