Normalization regex patterns; characters matching them will be removed from the token
Cleans out tokens
Regular expressions list for normalization, defaults to [^A-Za-z]
Input annotation columns currently used
Lowercase tokens, default true
Gets the annotation column name that will be generated
Whether to convert strings to lowercase while checking
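As a rough illustration (a pure-Python sketch, not the Spark NLP implementation), the combined effect of the default cleanup pattern [^A-Za-z] and the lowercase flag on a single token can be written as:

```python
import re

# Hypothetical sketch: remove every character matching each cleanup
# pattern (default [^A-Za-z]), then optionally lowercase the result.
def normalize_token(token, cleanup_patterns=("[^A-Za-z]",), lowercase=True):
    for pattern in cleanup_patterns:
        token = re.sub(pattern, "", token)  # matches are removed from the token
    return token.lower() if lowercase else token

print(normalize_token("Hello,"))  # hello
print(normalize_token("42nd"))    # nd
```

The function name and signature are illustrative only; in Spark NLP the pattern list and lowercase flag are configured on the annotator itself.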
Input Annotator Type : TOKEN
Columns that contain annotations necessary to run this annotator. The AnnotatorType is used as both input and output column name if not specified
whether to convert strings to lowercase
Output Annotator Type : TOKEN
Output Annotator Type : TOKEN
Overrides required annotators column if different than default
Overrides annotation column name when transforming
Txt file with delimited words to be transformed into something else
Delimited file with a list of custom words to be manually corrected
Whether or not to be case sensitive when matching slangs. Defaults to false
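A minimal sketch (plain Python, assuming a delimited file of word,replacement pairs; the file format and function names here are hypothetical, not the Spark NLP API) of how such a dictionary could be loaded and applied to tokens with a case-sensitivity flag:

```python
import io

# Hypothetical delimited dictionary file: each line is "slang,replacement".
slang_file = io.StringIO("gr8,great\nbtw,by the way\n")

def load_slang_dict(fh, delimiter=",", case_sensitive=False):
    # Keys are lowercased when case-insensitive matching is requested.
    mapping = {}
    for line in fh:
        word, replacement = line.rstrip("\n").split(delimiter, 1)
        mapping[word if case_sensitive else word.lower()] = replacement
    return mapping

def correct(token, mapping, case_sensitive=False):
    # Replace the token if it appears in the dictionary, else keep it.
    key = token if case_sensitive else token.lower()
    return mapping.get(key, token)

slang = load_slang_dict(slang_file)
print(correct("GR8", slang))  # great  (case-insensitive by default)
```

In Spark NLP the dictionary path and delimiter are set on the annotator; the case-sensitivity flag above mirrors the "Defaults to false" behavior described here.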
Requirement for pipeline transformation validation. It is called on fit()
required internal uid for saving annotator
Takes a Dataset and checks to see if all the required annotation types are present
The Dataset to be validated
True if all the required types are present, else false
Required input and expected output annotator types
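The validation described above amounts to a membership check: every required input annotator type must appear among the dataset's annotation columns. A plain-Python sketch (names are illustrative, not the actual Spark NLP internals):

```python
# Hypothetical sketch: this annotator requires TOKEN annotations as input.
REQUIRED_INPUT_TYPES = {"token"}

def validate(present_column_types):
    # True if all the required types are present, else false.
    return REQUIRED_INPUT_TYPES.issubset(present_column_types)

print(validate({"document", "token"}))  # True
print(validate({"document"}))           # False
```

In the real annotator this check runs against the schema of the Dataset passed to fit(), as noted above.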
Annotator that cleans out tokens. Requires stems, hence tokens. Removes all dirty characters from text following a regex pattern and transforms words based on a provided dictionary
See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/NormalizerTestSpec.scala for examples on how to use the API