
com.johnsnowlabs.nlp.annotators

Lemmatizer

class Lemmatizer extends AnnotatorApproach[LemmatizerModel]

Class to find lemmas out of words with the objective of returning a base dictionary word. Retrieves the significant part of a word. A dictionary of predefined lemmas must be provided with setDictionary. The dictionary can be set either in the form of a delimited text file or directly as an ExternalResource. Pretrained models can be loaded with LemmatizerModel.pretrained.

For available pretrained models please see the Models Hub. For extended examples of usage, see the Spark NLP Workshop and the LemmatizerTestSpec.

Example

In this example, the lemma dictionary lemmas_small.txt has the form of

...
pick	->	pick	picks	picking	picked
peck	->	peck	pecking	pecked	pecks
pickle	->	pickle	pickles	pickled	pickling
pepper	->	pepper	peppers	peppered	peppering
...

where each key is delimited by -> and values are delimited by \t

import spark.implicits._
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.Tokenizer
import com.johnsnowlabs.nlp.annotator.SentenceDetector
import com.johnsnowlabs.nlp.annotators.Lemmatizer
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDetector = new SentenceDetector()
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

val tokenizer = new Tokenizer()
  .setInputCols(Array("sentence"))
  .setOutputCol("token")

val lemmatizer = new Lemmatizer()
  .setInputCols(Array("token"))
  .setOutputCol("lemma")
  .setDictionary("src/test/resources/lemma-corpus-small/lemmas_small.txt", "->", "\t")

val pipeline = new Pipeline()
  .setStages(Array(
    documentAssembler,
    sentenceDetector,
    tokenizer,
    lemmatizer
  ))

val data = Seq("Peter Pipers employees are picking pecks of pickled peppers.")
  .toDF("text")

val result = pipeline.fit(data).transform(data)
result.selectExpr("lemma.result").show(false)
+------------------------------------------------------------------+
|result                                                            |
+------------------------------------------------------------------+
|[Peter, Pipers, employees, are, pick, peck, of, pickle, pepper, .]|
+------------------------------------------------------------------+
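
Alternatively, a pretrained LemmatizerModel can replace the dictionary-trained lemmatizer stage. The snippet below is a minimal sketch: calling pretrained() without arguments is assumed to download the default English model (check the Models Hub for available model names and languages).

import com.johnsnowlabs.nlp.annotators.LemmatizerModel

// Load a pretrained lemmatizer model (default model name and language assumed)
val pretrainedLemmatizer = LemmatizerModel.pretrained()
  .setInputCols(Array("token"))
  .setOutputCol("lemma")

// Reuse the same upstream stages, swapping in the pretrained model
val pretrainedPipeline = new Pipeline()
  .setStages(Array(documentAssembler, sentenceDetector, tokenizer, pretrainedLemmatizer))
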
See also

LemmatizerModel for the instantiated model and pretrained models.

Linear Supertypes
AnnotatorApproach[LemmatizerModel], CanBeLazy, DefaultParamsWritable, MLWritable, HasOutputAnnotatorType, HasOutputAnnotationCol, HasInputAnnotationCols, Estimator[LemmatizerModel], PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Instance Constructors

  1. new Lemmatizer()

  2. new Lemmatizer(uid: String)


    uid

    required internal uid provided by constructor

Type Members

  1. type AnnotatorType = String

    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T

    Attributes
    protected
    Definition Classes
    Params
  4. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  5. def _fit(dataset: Dataset[_], recursiveStages: Option[PipelineModel]): LemmatizerModel

    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  6. def arraysZip: UserDefinedFunction

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def beforeTraining(spark: SparkSession): Unit

    Definition Classes
    AnnotatorApproach
  9. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  10. final def clear(param: Param[_]): Lemmatizer.this.type

    Definition Classes
    Params
  11. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  12. final def copy(extra: ParamMap): Estimator[LemmatizerModel]

    Definition Classes
    AnnotatorApproach → Estimator → PipelineStage → Params
  13. def copyValues[T <: Params](to: T, extra: ParamMap): T

    Attributes
    protected
    Definition Classes
    Params
  14. final def defaultCopy[T <: Params](extra: ParamMap): T

    Attributes
    protected
    Definition Classes
    Params
  15. val description: String

    Retrieves the significant part of a word

    Definition Classes
    Lemmatizer → AnnotatorApproach
  16. val dictionary: ExternalResourceParam

    External dictionary to be used by the lemmatizer, which needs 'keyDelimiter' and 'valueDelimiter' for parsing the resource

    Example

    ...
    pick	->	pick	picks	picking	picked
    peck	->	peck	pecking	pecked	pecks
    pickle	->	pickle	pickles	pickled	pickling
    pepper	->	pepper	peppers	peppered	peppering
    ...

    where each key is delimited by -> and values are delimited by \t

  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  19. def explainParam(param: Param[_]): String

    Definition Classes
    Params
  20. def explainParams(): String

    Definition Classes
    Params
  21. final def extractParamMap(): ParamMap

    Definition Classes
    Params
  22. final def extractParamMap(extra: ParamMap): ParamMap

    Definition Classes
    Params
  23. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  24. final def fit(dataset: Dataset[_]): LemmatizerModel

    Definition Classes
    AnnotatorApproach → Estimator
  25. def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[LemmatizerModel]

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  26. def fit(dataset: Dataset[_], paramMap: ParamMap): LemmatizerModel

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  27. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): LemmatizerModel

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" ) @varargs()
  28. final def get[T](param: Param[T]): Option[T]

    Definition Classes
    Params
  29. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  30. final def getDefault[T](param: Param[T]): Option[T]

    Definition Classes
    Params
  31. def getDictionary: ExternalResource


    External dictionary to be used by the lemmatizer

  32. def getInputCols: Array[String]


    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  33. def getLazyAnnotator: Boolean

    Definition Classes
    CanBeLazy
  34. final def getOrDefault[T](param: Param[T]): T

    Definition Classes
    Params
  35. final def getOutputCol: String

    Gets the annotation column name that will be generated

    Definition Classes
    HasOutputAnnotationCol
  36. def getParam(paramName: String): Param[Any]

    Definition Classes
    Params
  37. final def hasDefault[T](param: Param[T]): Boolean

    Definition Classes
    Params
  38. def hasParam(paramName: String): Boolean

    Definition Classes
    Params
  39. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  40. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  41. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Attributes
    protected
    Definition Classes
    Logging
  42. val inputAnnotatorTypes: Array[AnnotatorType]

    Input annotator type: TOKEN

    Definition Classes
    Lemmatizer → HasInputAnnotationCols
  43. final val inputCols: StringArrayParam

    Columns that contain the annotations necessary to run this annotator. The AnnotatorType is used for both input and output columns if not specified otherwise.

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  44. final def isDefined(param: Param[_]): Boolean

    Definition Classes
    Params
  45. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  46. final def isSet(param: Param[_]): Boolean

    Definition Classes
    Params
  47. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  48. val lazyAnnotator: BooleanParam

    Definition Classes
    CanBeLazy
  49. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  50. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  51. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  52. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  53. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  54. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  55. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  56. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  57. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  58. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  59. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  60. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  61. def msgHelper(schema: StructType): String

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  62. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  63. final def notify(): Unit

    Definition Classes
    AnyRef
  64. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  65. def onTrained(model: LemmatizerModel, spark: SparkSession): Unit

    Definition Classes
    AnnotatorApproach
  66. val optionalInputAnnotatorTypes: Array[String]

    Definition Classes
    HasInputAnnotationCols
  67. val outputAnnotatorType: AnnotatorType

    Output annotator type: TOKEN

    Definition Classes
    Lemmatizer → HasOutputAnnotatorType
  68. final val outputCol: Param[String]

    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  69. lazy val params: Array[Param[_]]

    Definition Classes
    Params
  70. def save(path: String): Unit

    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  71. final def set(paramPair: ParamPair[_]): Lemmatizer.this.type

    Attributes
    protected
    Definition Classes
    Params
  72. final def set(param: String, value: Any): Lemmatizer.this.type

    Attributes
    protected
    Definition Classes
    Params
  73. final def set[T](param: Param[T], value: T): Lemmatizer.this.type

    Definition Classes
    Params
  74. final def setDefault(paramPairs: ParamPair[_]*): Lemmatizer.this.type

    Attributes
    protected
    Definition Classes
    Params
  75. final def setDefault[T](param: Param[T], value: T): Lemmatizer.this.type

    Attributes
    protected
    Definition Classes
    Params
  76. def setDictionary(path: String, keyDelimiter: String, valueDelimiter: String, readAs: Format = ReadAs.TEXT, options: Map[String, String] = Map("format" -> "text")): Lemmatizer.this.type


    External dictionary to be used by the lemmatizer, which needs keyDelimiter and valueDelimiter for parsing the resource
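
    Example

    As a minimal sketch, the delimiters of the dictionary format shown in the class example can be passed directly (the path below reuses the test resource from that example):

    val lemmatizer = new Lemmatizer()
      .setInputCols(Array("token"))
      .setOutputCol("lemma")
      .setDictionary(
        "src/test/resources/lemma-corpus-small/lemmas_small.txt",
        keyDelimiter = "->",
        valueDelimiter = "\t")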

  77. def setDictionary(value: ExternalResource): Lemmatizer.this.type

    External dictionary already in the form of ExternalResource, for which the Map member options has entries defined for "keyDelimiter" and "valueDelimiter".

    Example

    val resource = ExternalResource(
      "src/test/resources/regex-matcher/rules.txt",
      ReadAs.TEXT,
      Map("keyDelimiter" -> "->", "valueDelimiter" -> "\t")
    )
    val lemmatizer = new Lemmatizer()
      .setInputCols(Array("token"))
      .setOutputCol("lemma")
      .setDictionary(resource)
  78. final def setInputCols(value: String*): Lemmatizer.this.type

    Definition Classes
    HasInputAnnotationCols
  79. final def setInputCols(value: Array[String]): Lemmatizer.this.type

    Overrides the required annotator columns if different from the default

    Definition Classes
    HasInputAnnotationCols
  80. def setLazyAnnotator(value: Boolean): Lemmatizer.this.type

    Definition Classes
    CanBeLazy
  81. final def setOutputCol(value: String): Lemmatizer.this.type

    Overrides the annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  82. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  83. def toString(): String

    Definition Classes
    Identifiable → AnyRef → Any
  84. def train(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): LemmatizerModel

    Definition Classes
    Lemmatizer → AnnotatorApproach
  85. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation. It is called on fit().

    Definition Classes
    AnnotatorApproach → PipelineStage
  86. def transformSchema(schema: StructType, logging: Boolean): StructType

    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  87. val uid: String

    required internal uid provided by constructor

    Definition Classes
    Lemmatizer → Identifiable
  88. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  89. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  90. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  91. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  92. def write: MLWriter

    Definition Classes
    DefaultParamsWritable → MLWritable

