com.johnsnowlabs.nlp.annotators.keyword.yake
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
Takes a document and annotations and produces new annotations of this annotator's annotation type.
Annotations that correspond to inputAnnotationCols, generated by previous annotators if any
Any number of annotations processed for every input annotation; not necessarily a one-to-one relationship
Calculate token scores given statistics
Refer to the YAKE paper:
 - T_Position = ln(ln(3 + Median(SentenceIndex)))
 - T_Case = max(TF(U(t)), TF(A(t))) / ln(TF(t))
 - TF_Norm = TF(t) / (MeanTF + 1 * σ)
 - T_Rel = 1 + (DL + DR) * TF(t) / MaxTF
 - T_Sentence = SF(t) / #Sentences
 - TS = (T_Pos * T_Rel) / (T_Case + ((TF_Norm + T_Sent) / T_Rel))
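The way these terms combine can be sketched as a small function. This is an illustrative Python sketch of the formulas above, not the annotator's internal Scala code; all argument names are hypothetical. Note that, as written, the T_Case term divides by ln(TF(t)) and is therefore only defined for TF(t) > 1.

```python
import math

def yake_token_score(tf, tf_upper, tf_acronym, mean_tf, std_tf, max_tf,
                     median_sentence_idx, dl, dr, sf, n_sentences):
    """Combine per-token statistics into the final YAKE token score TS,
    following the formulas above. Requires tf > 1 (ln(tf) must be nonzero)."""
    t_pos = math.log(math.log(3 + median_sentence_idx))
    t_case = max(tf_upper, tf_acronym) / math.log(tf)
    tf_norm = tf / (mean_tf + std_tf)
    t_rel = 1 + (dl + dr) * tf / max_tf
    t_sent = sf / n_sentences
    return (t_pos * t_rel) / (t_case + (tf_norm + t_sent) / t_rel)
```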
Basic stats
Left co-occurrence
Right co-occurrence
Requirement for annotators copies
Wraps annotate to happen inside SparkSQL user-defined functions in order to act with org.apache.spark.sql.Column
UDF to be applied to inputCols, using this annotator's annotate function as part of the ML transformation
Override for additional custom schema checks
Calculates basic statistics, like the total number of sentences in the document, and assigns a tag to each token
Document to annotate as array of tokens with sentence metadata
DataFrame with columns SentenceID, token, totalSentences, tag
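The per-token tag in this step is a shape tag derived from the token's surface form. A minimal Python sketch of such a tagging rule is shown below; the tag names are illustrative, not necessarily those used internally by the annotator:

```python
def tag_token(token: str, position_in_sentence: int) -> str:
    """Assign a hypothetical shape tag to a token: all-digit tokens,
    acronyms (all upper-case, length > 1), capitalized tokens that are
    not sentence-initial, and plain tokens."""
    if token.isdigit():
        return "digit"
    if token.isupper() and len(token) > 1:
        return "acronym"
    if token[0].isupper() and position_in_sentence > 0:
        return "uppercase"
    return "plain"
```

A sentence-initial capitalized word is deliberately treated as plain, since its capitalization carries no casing signal.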
Generate candidate keywords
sentences as a list
candidate keywords
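Candidate generation can be pictured as sliding n-gram windows over each sentence, bounded by the minimum and maximum n-gram sizes. The Python sketch below is a simplification (it discards any candidate containing a stop word, while the actual rules may be more nuanced); all names are illustrative:

```python
def candidate_keywords(sentence, min_ngrams, max_ngrams, stop_words):
    """Enumerate n-gram candidates of size min_ngrams..max_ngrams over a
    tokenized sentence, skipping candidates that contain a stop word."""
    out = []
    for n in range(min_ngrams, max_ngrams + 1):
        for i in range(len(sentence) - n + 1):
            ngram = sentence[i:i + n]
            if not any(w.lower() in stop_words for w in ngram):
                out.append(" ".join(ngram))
    return out
```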
Calculate co-occurrence from left to right given a window size
DataFrame with tokens
Co-occurrence for token x from left to right as a Map
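A minimal sketch of this counting step in Python, assuming a flat token sequence and a fixed window size (names are hypothetical, and the real implementation operates on a DataFrame):

```python
from collections import defaultdict

def left_cooccurrence(tokens, window_size):
    """For each token, count how often every other token appears within
    `window_size` positions to its left, as a nested map
    token -> (left neighbor -> count)."""
    cooc = defaultdict(lambda: defaultdict(int))
    for i, tok in enumerate(tokens):
        for j in range(max(0, i - window_size), i):
            cooc[tok][tokens[j]] += 1
    return cooc
```

Counting in the other direction (right co-occurrence) is symmetric, scanning the window to the right of each token instead.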
Input annotation columns currently used
Extract keywords
candidate keywords
tokens with scores
keywords
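Turning per-token scores into a keyword score follows the aggregation formula from the YAKE paper, S(kw) = Π S(t) / (TF(kw) · (1 + Σ S(t))), where the product and sum run over the tokens of the candidate and TF(kw) is the candidate's frequency. A hedged Python sketch:

```python
from math import prod

def keyword_score(token_scores, kw_tf):
    """Aggregate per-token YAKE scores into a keyword score:
    S(kw) = prod(S(t)) / (TF(kw) * (1 + sum(S(t)))).
    Lower scores indicate better keywords."""
    return prod(token_scores) / (kw_tf * (1 + sum(token_scores)))
```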
Gets the annotation column name that is going to be generated
Separate sentences given tokens with sentence metadata
Tokens with sentence metadata
separated sentences
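Conceptually this is a group-by on the sentence index carried in each token's metadata. An illustrative Python sketch, assuming tokens arrive as (token, sentence_index) pairs:

```python
def separate_sentences(tokens_with_meta):
    """Group (token, sentence_index) pairs into per-sentence token lists,
    ordered by sentence index."""
    sentences = {}
    for token, sent_idx in tokens_with_meta:
        sentences.setdefault(sent_idx, []).append(token)
    return [sentences[i] for i in sorted(sentences)]
```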
Input Annotator Types: TOKEN
Columns that contain annotations necessary to run this annotator. AnnotatorType is used as both input and output columns if not specified
Maximum N-grams a keyword should have (Default: 3).
Minimum N-grams a keyword should have (Default: 1).
Number of keywords to extract (Default: 30).
Output Annotator Types: KEYWORD
Execute the YAKE algorithm for each sentence
token array to annotate
annotated token array
Overrides required annotators column if different than default
Overrides annotation column name when transforming
The words to be filtered out (Default: English stop words from MLlib)
Threshold to filter keywords (Default: -1, meaning filtering is disabled). Each keyword will be given a score greater than 0 (the lower the score, the better the keyword); this parameter sets the upper bound for the keyword score.
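The filtering rule can be sketched as follows; this is an illustrative Python snippet with hypothetical names, not the annotator's code:

```python
def filter_by_threshold(scored_keywords, threshold):
    """Keep (keyword, score) pairs whose score is at most `threshold`;
    a negative threshold (the default, -1) disables filtering."""
    if threshold < 0:
        return scored_keywords
    return [(kw, s) for kw, s in scored_keywords if s <= threshold]
```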
Given requirements are met, this applies the ML transformation within a Pipeline or stand-alone. The output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record the annotations' structural information outside of their content.
Dataset[Row]
Requirement for pipeline transformation validation. It is called on fit()
Takes a Dataset and checks to see if all the required annotation types are present.
to be validated
True if all the required types are present, else false
Window size for co-occurrence (Default: 3).
YAKE will construct a co-occurrence matrix. You can set the window size for the co-occurrence matrix construction with this parameter.
Example: windowSize=2 will look at the two words to both the left and the right of a candidate word.
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
Required input and expected output annotator types
YAKE is an unsupervised, corpus-independent, domain- and language-independent, single-document keyword extraction algorithm.
Extracting keywords from texts has become a challenge for individuals and organizations as information grows in complexity and size. The need to automate this task so that text can be processed in a timely and adequate manner has led to the emergence of automatic keyword extraction tools. YAKE is a novel feature-based system for multi-lingual keyword extraction which supports texts of different sizes, domains, or languages. Unlike other approaches, YAKE does not rely on dictionaries or thesauri, nor is it trained against any corpora. Instead, it follows an unsupervised approach which builds upon features extracted from the text, thus making it applicable to documents written in different languages without the need for further knowledge. This can be beneficial for a large number of tasks and a plethora of situations where access to training corpora is either limited or restricted. The algorithm makes use of the position of a sentence and token. Therefore, to use the annotator, the text should first be sent through a Sentence Boundary Detector and then a tokenizer.
See the parameters section for tweakable parameters to get the best result from the annotator.
Note that each keyword will be given a keyword score greater than 0 (the lower the score, the better the keyword). Therefore, to filter the keywords, an upper bound for the score can be set with setThreshold.
For extended examples of usage, see the Spark NLP Workshop and the YakeTestSpec.
Sources:
Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. In Information Sciences Journal. Elsevier, Vol 509, pp 257-289.
Paper abstract:
As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a light-weight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experimental results carried out on top of twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.
Example