com.johnsnowlabs.nlp.annotators.sbd.pragmatic
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
Uses the model interface to prepare the context and extract the boundaries.
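The two phases named above can be illustrated with a self-contained sketch. This is not the Spark NLP implementation; the object name and abbreviation list are assumptions for illustration only:

```scala
// Minimal sketch (NOT the actual Spark NLP implementation) of the idea:
// prepare the context by protecting abbreviation periods, then extract
// boundaries on the remaining sentence-ending punctuation.
object PragmaticSketch {
  // Hypothetical abbreviation list, for illustration only
  private val abbreviations = Seq("Dr.", "Mr.", "Mrs.", "etc.")

  def detect(text: String): Seq[String] = {
    // Prepare: mask the period of each known abbreviation
    val masked = abbreviations.foldLeft(text) { (t, abbr) =>
      t.replace(abbr, abbr.dropRight(1) + "\u0001")
    }
    // Extract: split after . ! ? followed by whitespace, then unmask
    masked
      .split("(?<=[.!?])\\s+")
      .map(_.replace('\u0001', '.').trim)
      .filter(_.nonEmpty)
      .toSeq
  }
}
```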
Annotations that correspond to inputAnnotationCols generated by previous annotators, if any.
One-to-many annotation relationship, depending on how many sentences there are in the document.
Requirement for annotator copies.
Characters used to explicitly mark sentence bounds.
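As a rough illustration of how explicitly declared bounds could drive splitting (the object and method names are assumptions, not the real API), the text can simply be split on the declared bound strings:

```scala
// Illustrative sketch: split text on explicitly declared bound strings,
// analogous in spirit to the customBounds parameter. Names are assumptions.
object CustomBoundsSketch {
  def split(text: String, bounds: Seq[String]): Seq[String] = {
    // Quote each bound so regex metacharacters are treated literally
    val pattern = bounds.map(java.util.regex.Pattern.quote).mkString("|")
    text.split(pattern).map(_.trim).filter(_.nonEmpty).toSeq
  }
}
```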
Whether to take lists into consideration during sentence detection.
Wraps annotate to happen inside SparkSQL user-defined functions in order to act with org.apache.spark.sql.Column.
UDF to be applied to inputCols, using this annotator's annotate function as part of the ML transformation.
Whether to explode each sentence into a different row, for better parallelization. Defaults to false.
Override for additional custom schema checks
Custom sentence separator text
Whether to take lists into consideration at sentence detection. Defaults to true.
Whether to split sentences into different Dataset rows. Useful for higher parallelism in fat rows. Defaults to false.
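The one-to-many effect of exploding sentences can be sketched without Spark; at scale this is what Spark SQL's explode does on the annotation array. The names below are illustrative, not the library's types:

```scala
// Sketch of the explodeSentences idea: one (docId, sentences) record
// becomes one output row per sentence. Illustrative only.
object ExplodeSketch {
  case class SentenceRow(docId: Int, sentence: String)

  def explode(docs: Seq[(Int, Seq[String])]): Seq[SentenceRow] =
    docs.flatMap { case (id, sentences) =>
      sentences.map(SentenceRow(id, _))
    }
}
```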
Input annotation columns currently used.
Get the maximum allowed length for each sentence
Get the minimum allowed length for each sentence
Gets the name of the annotation column to be generated.
Length at which sentences will be forcibly split
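One plausible reading of a forcible split is chunking any over-long sentence into fixed-size pieces; sketched under that assumption (not the library's exact behavior, and the names are illustrative):

```scala
// Sketch: forcibly split a sentence into chunks of at most splitLength
// characters. An assumption about behavior, not the actual implementation.
object SplitLengthSketch {
  def forceSplit(sentence: String, splitLength: Int): Seq[String] =
    sentence.grouped(splitLength).toList
}
```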
Whether to consider abbreviation strategies for better accuracy but slower performance. Defaults to true.
Use only custom bounds without considering those of the Pragmatic Segmenter. Defaults to false. Requires customBounds to be set.
Input annotator type: DOCUMENT
Columns that contain annotations necessary to run this annotator. AnnotatorType is used as both input and output column if not specified.
Set the maximum allowed length for each sentence
Set the minimum allowed length for each sentence
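A plausible interpretation of the min/max length parameters is that detected sentences whose character length falls outside the allowed range are dropped; sketched under that assumption with illustrative names:

```scala
// Sketch: keep only sentences whose character length lies within
// [minLength, maxLength]. Behavior is an assumption for illustration.
object LengthFilterSketch {
  def filterByLength(sentences: Seq[String], minLength: Int, maxLength: Int): Seq[String] =
    sentences.filter(s => s.length >= minLength && s.length <= maxLength)
}
```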
Output annotator type: DOCUMENT
Custom sentence separator text
Whether to take lists into consideration at sentence detection. Defaults to true.
Whether to split sentences into different Dataset rows. Useful for higher parallelism in fat rows. Defaults to false.
Overrides the required annotators column if different from the default.
Set the maximum allowed length for each sentence
Set the minimum allowed length for each sentence
Overrides annotation column name when transforming
Length at which sentences will be forcibly split
Whether to consider abbreviation strategies for better accuracy but slower performance. Defaults to true.
Use only custom bounds without considering those of the Pragmatic Segmenter. Defaults to false. Requires customBounds to be set.
Length at which sentences will be forcibly split.
Given the requirements are met, this applies the ML transformation within a Pipeline or standalone. Output annotations will be generated as a new column; previous annotations are still available separately. Metadata is built at the schema level to record the annotations' structural information outside of their content.
Dataset[Row]
Requirement for pipeline transformation validation. It is called on fit().
Internal constructor requirement for serialization of params.
Whether to apply abbreviations at sentence detection.
Whether to only utilize custom bounds for sentence detection.
Takes a Dataset and checks whether all the required annotation types are present.
Dataset to be validated.
True if all the required types are present, else false
Required input and expected output annotator types
Annotator that detects sentence boundaries using any provided approach
See https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sbd/pragmatic for further reference on how to use this API