edu.arizona.sista.processors.shallownlp

ShallowNLPProcessor

class ShallowNLPProcessor extends Processor

A Processor using only shallow analysis: tokenization, lemmatization, POS tagging, and NER, all implemented using Stanford's CoreNLP tools. User: mihais. Date: 2/25/15
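A minimal usage sketch (assuming the processors jar and its CoreNLP dependencies are on the classpath; the `Sentence` fields `words`, `tags`, and `entities` are from the processors library):

```scala
import edu.arizona.sista.processors.shallownlp.ShallowNLPProcessor

// Build the processor; internStrings defaults to true
val proc = new ShallowNLPProcessor()

// Run the full shallow pipeline: tokenization, POS tagging,
// lemmatization, and NER
val doc = proc.annotate("John Smith visited Tucson.")

for (sentence <- doc.sentences) {
  println(sentence.words.mkString(" "))                     // tokens
  sentence.tags.foreach(t => println(t.mkString(" ")))      // POS tags
  sentence.entities.foreach(e => println(e.mkString(" ")))  // NER labels
}
```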

Linear Supertypes
Processor, AnyRef, Any

Instance Constructors

  1. new ShallowNLPProcessor(internStrings: Boolean = true)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def annotate(doc: Document): Document

    Definition Classes
    Processor
  5. def annotate(text: String, keepText: Boolean = false): Document

    Definition Classes
    Processor
  6. def annotateFromSentences(sentences: Iterable[String], keepText: Boolean = false): Document

    Definition Classes
    Processor
  7. def annotateFromTokens(sentences: Iterable[Iterable[String]], keepText: Boolean = false): Document

    Definition Classes
    Processor
  8. def arrayOrNone[T](b: ArrayBuffer[T])(implicit arg0: ClassTag[T]): Option[Array[T]]

  9. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  10. def basicSanityCheck(doc: Document, checkAnnotation: Boolean = true): Option[Annotation]

  11. lazy val chunker: CRFChunker

  12. def chunking(doc: Document): Unit

    Shallow parsing; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. def discourse(doc: Document): Unit

    Discourse parsing; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  15. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  17. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  18. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  19. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  20. def in(s: String): String

  21. val internStrings: Boolean

  22. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  23. def labelSemanticRoles(doc: Document): Unit

    SRL; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  24. def lemmatize(doc: Document): Unit

    Lemmatization; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  25. lazy val lemmatizer: StanfordCoreNLP

  26. def mkChunker: CRFChunker

  27. def mkDocument(text: String, keepText: Boolean): Document

    Constructs a document of tokens from free text; includes sentence splitting and tokenization

    Definition Classes
    ShallowNLPProcessor → Processor
  28. def mkDocumentFromSentences(sentences: Iterable[String], keepText: Boolean, charactersBetweenSentences: Int = 1): Document

    Constructs a document of tokens from an array of untokenized sentences

    Definition Classes
    ShallowNLPProcessor → Processor
  29. def mkDocumentFromTokens(sentences: Iterable[Iterable[String]], keepText: Boolean, charactersBetweenSentences: Int = 1, charactersBetweenTokens: Int = 1): Document

    Constructs a document of tokens from an array of tokenized sentences

    Definition Classes
    ShallowNLPProcessor → Processor
  30. def mkLemmatizer: StanfordCoreNLP

  31. def mkNer: StanfordCoreNLP

  32. def mkPosTagger: StanfordCoreNLP

  33. def mkSentence(annotation: CoreMap): Sentence

  34. def mkTokenizerWithSentenceSplitting: StanfordCoreNLP

  35. def mkTokenizerWithoutSentenceSplitting: StanfordCoreNLP

  36. def namedEntitySanityCheck(doc: Document): Option[Annotation]

  37. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  38. lazy val ner: StanfordCoreNLP

  39. final def notify(): Unit

    Definition Classes
    AnyRef
  40. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  41. def parse(doc: Document): Unit

    Syntactic parsing; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  42. lazy val posTagger: StanfordCoreNLP

  43. def postprocessTags(annotation: Annotation): Unit

    Hook to allow postprocessing of CoreNLP POS tagging *in place*, overwriting the original POS tags. This is useful for domain-specific corrections.

    annotation

    The CoreNLP annotation

  44. def postprocessTokens(sentence: CoreMap): List[CoreLabel]

    Attributes
    protected
  45. def postprocessTokens(originalTokens: Array[CoreLabel]): Array[CoreLabel]

    Hook to allow postprocessing of CoreNLP tokenization. This is useful for domain-specific corrections, such as the ones in BioNLPProcessor. If you change the tokens, make sure to store them back in the sentence!

    originalTokens

    The original tokens produced by CoreNLP

    returns

    The modified tokens

  46. def preprocessSentences(origSentences: Iterable[String]): Iterable[String]

    Runs preprocessText on each sentence

    Definition Classes
    Processor
  47. def preprocessText(origText: String): String

    Hook to allow preprocessing of the input text. This is useful for domain-specific corrections, such as the ones in BioNLPProcessor, where we remove Table and Fig references. Note that this is allowed to change character offsets.

    origText

    The original input text

    returns

    The preprocessed text

    Definition Classes
    Processor
  48. def preprocessTokens(origSentences: Iterable[Iterable[String]]): Iterable[Iterable[String]]

    Runs preprocessText on each token

    Definition Classes
    Processor
  49. def recognizeNamedEntities(doc: Document): Unit

    NER; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  50. def resolveCoreference(doc: Document): Unit

    Coreference resolution; modifies the document in place

    Definition Classes
    ShallowNLPProcessor → Processor
  51. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  52. def tagPartsOfSpeech(doc: Document): Unit

    Part-of-speech tagging; modifies the document in place, which is not too elegant. But there are two reasons for this: (a) some annotators (e.g., Stanford's CoreNLP) require some state (i.e., their Annotation object) to be passed between operations; (b) this is more efficient during annotate(), where all the possible operations are chained.

    Definition Classes
    ShallowNLPProcessor → Processor
  53. def toString(): String

    Definition Classes
    AnyRef → Any
  54. lazy val tokenizerWithSentenceSplitting: StanfordCoreNLP

  55. lazy val tokenizerWithoutSentenceSplitting: StanfordCoreNLP

  56. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  57. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
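Because these operations modify the document in place, a document can also be annotated incrementally instead of through annotate(). A sketch, assuming the processors library is on the classpath and following the pass order used by annotate():

```scala
import edu.arizona.sista.processors.shallownlp.ShallowNLPProcessor

val proc = new ShallowNLPProcessor()

// Tokenize and sentence-split only; no further annotation yet
val doc = proc.mkDocument("Water is wet. Fire is hot.", keepText = false)

// Each pass mutates doc in place; lemmatization and NER expect
// POS tags to be present, so the order matters
proc.tagPartsOfSpeech(doc)
proc.lemmatize(doc)
proc.recognizeNamedEntities(doc)
```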

Inherited from Processor

Inherited from AnyRef

Inherited from Any

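The preprocessText hook can be overridden in a subclass for domain-specific cleanup, in the spirit of BioNLPProcessor. A hypothetical example (the class name and regex are illustrative, not from the library):

```scala
import edu.arizona.sista.processors.shallownlp.ShallowNLPProcessor

// Hypothetical subclass: strip figure references before tokenization.
// Note: this changes character offsets, which preprocessText allows.
class FigureStrippingProcessor extends ShallowNLPProcessor {
  override def preprocessText(origText: String): String =
    origText.replaceAll("""\(Fig\. \d+\)""", " ")
}

val proc = new FigureStrippingProcessor
val cleaned = proc.preprocessText("Results improved (Fig. 2) over baseline.")
```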