Batch size (Default: 64)
ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()
Dropout coefficient (Default: 0.5f)
Whether to output to the annotators' log folder (Default: false)
Input annotation columns currently in use
Gets the annotation column name that will be generated
Input Annotator Types: SENTENCE_EMBEDDINGS
Columns that contain annotations necessary to run this annotator. AnnotatorType is used for both input and output columns if not specified.
Column with the label for each document
Learning Rate (Default: 5e-3f)
Maximum number of epochs to train (Default: 10)
Output Annotator Types: CATEGORY
Folder path to save training logs (Default: "")
Random seed for shuffling the dataset
Overrides the required annotators column if different from the default
Overrides the annotation column name when transforming
Random seed
The minimum threshold for the final result; otherwise the result will be either neutral or the value set in thresholdLabel (Default: 0.6f)
In case the score is less than threshold, the label to assign instead (Default: "neutral")
Requirement for pipeline transformation validation. It is called on fit().
Required UID for storing the annotator to disk
Takes a Dataset and checks whether all the required annotation types are present.
Dataset to be validated
True if all the required types are present, else false
Choose the proportion of the training dataset to be validated against the model on each epoch (Default: 0.0f). The value should be between 0.0 and 1.0; by default it is 0.0 and off.
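For illustration, the following sketch enables per-epoch validation. It assumes Spark NLP is on the classpath; the setter names follow the Spark NLP Scala API, and the column names are illustrative:

```scala
import com.johnsnowlabs.nlp.annotators.classifier.dl.SentimentDLApproach

// Hold out 20% of the training set for validation on each epoch.
// Column names here are assumptions for the sketch.
val approach = new SentimentDLApproach()
  .setInputCols("sentence_embeddings")
  .setOutputCol("sentiment")
  .setLabelColumn("label")
  .setValidationSplit(0.2f)
```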
Level of verbosity during training (Default: Verbose.Silent.id)
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
Required input and expected output annotator types
Trains a SentimentDL, an annotator for multi-class sentiment analysis.
In natural language processing, sentiment analysis is the task of classifying the affective state or subjective view of a text. A common example is classifying whether a product review or a tweet can be interpreted positively or negatively.
For the instantiated/pretrained models, see SentimentDLModel.
Notes:
- This annotator accepts a label column of a single item in either type of String, Int, Float, or Double. So positive sentiment can be expressed as either "positive" or 0, negative sentiment as "negative" or 1.
- UniversalSentenceEncoder, BertSentenceEmbeddings, or SentenceEmbeddings can be used for the inputCol.
For extended examples of usage, see the Spark NLP Workshop and the SentimentDLTestSpec.
Example
In this example, sentiment.csv is a CSV file containing a text column and a label column. The model can then be trained in a Spark ML pipeline.
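As a sketch of the training pipeline (assuming Spark NLP on the classpath, an active SparkSession `spark`, and illustrative file path and column names):

```scala
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.embeddings.UniversalSentenceEncoder
import com.johnsnowlabs.nlp.annotators.classifier.dl.SentimentDLApproach
import org.apache.spark.ml.Pipeline

// Read the training data; the path and "text"/"label" column names are assumptions.
val smallCorpus = spark.read.option("header", "true").csv("sentiment.csv")

// Turn raw text into DOCUMENT annotations.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

// Produce the SENTENCE_EMBEDDINGS input this annotator requires.
val useEmbeddings = UniversalSentenceEncoder.pretrained()
  .setInputCols("document")
  .setOutputCol("sentence_embeddings")

// Train the sentiment classifier on the labeled embeddings.
val docClassifier = new SentimentDLApproach()
  .setInputCols("sentence_embeddings")
  .setOutputCol("sentiment")
  .setLabelColumn("label")

val pipeline = new Pipeline().setStages(
  Array(documentAssembler, useEmbeddings, docClassifier)
)

val pipelineModel = pipeline.fit(smallCorpus)
```

The fitted pipelineModel produces a SentimentDLModel that can be used for inference with transform().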
MultiClassifierDLApproach for general multi-class classification
ClassifierDLApproach for general single-class classification