Publication | Closed Access
Two-Level LSTM for Sentiment Analysis With Lexicon Embedding and Polar Flipping
35 Citations · 32 References · Year: 2020
Keywords: LLM Fine-tuning, Engineering, Machine Learning, Cross-lingual Representation, Multimodal Sentiment Analysis, Sentiment Polarity, Sentiment Analysis, Lexicon Embedding, Corpus Linguistics, Text Mining, Word Embeddings, Natural Language Processing, Polar Flipping, Data Science, Computational Linguistics, Two-Level LSTM, Language Studies, Content Analysis, Machine Translation, NLP Task, Deep Learning, Semantic Parsing, Linguistics, POS Tagging
Sentiment analysis is a key component in various text mining applications. Numerous sentiment classification techniques, including conventional and deep-learning-based methods, have been proposed in the literature. Most existing methods assume that a high-quality training set is given. Nevertheless, constructing a training set with highly accurate labels is challenging in real applications, because text samples usually contain complex sentiment representations and their annotation is subjective. We address this challenge by leveraging a new labeling strategy and a two-level long short-term memory (LSTM) network to construct a sentiment classifier. Lexical cues, such as polar and negation words, are useful for sentiment analysis and have been utilized in conventional studies. A new encoding strategy, namely ρ-hot encoding, is proposed to alleviate the drawbacks of one-hot encoding and thus effectively incorporate these lexical cues. Moreover, the sentiment polarity of a word may change across sentences due to label noise or context. A flipping model is proposed to capture the polar flipping of words in a sentence. We compile three Chinese datasets on the basis of our labeling strategy. Experiments demonstrate that the proposed method outperforms state-of-the-art algorithms on both benchmark English data and our compiled Chinese data.
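The abstract does not give the exact formulation of ρ-hot encoding, but one plausible reading is that the hard 1 of one-hot encoding is replaced by a tunable weight ρ for words found in a sentiment lexicon, so that lexical cues carry graded rather than binary importance. The sketch below illustrates that idea; the function name, the `lexicon` parameter, and the specific weighting rule are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def one_hot(index, vocab_size):
    """Standard one-hot vector: a hard 1 at the word's index."""
    v = np.zeros(vocab_size)
    v[index] = 1.0
    return v

def rho_hot(index, vocab_size, rho=0.7, lexicon=frozenset()):
    """Hypothetical rho-hot sketch: words in a sentiment lexicon
    (e.g., polar or negation words) receive the weight rho instead
    of a hard 1, letting downstream layers treat lexical cues as
    graded signals. This weighting rule is an assumption, not the
    paper's exact formulation."""
    v = np.zeros(vocab_size)
    v[index] = rho if index in lexicon else 1.0
    return v

# A lexicon word (index 2) gets weight rho; a plain word keeps 1.0.
lex = frozenset({2})
print(rho_hot(2, 5, rho=0.7, lexicon=lex))  # weight 0.7 at index 2
print(rho_hot(1, 5, rho=0.7, lexicon=lex))  # weight 1.0 at index 1
```

In practice such vectors would feed the first level of the two-level LSTM in place of plain one-hot inputs.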
| Year | Citations |
|---|---|
| 1997 | 93.8K |
| 2014 | 23.7K |
| 2014 | 13.5K |
| 2004 | 7.6K |
| 2000 | 5.3K |
| 2014 | 3.5K |
| 2011 | 3.2K |
| 2010 | 2.7K |
| 2016 | 2.3K |
| 2005 | 2.1K |