Emotion Classification in Texts over Graph Neural Networks: Semantic Representation is better than Syntactic


Ameer I., Bolucu N., Sidorov G., Can B.

IEEE Access, vol. 11, pp. 56921-56934, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 11
  • Publication Date: 2023
  • DOI: 10.1109/access.2023.3281544
  • Journal Name: IEEE Access
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Page Numbers: pp. 56921-56934
  • Keywords: dependency, Emotion classification, GAT, semantic, social media, syntactic, UCCA
  • Hacettepe University Affiliated: Yes

Abstract

Social media platforms provide a huge amount of user-generated content that can be processed to extract information about users’ emotions. This has numerous benefits, such as understanding how individuals feel about certain news or events. Categorizing emotions in social media text can be challenging, especially when several different emotions must be identified in a short text, as in a multi-label classification problem. Most previous work on emotion classification has focused on deep neural networks such as Convolutional Neural Networks and Recurrent Neural Networks; however, none of these approaches has used semantic and syntactic knowledge to classify multiple emotions in a text. In this study, we propose semantic- and syntactic-aware graph attention networks to classify texts with multiple emotion labels. We integrate semantic information into the graph attention network in the form of Universal Conceptual Cognitive Annotation (UCCA) graphs and syntactic information in the form of dependency trees. Extensive experiments show that our two models, UCCA-GAT (accuracy = 71.2) and Dep-GAT (accuracy = 68.7), outperform the state of the art on both the challenging SemEval-2018 E-c: Detecting Emotions (multi-label classification) English dataset (accuracy = 58.8) and the GoEmotions dataset (accuracy = 65.9).
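
The sketch below is not the authors' code; it is a minimal illustration, assuming PyTorch Geometric, of the general idea described in the abstract: running a graph attention network over a linguistically motivated graph (a dependency tree for Dep-GAT, a UCCA graph for UCCA-GAT) and predicting multiple emotion labels with a sigmoid/BCE objective. The embedding size, hidden size, 11-label output (matching SemEval-2018 E-c), and the toy edge list are placeholder assumptions, not details taken from the paper.

```python
# Minimal sketch of a graph-attention emotion classifier (not the paper's implementation).
# Assumes torch and torch_geometric are installed; node features and edges are placeholders.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool


class EmotionGAT(nn.Module):
    def __init__(self, in_dim=300, hidden_dim=128, num_labels=11, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads)          # attention over graph edges
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)  # second attention layer
        self.classifier = nn.Linear(hidden_dim, num_labels)           # one logit per emotion

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        g = global_mean_pool(h, batch)      # pool token nodes into a sentence representation
        return self.classifier(g)           # raw logits; apply sigmoid at inference time


# Toy usage: a 4-token sentence whose edges mimic a (bidirectional) dependency tree.
x = torch.randn(4, 300)                                    # placeholder word embeddings
edge_index = torch.tensor([[0, 1, 1, 2, 1, 3],
                           [1, 0, 2, 1, 3, 1]])            # head<->dependent arcs
batch = torch.zeros(4, dtype=torch.long)                   # all tokens belong to graph 0

model = EmotionGAT()
logits = model(x, edge_index, batch)                       # shape: [1, 11]
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 11))  # multi-label objective
```

Swapping the dependency-derived `edge_index` for edges taken from a UCCA parse is what would distinguish a UCCA-GAT-style model from a Dep-GAT-style one in this sketch; the network itself stays unchanged.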