DeBERTa-GRU: Sentiment Analysis for Large Language Model

Adel Assiri, Abdu Gumaei, Faisal Mehmood, Touqeer Abbas, Sami Ullah

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Modern technological advancements have made social media an essential component of daily life, allowing individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or another personal emotion in order to understand its sentiment context. Sentiment analysis is essential in business and society because it informs strategic decision-making, but it involves challenges arising from lexical variation, unlabeled datasets, and long-distance correlations in text. Sequence models process tokens sequentially, which increases execution time, whereas Transformer models reduce computation time through parallel processing. This study uses a hybrid deep learning strategy that combines the strengths of Transformer and sequence models while avoiding their limitations. In particular, the proposed model integrates Decoding-enhanced BERT with disentangled attention (DeBERTa) and the Gated Recurrent Unit (GRU) for sentiment analysis. DeBERTa maps words into a compact, semantic word-embedding space, and the GRU model captures long-distance contextual semantics. The proposed hybrid model achieves an F1-score of 97% on the Twitter Large Language Model (LLM) dataset, considerably higher than recent techniques.
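The hybrid design described above can be sketched in miniature: contextual token embeddings (the role DeBERTa plays in the paper) are fed step by step through a GRU, whose gated recurrence accumulates a single sentence representation for classification. The following numpy-only sketch is illustrative, not the authors' implementation; the random `token_embeddings` tensor is a hypothetical stand-in for DeBERTa's output, and all dimensions and names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell operating on one embedding vector per time step."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        # Stacked weights for the update (z), reset (r), and candidate gates.
        self.W = rng.uniform(-scale, scale, (3, hidden_dim, input_dim))
        self.U = rng.uniform(-scale, scale, (3, hidden_dim, hidden_dim))
        self.b = np.zeros((3, hidden_dim))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])       # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])       # reset gate
        h_tilde = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        # Interpolate between the old state and the candidate state.
        return (1.0 - z) * h + z * h_tilde

def encode_sequence(cell, embeddings):
    """Run the GRU over a sequence of token embeddings; return the final state."""
    h = np.zeros(cell.b.shape[1])
    for x in embeddings:
        h = cell.step(x, h)
    return h

# Hypothetical stand-in for DeBERTa output: 6 tokens, 768-dim embeddings.
token_embeddings = np.random.default_rng(1).normal(size=(6, 768))
cell = GRUCell(input_dim=768, hidden_dim=128)
sentence_vector = encode_sequence(cell, token_embeddings)
print(sentence_vector.shape)
```

In a full pipeline, `sentence_vector` would feed a softmax classification head over the sentiment labels; in practice one would use a pretrained DeBERTa encoder (e.g. via the Hugging Face `transformers` library) rather than random embeddings.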

Original language: English
Pages (from-to): 4219-4236
Number of pages: 18
Journal: Computers, Materials and Continua
Volume: 79
Issue number: 3
DOIs
State: Published - 2024

Keywords

  • DeBERTa
  • GRU
  • LSTM
  • Naive Bayes
  • large language model
  • sentiment analysis

