Title
EMOTION DETECTION from TEXT /
Author
Zidan, Mahinda Mahmoud Samy Ahmed Zaki.
Committee
Researcher / Mahinda Mahmoud Samy Ahmed Zaki Zidan
Supervisor / Ibrahim Mahmoud El-Henawy
Supervisor / Ahmed Raafat Abbas
Supervisor / Mahmoud Samy Othman
Subject
Department of Computer Science.
Publication Date
2022.
Number of Pages
87 p.
Language
English
Degree
Master's
Specialization
Computer Science Applications
Publisher
Award Date
5/5/2022
Award Location
Zagazig University - Faculty of Computers and Informatics - Computer Science
Table of Contents
Only 14 pages (out of 102) are available for public view.
Abstract

Due to the widespread use of social media in daily life, sentiment analysis has become an important field in pattern recognition and Natural Language Processing (NLP). In this field, users’ feedback on a specific issue is evaluated and analyzed. Detecting emotions within text is therefore considered one of the important challenges of current NLP research. Emotions have been widely studied in psychology and behavioral science, as they are an integral part of human nature. Emotions describe a state of mind comprising distinct behaviors, feelings, thoughts, and experiences. This work provides a comprehensive overview of the prominent machine learning models applied in emotion analysis. It explores various emotion analysis taxonomies as well as the constraints of prevalent deep learning architectures. The thesis also reviews previous contributions in emotion analysis, with a focus on deep learning methodologies and the most common datasets, and presents a comprehensive comparison between several emotion analysis models. This work demonstrates the effectiveness of learning-based techniques in tackling emotion analysis challenges.
The main objective of this work is to propose a new model, named BERT-CNN, to detect emotions from text. The model combines Bidirectional Encoder Representations from Transformers (BERT) with a Convolutional Neural Network (CNN) for text classification. BERT is used to train the semantic word-representation language model: according to the word context, a semantic vector is dynamically generated and then fed into the CNN to predict the output. The results of a comparative study show that the BERT-CNN model surpasses the state-of-the-art baselines reported in the literature on the SemEval-2019 Task 3 and ISEAR datasets. The BERT-CNN model achieves an accuracy of 94.7% and an F1-score of 94% on the SemEval-2019 Task 3 dataset, and an accuracy of 75.8% and an F1-score of 76% on the ISEAR dataset.
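The pipeline described above — contextual vectors from BERT fed into a CNN for classification — can be illustrated with a minimal, self-contained sketch. The BERT encoder is replaced here by a stand-in that emits random token vectors, and all dimensions, filter weights, and class labels are illustrative assumptions, not the thesis implementation:

```python
# Toy sketch of a BERT-CNN pipeline: contextual token vectors (random
# stand-ins for BERT output here) pass through a 1-D convolution over the
# token axis, max-over-time pooling, and a linear classifier.
# All names and dimensions are illustrative assumptions.
import random

random.seed(0)

EMB_DIM = 8      # stand-in for BERT's hidden size (768 in BERT-base)
KERNEL = 3       # convolution window over consecutive tokens
N_FILTERS = 4    # number of convolution filters
N_CLASSES = 4    # e.g. happy / sad / angry / others (SemEval-2019 Task 3)

def fake_bert_encode(tokens):
    """Stand-in for BERT: one EMB_DIM vector per token."""
    return [[random.uniform(-1, 1) for _ in range(EMB_DIM)] for _ in tokens]

def conv_max_pool(seq, filters):
    """1-D convolution over the token sequence + max-over-time pooling."""
    pooled = []
    for w in filters:                      # w holds KERNEL * EMB_DIM weights
        scores = []
        for i in range(len(seq) - KERNEL + 1):
            window = [x for tok in seq[i:i + KERNEL] for x in tok]
            scores.append(sum(a * b for a, b in zip(window, w)))
        pooled.append(max(scores))         # keep the strongest response
    return pooled

# Randomly initialised (untrained) filter and classifier weights.
filters = [[random.uniform(-1, 1) for _ in range(KERNEL * EMB_DIM)]
           for _ in range(N_FILTERS)]
linear = [[random.uniform(-1, 1) for _ in range(N_FILTERS)]
          for _ in range(N_CLASSES)]

tokens = "i am so happy today".split()
features = conv_max_pool(fake_bert_encode(tokens), filters)
logits = [sum(f * w for f, w in zip(features, row)) for row in linear]
predicted = logits.index(max(logits))      # index of the argmax class
print(predicted)
```

In a real implementation the stand-in encoder would be replaced by a pretrained BERT model, the weights would be learned end-to-end, and multiple kernel widths would typically be pooled and concatenated before the classifier; the sketch only mirrors the data flow the abstract describes.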