TY - JOUR
T1 - FERDCNN
T2 - an efficient method for facial expression recognition through deep convolutional neural networks
AU - Rashad, Metwally
AU - Alebiary, Doaa
AU - Aldawsari, Mohammed
AU - Elsawy, Ahmed
AU - AbuEl-Atta, Ahmed H.
N1 - Publisher Copyright:
Copyright 2024 Rashad et al. Distributed under Creative Commons CC-BY 4.0.
PY - 2024
Y1 - 2024
N2 - Facial expression recognition (FER) has recently caught the research community's attention because it affects many real-life applications. Multiple studies have focused on automatic FER, most using a machine learning methodology; nevertheless, FER remains a difficult and exciting problem in computer vision. Deep learning has recently drawn increased attention as a solution to several practical issues, including facial expression recognition. This article introduces an efficient method for FER (FERDCNN), verified on five different pre-trained deep CNN (DCNN) models (AlexNet, GoogleNet, ResNet-18, ResNet-50, and ResNet-101). In the proposed method, the input image is first pre-processed using face detection, resizing, gamma correction, and histogram equalization. Second, the images pass through a DCNN to extract deep features. Finally, support vector machine (SVM) classification and transfer learning are used to classify the generated features. Recent methods were employed to evaluate and compare the performance of the proposed approach on two public standard databases, CK+ and JAFFE, over seven classes of fundamental emotions: anger, disgust, fear, happiness, sadness, and surprise, plus neutral for CK+ and contempt for JAFFE. The suggested method tested four different traditional supervised classifiers with deep features; experiments found that AlexNet excels as a feature extractor while SVM demonstrates superiority as a classifier, with this combination achieving the highest accuracy rates of 99.0% and 95.16% on the CK+ and JAFFE datasets, respectively.
AB - Facial expression recognition (FER) has recently caught the research community's attention because it affects many real-life applications. Multiple studies have focused on automatic FER, most using a machine learning methodology; nevertheless, FER remains a difficult and exciting problem in computer vision. Deep learning has recently drawn increased attention as a solution to several practical issues, including facial expression recognition. This article introduces an efficient method for FER (FERDCNN), verified on five different pre-trained deep CNN (DCNN) models (AlexNet, GoogleNet, ResNet-18, ResNet-50, and ResNet-101). In the proposed method, the input image is first pre-processed using face detection, resizing, gamma correction, and histogram equalization. Second, the images pass through a DCNN to extract deep features. Finally, support vector machine (SVM) classification and transfer learning are used to classify the generated features. Recent methods were employed to evaluate and compare the performance of the proposed approach on two public standard databases, CK+ and JAFFE, over seven classes of fundamental emotions: anger, disgust, fear, happiness, sadness, and surprise, plus neutral for CK+ and contempt for JAFFE. The suggested method tested four different traditional supervised classifiers with deep features; experiments found that AlexNet excels as a feature extractor while SVM demonstrates superiority as a classifier, with this combination achieving the highest accuracy rates of 99.0% and 95.16% on the CK+ and JAFFE datasets, respectively.
KW - Deep convolutional neural networks
KW - Facial expression recognition
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85207928161&partnerID=8YFLogxK
U2 - 10.7717/peerj-cs.2272
DO - 10.7717/peerj-cs.2272
M3 - Article
AN - SCOPUS:85207928161
SN - 2376-5992
VL - 10
JO - PeerJ Computer Science
JF - PeerJ Computer Science
M1 - e2272
ER -