TY - JOUR
T1 - Hand Gesture Recognition for Characters Understanding Using Convex Hull Landmarks and Geometric Features
AU - Ansar, Hira
AU - Mudawi, Naif Al
AU - Alotaibi, Saud S.
AU - Alazeb, Abdulwahab
AU - Alabdullah, Bayan Ibrahim
AU - Alonazi, Mohammed
AU - Park, Jeongmin
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023
Y1 - 2023
N2 - With recent advancements, hand gesture recognition is becoming an effective means of communication and is gaining popularity in research. Hearing-impaired people around the world need assistance, yet sign language is understood by only a few people globally, which makes it challenging for untrained people to communicate easily. The research community has trained systems with a variety of models to facilitate communication with hearing-impaired people and to support human-computer interaction. Researchers have detected gestures with varying recognition rates; however, the recognition rate still needs improvement. Images captured via cameras pose multiple issues: light intensity variation makes gesture extraction challenging, extra information in captured images, such as noise, increases computation time, and complex backgrounds make gesture extraction difficult. This paper proposes a novel approach for character detection and recognition. The proposed system comprises five steps for hand gesture recognition. First, images are pre-processed to reduce noise and adjust intensity. The region of interest in the pre-processed images is detected via directional images. After hand extraction, landmarks are extracted via a convex hull. Geometric features are then extracted from each gesture for the proposed hand gesture recognition (HGR) system. The extracted features enable gesture detection and recognition via a Convolutional Neural Network (CNN) classifier. Experiments on the MNIST dataset achieved gesture recognition rates of 93.2% and 90.2% with one-third and two-thirds training-validation splits, respectively. The proposed system was also validated on the ASL dataset, achieving accuracies of 91.6% and 88.14% with one-third and two-thirds training-validation splits, respectively.
The proposed system is also compared with other conventional systems. Emerging domains such as human-computer interaction (HCI), human-robot interaction (HRI), and virtual reality (VR) can apply the proposed system to bridge the communication gap.
AB - With recent advancements, hand gesture recognition is becoming an effective means of communication and is gaining popularity in research. Hearing-impaired people around the world need assistance, yet sign language is understood by only a few people globally, which makes it challenging for untrained people to communicate easily. The research community has trained systems with a variety of models to facilitate communication with hearing-impaired people and to support human-computer interaction. Researchers have detected gestures with varying recognition rates; however, the recognition rate still needs improvement. Images captured via cameras pose multiple issues: light intensity variation makes gesture extraction challenging, extra information in captured images, such as noise, increases computation time, and complex backgrounds make gesture extraction difficult. This paper proposes a novel approach for character detection and recognition. The proposed system comprises five steps for hand gesture recognition. First, images are pre-processed to reduce noise and adjust intensity. The region of interest in the pre-processed images is detected via directional images. After hand extraction, landmarks are extracted via a convex hull. Geometric features are then extracted from each gesture for the proposed hand gesture recognition (HGR) system. The extracted features enable gesture detection and recognition via a Convolutional Neural Network (CNN) classifier. Experiments on the MNIST dataset achieved gesture recognition rates of 93.2% and 90.2% with one-third and two-thirds training-validation splits, respectively. The proposed system was also validated on the ASL dataset, achieving accuracies of 91.6% and 88.14% with one-third and two-thirds training-validation splits, respectively.
The proposed system is also compared with other conventional systems. Emerging domains such as human-computer interaction (HCI), human-robot interaction (HRI), and virtual reality (VR) can apply the proposed system to bridge the communication gap.
KW - ASL sign language
KW - character understanding
KW - CNN
KW - geometric feature
KW - hand gesture recognition
KW - landmark identification
UR - http://www.scopus.com/inward/record.url?scp=85166759934&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3300712
DO - 10.1109/ACCESS.2023.3300712
M3 - Article
AN - SCOPUS:85166759934
SN - 2169-3536
VL - 11
SP - 82065
EP - 82078
JO - IEEE Access
JF - IEEE Access
ER -