TY - JOUR
T1 - Vision Sensor for Automatic Recognition of Human Activities via Hybrid Features and Multi-Class Support Vector Machine
AU - Kamal, Saleha
AU - Alhasson, Haifa F.
AU - Alnusayri, Mohammed
AU - Alatiyyah, Mohammed
AU - Aljuaid, Hanan
AU - Jalal, Ahmad
AU - Liu, Hui
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/1
Y1 - 2025/1
N2 - Over recent years, automated Human Activity Recognition (HAR) has attracted considerable attention from researchers due to its widespread applications in surveillance systems, healthcare environments, and many other domains. This has led researchers to develop coherent and robust systems that efficiently perform HAR. Although many efficient systems have been developed to date, many issues remain to be addressed. Several factors contribute to the complexity of the task and make human activities more challenging to detect: (i) poor lighting conditions; (ii) different viewing angles; (iii) intricate clothing styles; (iv) diverse activities with similar gestures; and (v) limited availability of large datasets. However, through effective feature extraction, we can develop resilient systems that achieve higher accuracy. During feature extraction, we aim to extract unique key body points and full-body features that exhibit distinct attributes for each activity. Our proposed system introduces an innovative approach to identifying human activities in outdoor and indoor settings by extracting effective spatio-temporal features and classifying them with a Multi-Class Support Vector Machine, which enhances the model’s ability to accurately identify activity classes. The experimental findings show that our model outperforms others in terms of classification accuracy and generalization, demonstrating its effectiveness on benchmark datasets. Various performance metrics, including mean recognition accuracy, precision, F1 score, and recall, are used to assess the effectiveness of our model. The assessment findings show a remarkable recognition rate of around 88.61%, 87.33%, 86.5%, and 81.25% on the BIT-Interaction dataset, UT-Interaction dataset, NTU RGB + D 120 dataset, and PKUMMD dataset, respectively.
AB - Over recent years, automated Human Activity Recognition (HAR) has attracted considerable attention from researchers due to its widespread applications in surveillance systems, healthcare environments, and many other domains. This has led researchers to develop coherent and robust systems that efficiently perform HAR. Although many efficient systems have been developed to date, many issues remain to be addressed. Several factors contribute to the complexity of the task and make human activities more challenging to detect: (i) poor lighting conditions; (ii) different viewing angles; (iii) intricate clothing styles; (iv) diverse activities with similar gestures; and (v) limited availability of large datasets. However, through effective feature extraction, we can develop resilient systems that achieve higher accuracy. During feature extraction, we aim to extract unique key body points and full-body features that exhibit distinct attributes for each activity. Our proposed system introduces an innovative approach to identifying human activities in outdoor and indoor settings by extracting effective spatio-temporal features and classifying them with a Multi-Class Support Vector Machine, which enhances the model’s ability to accurately identify activity classes. The experimental findings show that our model outperforms others in terms of classification accuracy and generalization, demonstrating its effectiveness on benchmark datasets. Various performance metrics, including mean recognition accuracy, precision, F1 score, and recall, are used to assess the effectiveness of our model. The assessment findings show a remarkable recognition rate of around 88.61%, 87.33%, 86.5%, and 81.25% on the BIT-Interaction dataset, UT-Interaction dataset, NTU RGB + D 120 dataset, and PKUMMD dataset, respectively.
KW - body pose
KW - human activity recognition
KW - human motion analysis
KW - key body points
KW - machine learning
KW - object detectors
KW - spatio-temporal features
UR - http://www.scopus.com/inward/record.url?scp=85214472757&partnerID=8YFLogxK
U2 - 10.3390/s25010200
DO - 10.3390/s25010200
M3 - Article
C2 - 39796988
AN - SCOPUS:85214472757
SN - 1424-3210
VL - 25
JO - Sensors
JF - Sensors
IS - 1
M1 - 200
ER -