TY - JOUR
T1 - A Hybrid Deep Learning Model for Human Activity Recognition Using Multimodal Body Sensing Data
AU - Gumaei, Abdu
AU - Hassan, Mohammad Mehedi
AU - Alelaiwi, Abdulhameed
AU - Alsalman, Hussain
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2019
Y1 - 2019
N2 - Human activity recognition from multimodal body sensor data has proven to be an effective approach for the care of elderly or physically impaired people in a smart healthcare environment. However, traditional machine learning techniques mostly focus on a single sensing modality, which is not practical for robust healthcare applications. Therefore, increasing attention has recently been given to the development of robust machine learning techniques that can exploit multimodal body sensor data and support important decision making in smart healthcare. In this paper, we propose an effective multi-sensor-based framework for human activity recognition using a hybrid deep learning model, which combines simple recurrent units (SRUs) with gated recurrent units (GRUs) of neural networks. We use the deep SRUs to process sequences of multimodal input data by exploiting their internal memory states. Moreover, we use the deep GRUs to store and learn how much past information is passed to the future state, addressing accuracy fluctuations and the vanishing-gradient problem. The system has been compared against conventional approaches on a publicly available standard dataset. The experimental results show that the proposed approach outperforms the available state-of-the-art methods.
AB - Human activity recognition from multimodal body sensor data has proven to be an effective approach for the care of elderly or physically impaired people in a smart healthcare environment. However, traditional machine learning techniques mostly focus on a single sensing modality, which is not practical for robust healthcare applications. Therefore, increasing attention has recently been given to the development of robust machine learning techniques that can exploit multimodal body sensor data and support important decision making in smart healthcare. In this paper, we propose an effective multi-sensor-based framework for human activity recognition using a hybrid deep learning model, which combines simple recurrent units (SRUs) with gated recurrent units (GRUs) of neural networks. We use the deep SRUs to process sequences of multimodal input data by exploiting their internal memory states. Moreover, we use the deep GRUs to store and learn how much past information is passed to the future state, addressing accuracy fluctuations and the vanishing-gradient problem. The system has been compared against conventional approaches on a publicly available standard dataset. The experimental results show that the proposed approach outperforms the available state-of-the-art methods.
KW - activity recognition
KW - deep recurrent neural networks (RNNs)
KW - gated recurrent unit (GRU)
KW - multi-modal body sensor data
KW - robust healthcare
KW - simple recurrent unit (SRU)
UR - http://www.scopus.com/inward/record.url?scp=85075362690&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2019.2927134
DO - 10.1109/ACCESS.2019.2927134
M3 - Article
AN - SCOPUS:85075362690
SN - 2169-3536
VL - 7
SP - 99152
EP - 99160
JO - IEEE Access
JF - IEEE Access
M1 - 8786773
ER -