TY - JOUR
T1 - A deep learning-based driver distraction identification framework over edge cloud
AU - Gumaei, Abdu
AU - Al-Rakhami, Mabrook
AU - Hassan, Mohammad Mehedi
AU - Alamri, Atif
AU - Alhussein, Musaed
AU - Razzaque, Md Abdur
AU - Fortino, Giancarlo
N1 - Publisher Copyright:
© 2020, Springer-Verlag London Ltd., part of Springer Nature.
PY - 2020
Y1 - 2020
N2 - The number of traffic accidents has increased globally, and one of the main reasons for this increase is driver distraction on the road. Distracted driving can cause collisions that result in injury, death, or property damage. New techniques can help to mitigate this problem, and one recent approach is to employ body-wearable sensors or camera sensors in the vehicle for real-time monitoring and detection of drivers’ distraction and behaviors, such as cell phone use, talking, eating, drinking, radio tuning, navigation interaction, or even combing hair while driving. However, this type of approach requires not only a powerful training module but also a lightweight module for real-time detection and analysis of the captured data. Data need to be collected from specific wearable or camera sensors in order to detect drivers’ distraction and ensure immediate feedback from the administrator for safe driving. Therefore, in this paper, we propose an effective camera-based framework for real-time identification of drivers’ distraction using a deep learning approach with edge and cloud computing technologies. More specifically, the framework consists of three modules: a distraction detection module deployed on edge devices in the vehicle environment, a training module deployed in the cloud environment, and an analysis module implemented in the monitoring environment (administrator side) connected via a telecommunication network. The proposed framework is developed using two deep learning models. The first is a custom deep convolutional neural network (CDCNN) model, and the second is a fine-tuned model based on the visual geometry group-16 (VGG16) architecture. Several experiments are conducted on a large-scale public driver distraction dataset to evaluate the two models. The experimental results show accuracy rates of 99.64% for the first model and 99.73% for the second using a holdout test set of 10%. With a holdout test set of 30%, the first and second models achieved accuracy rates of 99.36% and 99.57%, respectively. These results confirm the applicability and appropriateness of the adopted deep learning models for designing the proposed driver distraction detection framework.
AB - The number of traffic accidents has increased globally, and one of the main reasons for this increase is driver distraction on the road. Distracted driving can cause collisions that result in injury, death, or property damage. New techniques can help to mitigate this problem, and one recent approach is to employ body-wearable sensors or camera sensors in the vehicle for real-time monitoring and detection of drivers’ distraction and behaviors, such as cell phone use, talking, eating, drinking, radio tuning, navigation interaction, or even combing hair while driving. However, this type of approach requires not only a powerful training module but also a lightweight module for real-time detection and analysis of the captured data. Data need to be collected from specific wearable or camera sensors in order to detect drivers’ distraction and ensure immediate feedback from the administrator for safe driving. Therefore, in this paper, we propose an effective camera-based framework for real-time identification of drivers’ distraction using a deep learning approach with edge and cloud computing technologies. More specifically, the framework consists of three modules: a distraction detection module deployed on edge devices in the vehicle environment, a training module deployed in the cloud environment, and an analysis module implemented in the monitoring environment (administrator side) connected via a telecommunication network. The proposed framework is developed using two deep learning models. The first is a custom deep convolutional neural network (CDCNN) model, and the second is a fine-tuned model based on the visual geometry group-16 (VGG16) architecture. Several experiments are conducted on a large-scale public driver distraction dataset to evaluate the two models. The experimental results show accuracy rates of 99.64% for the first model and 99.73% for the second using a holdout test set of 10%. With a holdout test set of 30%, the first and second models achieved accuracy rates of 99.36% and 99.57%, respectively. These results confirm the applicability and appropriateness of the adopted deep learning models for designing the proposed driver distraction detection framework.
KW - Cloud computing
KW - Convolutional neural networks (CNNs)
KW - Deep learning
KW - Driver distraction detection
KW - Raspberry Pi
KW - VGG16
UR - http://www.scopus.com/inward/record.url?scp=85090953471&partnerID=8YFLogxK
U2 - 10.1007/s00521-020-05328-1
DO - 10.1007/s00521-020-05328-1
M3 - Article
AN - SCOPUS:85090953471
SN - 0941-0643
JO - Neural Computing and Applications
JF - Neural Computing and Applications
ER -