Real-time detection of distracted driving based on deep learning
- Author(s): Duy Tran (1); Ha Manh Do (1); Weihua Sheng (1); He Bai (2); Girish Chowdhary (3)
- Affiliations:
1: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA
2: School of Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, OK 74078, USA
3: Department of Agricultural and Biological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Source: IET Intelligent Transport Systems, Volume 12, Issue 10, December 2018, pp. 1210–1219
- DOI: 10.1049/iet-its.2018.5172, Print ISSN 1751-956X, Online ISSN 1751-9578
Driver distraction is a leading factor in car crashes. With the goal of reducing traffic accidents and improving transportation safety, this study proposes a driver distraction detection system that identifies various types of distraction through a camera observing the driver. An assisted driving testbed is developed to create realistic driving experiences and to validate the distraction detection algorithms. The authors collected a dataset consisting of images of drivers in both normal and distracted driving postures. Four deep convolutional neural networks, VGG-16, AlexNet, GoogleNet, and residual network, are implemented and evaluated on an embedded graphic processing unit platform. In addition, they developed a conversational warning system that alerts the driver in real time when he/she does not focus on the driving task. Experimental results show that the proposed approach outperforms a baseline model that has only 256 neurons in its fully-connected layers. Furthermore, the results indicate that GoogleNet is the best of the four models for distraction detection in the driving simulator testbed.
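The abstract does not specify how per-frame CNN predictions are turned into real-time alerts, so the sketch below is only a plausible illustration, not the authors' method: a hypothetical `DistractionWarner` that debounces the classifier's frame-by-frame decisions over a sliding window, so the conversational warning fires only when distraction persists rather than on a single misclassified frame. The window size and threshold are assumed parameters.

```python
from collections import deque

class DistractionWarner:
    """Debounce per-frame distraction flags from a CNN classifier.

    Hypothetical sketch: the paper does not describe its alert logic;
    this shows one common design for camera-based monitoring systems.
    """

    def __init__(self, window=5, threshold=0.6):
        self.window = deque(maxlen=window)  # most recent per-frame flags
        self.threshold = threshold          # fraction of frames that must be distracted

    def update(self, is_distracted):
        """Feed one per-frame decision; return True if a warning should fire."""
        self.window.append(bool(is_distracted))
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        return sum(self.window) / len(self.window) >= self.threshold

# Example: a burst of distracted frames triggers the alert; it clears
# once the driver has been attentive for most of the window again.
warner = DistractionWarner(window=5, threshold=0.6)
frames = [0, 1, 1, 1, 1, 1, 0, 0, 0, 0]
alerts = [warner.update(f) for f in frames]
# alerts -> [False, False, False, False, True, True, True, True, False, False]
```

Smoothing like this trades a few frames of latency for far fewer false alarms, which matters for a spoken warning system where spurious alerts would quickly be ignored by the driver.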
Inspec keywords: driver information systems; neural nets; graphics processing units; road safety; road traffic; cameras; learning (artificial intelligence); road accidents; object detection
Other keywords: traffic accident reduction; deep learning; driving task; embedded graphic processing unit platform; VGG-16; transportation safety; distracted driving postures; real-time detection; deep convolutional neural networks; residual network; fully-connected layers; distraction detection algorithms; GoogleNet; AlexNet; car crashes; conversational warning system; realistic driving experiences; camera; driving simulator testbed; assisted driving testbed; driver distraction detection system
Subjects: Image sensors; Neural computing techniques; Optical, image and video signal processing; Computer vision and image processing techniques; Traffic engineering computing
References
1. US Department of Transportation – National Highway Traffic Safety Administration: 'Traffic safety facts'. Available at https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812318.
2. US Department of Transportation – National Highway Traffic Safety Administration: 'Distracted driving'. Available at https://www.nhtsa.gov/riskydriving/distracted-driving.
3. Esurance: '3 types of distracted driving', 2016. Available at https://www.esurance.com/info/car/3-types-of-distracted-driving, accessed October 2017.
4. Just, M.A., Keller, T.A., Cynkar, J.: 'A decrease in brain activation associated with driving when listening to someone speak', Brain Res., 2008, 1205, pp. 70–80.
5. Ameen, L.: 'The 25 scariest texting and driving accident statistics'. Available at http://www.icebike.org/texting-and-driving/.
6. Shiwu, L., Linhong, W., Zhifa, Y., et al.: 'An active driver fatigue identification technique using multiple physiological features'. Int. Conf. on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, China, 2011, pp. 733–737.
7. Lal, S.K., Craig, A.: 'Driver fatigue: electroencephalography and psychological assessment', Psychophysiology, 2002, 39, (3), pp. 313–321.
8. Jin, L., Niu, Q., Hou, H., et al.: 'Driver cognitive distraction detection using driving performance measures', Discret. Dyn. Nat. Soc., 2012, 2012. Available at https://www.hindawi.com/journals/ddns/2012/432634/cta/.
9. Ranney, T.A.: 'Driver distraction: a review of the current state-of-knowledge' (US Department of Transportation – National Highway Traffic Safety Administration, Washington, DC, 2008).
10. Tabrizi, P.R., Zoroofi, R.A.: 'Drowsiness detection based on brightness and numeral features of eye image'. Fifth Int. Conf. on Intelligent Information Hiding and Multimedia Signal Processing, Kyoto, Japan, 2009, pp. 1310–1313.
11. Farber, E., Foley, J., Scott, S.: 'Visual attention design limits for ITS in-vehicle systems: the Society of Automotive Engineers standard for limiting visual distraction while driving'. Transportation Research Board Annual General Meeting, Washington, DC, USA, 2000, pp. 2–3.
12. Victor, T., Blomberg, O., Zelinsky, A.: 'Automating the measurement of driver visual behaviours using passive stereo vision'. Proc. Int. Conf. on Series Vision Vehicles (VIV9), Brisbane, Queensland, Australia, 2001.
13. Kutila, M., Jokela, M., Markkula, G., et al.: 'Driver distraction detection with a camera vision system'. IEEE Int. Conf. on Image Processing, San Antonio, TX, USA, 2007, vol. 6, pp. VI-201–VI-204.
14. Fletcher, L., Zelinsky, A.: 'Driver state monitoring to mitigate distraction'. Proc. Int. Conf. on the Distractions in Driving, Sydney, Australia, 2007, pp. 487–523.
15. Azman, A., Meng, Q., Edirisinghe, E.: 'Non-intrusive physiological measurement for driver cognitive distraction detection: eye and mouth movements'. Third Int. Conf. on Advanced Computer Theory and Engineering (ICACTE), Chengdu, China, 2010, vol. 3, pp. V3-595–V3-599.
16. Park, S., Trivedi, M.: 'Driver activity analysis for intelligent vehicles: issues and development framework'. IEEE Intelligent Vehicles Symp., Las Vegas, NV, USA, 2005, pp. 644–649.
17. Pohl, J., Birk, W., Westervall, L.: 'A driver-distraction-based lane-keeping assistance system', Proc. Inst. Mech. Eng. I, J. Syst. Control Eng., 2007, 221, (4), pp. 541–552.
18. Kircher, K., Ahlstrom, C., Kircher, A.: 'Comparison of two eye-gaze based real-time driver distraction detection algorithms in a small-scale field operational test'. Proc. Fifth Int. Symp. on Human Factors in Driver Assessment, Training and Vehicle Design, Big Sky, MT, USA, 2009, pp. 16–23.
19. Murphy-Chutorian, E., Doshi, A., Trivedi, M.M.: 'Head pose estimation for driver assistance systems: a robust algorithm and experimental evaluation'. IEEE Intelligent Transportation Systems Conf., Seattle, WA, USA, 2007, pp. 709–714.
20. Bergasa, L.M., Nuevo, J., Sotelo, M.A., et al.: 'Real-time system for monitoring driver vigilance', IEEE Trans. Intell. Transp. Syst., 2006, 7, (1), pp. 63–77.
21. Ji, Q., Lan, P., Looney, C.: 'A probabilistic framework for modeling and real-time monitoring human fatigue', IEEE Trans. Syst. Man Cybern. A, Syst. Humans, 2006, 36, (5), pp. 862–875.
22. Craye, C., Karray, F.: 'Multi-distributions particle filter for eye tracking inside a vehicle', Image Anal. Recognit., 2013, 6, pp. 407–416.
23. Ji, Q., Zhu, Z., Lan, P.: 'Real-time nonintrusive monitoring and prediction of driver fatigue', IEEE Trans. Veh. Technol., 2004, 53, (4), pp. 1052–1068.
24. Craye, C., Karray, F.: 'Driver distraction detection and recognition using RGB-D sensor', CoRR, 2015, abs/1502.00250. Available at http://arxiv.org/abs/1502.00250.
25. Liang, Y., Reyes, M.L., Lee, J.D.: 'Real-time detection of driver cognitive distraction using support vector machines', IEEE Trans. Intell. Transp. Syst., 2007, 8, (2), pp. 340–350.
26. Gu, H., Ji, Q.: 'Facial event classification with task oriented dynamic Bayesian network'. Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA, 2004, vol. 2, pp. II-870–II-875.
27. Eskandarian, A., Sayed, R.: 'Driving simulator experiment: detecting driver fatigue by monitoring eye and steering activity'. Proc. Annual Intelligent Vehicles Systems Symp., Traverse City, MI, USA, 2003.
28. Eskandarian, A., Sayed, R.: 'Analysis of driver impairment, fatigue, and drowsiness and an unobtrusive vehicle-based detection scheme'. Proc. 1st Int. Conf. on Traffic Accidents, Tehran, Iran, 2005, pp. 35–49.
29. State Farm Corporate: 'State Farm distracted driver detection'. Available at https://www.kaggle.com/c/state-farm-distracted-driver-detection.
30. Colbran, S., Cen, K., Luo, D.: 'Classification of driver distraction' (Stanford University, Stanford, CA, 2016). Available at http://cs229.stanford.edu/proj2016/report/SamCenLuo-ClassificationOfDriverDistraction-report.pdf.
31. Okon, O.D., Meng, L.: 'Detecting distracted driving with deep learning'. Interactive Collaborative Robotics, Hatfield, UK, 2017, pp. 170–179.
32. Abouelnaga, Y., Eraqi, H.M., Moustafa, M.N.: 'Real-time distracted driver posture classification', arXiv preprint arXiv:1706.09498, 2017.
33. Lőrincz, A., Csákvári, M., Fóthi, Á., et al.: 'Cognitive deep machine can train itself', arXiv preprint arXiv:1612.00745, 2016.
34. Choi, I.H., Hong, S.K., Kim, Y.G.: 'Real-time categorization of driver's gaze zone using the deep learning techniques'. Int. Conf. on Big Data and Smart Computing (BigComp), Jeongseon, Republic of Korea, 2016, pp. 143–148.
35. Venturelli, M., Borghi, G., Vezzani, R., et al.: 'Deep head pose estimation from depth data for in-car automotive applications', arXiv preprint arXiv:1703.01883, 2017.
36. Hssayeni, M.D., Saxena, S., Ptucha, R., et al.: 'Distracted driver detection: deep learning vs. handcrafted features', Electron. Imaging, 2017, 7, (10), p. 20.
37. Simonyan, K., Zisserman, A.: 'Very deep convolutional networks for large-scale image recognition', arXiv preprint arXiv:1409.1556, 2014.
38. Krizhevsky, A., Sutskever, I., Hinton, G.E.: 'ImageNet classification with deep convolutional neural networks'. Adv. Neural Inf. Process. Syst., 2012, 1, pp. 1097–1105.
39. Szegedy, C., Liu, W., Jia, Y., et al.: 'Going deeper with convolutions'. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Boston, MA, USA, 2015, pp. 1–9.
40. He, K., Zhang, X., Ren, S., et al.: 'Deep residual learning for image recognition'. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
41. Shin, H.C., Roth, H.R., Gao, M., et al.: 'Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning', IEEE Trans. Med. Imaging, 2016, 35, (5), pp. 1285–1298.
42. Carnetsoft Inc.: 'Research driving simulator'. Available at http://www.carnetsoft.com/research-simulator.html.
43. NVIDIA: 'Embedded systems'. Available at http://www.nvidia.com/object/embedded-systems-dev-kits-modules.html.
44. FriendlyARM: 'NanoPi M3'. Available at http://nanopi.io/nanopi-m3.html.
45. Srivastava, N., Hinton, G.E., Krizhevsky, A., et al.: 'Dropout: a simple way to prevent neural networks from overfitting', J. Mach. Learn. Res., 2014, 15, (1), pp. 1929–1958.
46. Bottou, L.: 'Large-scale machine learning with stochastic gradient descent'. Proc. COMPSTAT, 2010, pp. 177–186.