Predicting driver behaviour at intersections based on driver gaze and traffic light recognition

This work introduces and evaluates a model for predicting driver behaviour, namely turning or proceeding straight, at traffic light intersections from three-dimensional driver gaze data and traffic light recognition. Based on vehicular data, this work relates the traffic light position, the driver's gaze, head movement, and distance from the centre of the traffic light to build a model of driver behaviour. The model can be used to predict the expected driver manoeuvre 3 to 4 s prior to arrival at the intersection. As part of this study, a framework for driving scene understanding based on driver gaze is presented. The outcomes of this study indicate that this deep learning framework for measuring, accumulating and validating different driving actions may be useful in developing models that predict driver intent before intersections, and perhaps in other critical driving situations. Such models are an essential part of advanced driving assistance systems that help drivers execute manoeuvres.
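To illustrate the kind of prediction the abstract describes, the following is a minimal sketch, not the paper's deep learning model: a toy logistic score over three hypothetical features (gaze yaw angle, head yaw angle, and distance to the traffic light). The feature names, weights, and threshold are all assumptions made for illustration; the intuition it encodes is simply that large lateral gaze and head movements close to the intersection suggest an upcoming turn.

```python
import math

def predict_manoeuvre(gaze_yaw_deg, head_yaw_deg, dist_to_light_m,
                      weights=(0.08, 0.05, -0.02), bias=-0.5):
    """Toy logistic score for 'turn' vs 'straight'.

    Features (all hypothetical, for illustration only):
      gaze_yaw_deg    -- driver's horizontal gaze angle off the road axis
      head_yaw_deg    -- driver's head rotation off the road axis
      dist_to_light_m -- distance to the centre of the traffic light
    Weights and bias are made-up constants, not fitted parameters.
    """
    features = (abs(gaze_yaw_deg), abs(head_yaw_deg), dist_to_light_m)
    z = bias + sum(w * f for w, f in zip(weights, features))
    p_turn = 1.0 / (1.0 + math.exp(-z))       # logistic squashing
    return ("turn" if p_turn >= 0.5 else "straight"), p_turn
```

For example, a driver glancing 40 degrees off-axis with a 30 degree head turn 20 m before the light would score as "turn", while a driver looking straight ahead 60 m out would score as "straight". A real system, as the abstract notes, would learn such a mapping with a deep network over accumulated gaze and scene data rather than hand-set weights.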

Inspec keywords: human factors; deep learning (artificial intelligence); road vehicles; gaze tracking; driver information systems; road traffic

Other keywords: driver behaviour prediction; head movement; traffic light position; driving scene understanding; traffic light intersections; driver intent prediction; driver three-dimensional gaze data; traffic light recognition; expected driver manoeuvre; advanced driving assistance systems; deep learning framework

Subjects: Image recognition; Traffic engineering computing; Computer vision and image processing techniques; Social and behavioural sciences computing; Machine learning (artificial intelligence)

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2020.0087