Hybrid strategy for traffic light detection by combining classical and self-learning detectors

Detection of traffic lights is a key function of automated driving systems in urban traffic. Considering the complementary characteristics of classical and self-learning algorithms, a fusion logic is proposed that compensates for the shortcomings of learning algorithms by combining known prior knowledge with learned features to detect red and yellow–green traffic lights without turn indicators. The relationship between the detection performances of the different detectors is established analytically. The improvement in detection performance achieved by fusion is then analysed theoretically and optimised numerically. Guided by these analysis results, the hybrid detector is designed to use colour information in the hue-saturation-intensity space to extract candidate regions, histogram-of-oriented-gradients (HOG) features classified by a support vector machine to identify the shape of the traffic light, and a comparatively simple convolutional neural network (CNN) with the classical AlexNet structure as the self-learned detector. The effectiveness of the hybrid method is validated by comparative tests against single CNN detectors and other fusion methods on the training dataset, and its extensibility to new application conditions is evaluated by vehicle tests.
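The pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the hue, saturation, and intensity thresholds and the confidence cut-offs below are hypothetical placeholders, and the OR-style fusion rule (accept a candidate when either the classical colour/shape detector or the CNN is confident) is only one simple instance of the kind of fusion logic the paper analyses.

```python
# Illustrative sketch, NOT the authors' code. Shows (1) a classical
# hue-saturation-intensity colour gate for red candidate regions and
# (2) a simple OR-fusion rule over two detector confidences.
# All thresholds here are hypothetical examples.
import colorsys


def is_red_candidate(r, g, b, sat_min=0.5, val_min=0.4):
    """Classical colour cue: hue near 0/360 degrees with sufficient
    saturation and intensity suggests a lit red lamp."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    return (hue_deg <= 20.0 or hue_deg >= 340.0) and s >= sat_min and v >= val_min


def fuse(classical_score, cnn_score, t_classical=0.6, t_cnn=0.6):
    """OR-fusion: keep a candidate if either the classical detector
    (colour + HOG/SVM) or the self-learned CNN detector is confident."""
    return classical_score >= t_classical or cnn_score >= t_cnn


# A bright red pixel passes the classical colour gate; a grey one does not.
print(is_red_candidate(220, 30, 30))    # True
print(is_red_candidate(120, 120, 120))  # False
# OR-fusion recovers a detection the CNN missed but the classical detector found.
print(fuse(0.8, 0.2))                   # True
```

The OR-rule raises recall at a possible cost in precision, which is exactly the trade-off the paper's theoretical analysis and numerical optimisation address when choosing the fusion logic and thresholds.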

Inspec keywords: pattern classification; convolutional neural nets; image fusion; unsupervised learning; road vehicles; traffic engineering computing; support vector machines; object detection; feature extraction; image colour analysis

Other keywords: yellow–green traffic light; self-learning algorithms; self-learned detector; single CNN detectors; red traffic light; detection performance; urban traffic; automatic driving system; fusion logic; classical AlexNet structure; hybrid detector; fusion methods; traffic light detection; hybrid strategy; self-learning detectors; learning features; convolutional neural network

Subjects: Traffic engineering computing; Optical, image and video signal processing; Data handling techniques; Neural computing techniques; Computer vision and image processing techniques; Sensor fusion

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2019.0782