Improved detection method for traffic signs in real scenes applied in intelligent and connected vehicles

Detecting traffic signs is an essential task for intelligent and connected vehicles. In this study, a modified model based on the You Only Look Once (YOLO) method is proposed for detecting different types of Chinese traffic signs, including mandatory, prohibitory, danger warning, guide, and tourist signs. Images of Chinese traffic signs are collected in real scenes and a new dataset is established. The modified model combines the DenseNet method with the YOLOv3 network: dense blocks are used to strengthen feature propagation and promote feature reuse in the low-resolution feature layers of the YOLOv3 network. Experimental results on the test dataset show that the average precision of the modified model, the original YOLOv3 network, and the YOLOv2 network is 95.92%, 94.59%, and 89.39%, respectively. Further comparative analyses give more detailed experimental evaluations of the designed model, including (i) its performance across the five sign categories; (ii) the influence of training set size; and (iii) its behaviour under occlusion and no-object conditions in real scenes. The experimental results show that the modified model is effective for fast and accurate Chinese traffic sign detection in real scenes.
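The core architectural idea above is DenseNet-style dense connectivity: within a dense block, each layer receives the concatenation of all preceding feature maps and contributes a fixed number of new channels (the growth rate), which strengthens feature propagation and reuse. A minimal sketch of the resulting channel bookkeeping, assuming illustrative parameter values not taken from the paper:

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Track how the channel count grows through a DenseNet dense block.

    Each layer sees the concatenation of the block input and all earlier
    layer outputs, then adds `growth_rate` new channels of its own.
    Returns the input width seen by each successive layer, ending with
    the block's final output width.
    """
    widths = [in_channels]
    for _ in range(num_layers):
        # Concatenation along the channel axis: previous width + growth_rate.
        widths.append(widths[-1] + growth_rate)
    return widths

# Hypothetical example: a 4-layer block with growth rate 32 on a
# 64-channel input, as might appear in a low-resolution feature stage.
print(dense_block_channels(64, 32, 4))  # [64, 96, 128, 160, 192]
```

This linear growth is why dense blocks are typically placed only in low-resolution stages, as in the modified network here: concatenation is cheap where spatial dimensions are small.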

Inspec keywords: road traffic; learning (artificial intelligence); neural nets; image recognition; intelligent transportation systems; object detection; natural language processing; traffic engineering computing

Other keywords: mandatory sign; prohibitory sign; feature propagation; connected vehicles; Chinese sign type; YOLOv2 network; Chinese traffic sign detection; intelligent vehicles; YOLOv3 network; DenseNet; guide sign; danger warning; tourist signs; feature layers

Subjects: Image recognition; Natural language interfaces; Computer vision and image processing techniques; Traffic engineering computing; Neural computing techniques

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2019.0475