Traffic sign recognition using weighted multi-convolutional neural network

IET Intelligent Transport Systems

Traffic signs play a crucial role in regulating traffic and encouraging cautious driving, and their automatic recognition is one of the key tasks in autonomous driving; accurate classification of traffic signs is therefore essential for vehicle navigation. Here, a reliable and robust convolutional neural network (CNN) is presented for classifying these signs. The proposed classifier is a weighted multi-CNN trained with a novel methodology, and it achieves a near state-of-the-art recognition rate of 99.59% on the German traffic sign recognition benchmark (GTSRB) dataset. Compared with existing classifiers, it is a low-complexity network that recognises a test image in 10 ms on an NVIDIA GTX 980 Ti GPU, demonstrating its suitability and reliability for high-speed driving scenarios.
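The abstract does not give the network details, so the following is only a minimal PyTorch-style sketch of the general weighted multi-CNN idea it names: several independent CNN branches whose class-probability outputs are combined by learned, softmax-normalised weights. The branch architecture, the number of branches, the 43-class GTSRB output size and all hyperparameters are assumptions, not the authors' configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    # One branch: a small convolutional classifier for 32x32 RGB sign crops (sizes assumed).
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class WeightedMultiCNN(nn.Module):
    # Several branch CNNs; their softmax outputs are mixed with learned, normalised weights.
    def __init__(self, num_branches=3, num_classes=43):
        super().__init__()
        self.branches = nn.ModuleList(SmallCNN(num_classes) for _ in range(num_branches))
        self.branch_weights = nn.Parameter(torch.zeros(num_branches))  # learned mixing weights

    def forward(self, x):
        w = F.softmax(self.branch_weights, dim=0)                # non-negative, sum to 1
        probs = [F.softmax(b(x), dim=1) for b in self.branches]  # per-branch class posteriors
        return sum(wi * pi for wi, pi in zip(w, probs))          # weighted ensemble probabilities

# Example: classify a batch of eight 32x32 crops.
# model = WeightedMultiCNN()
# class_probs = model(torch.randn(8, 3, 32, 32))   # shape (8, 43)

In this sketch the mixing weights are trained jointly with the branches by ordinary backpropagation; the "novel training methodology" mentioned in the abstract is not reproduced here.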

DOI: 10.1049/iet-its.2018.5171