
Efficient convNets for fast traffic sign recognition

While deep convolutional networks achieve impressive accuracy on computer vision tasks, they are also well known for their high computation cost and memory demand, which makes them difficult to deploy on devices with limited resources. It is therefore worthwhile to investigate small, lightweight, yet accurate deep convolutional neural networks (ConvNets) better suited to resource-limited electronic devices. This study presents qNet and sqNet, two small and efficient ConvNets for fast traffic sign recognition, built on a uniform macro-architecture and depth-wise separable convolutions. qNet is designed with fewer parameters while delivering even better accuracy: it has only 0.29M parameters (0.6× the size of one of the smallest existing models) yet achieves 99.4% accuracy on the German Traffic Sign Recognition Benchmark (GTSRB). sqNet has only 0.045M parameters (almost 0.1× the size of one of the smallest models) and 7.01M multiply-add operations (reducing computation to 30% of one of the smallest models) while maintaining 99% accuracy on the benchmark. Experimental results on the GTSRB demonstrate that the authors' networks use parameters and computations more efficiently.
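The parameter savings claimed above come largely from replacing standard convolutions with depth-wise separable ones. A standard k×k convolution with C_in input and C_out output channels needs k·k·C_in·C_out weights, whereas the separable variant factorises this into a per-channel k×k depth-wise filter plus a 1×1 point-wise convolution, needing k·k·C_in + C_in·C_out weights. The sketch below illustrates the arithmetic with a hypothetical layer shape (not one taken from the paper):

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    # Standard k x k convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def sep_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Depth-wise separable convolution: one k x k filter per input channel,
    # followed by a 1 x 1 point-wise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer shape for illustration only.
standard = conv_params(3, 64, 128)       # 73728 weights
separable = sep_conv_params(3, 64, 128)  # 8768 weights
print(standard, separable, round(separable / standard, 3))  # 73728 8768 0.119
```

The ratio works out to roughly 1/C_out + 1/k², so for 3×3 kernels the separable form uses a little over a ninth of the weights, which is consistent with the order-of-magnitude reductions reported in the abstract.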
