Traffic sign recognition by combining global and local features based on semi-supervised classification

The legibility of traffic signs has been considered from the beginning of their design, and traffic signs are easy for humans to identify. For computer systems, however, identifying traffic signs still poses a challenging problem. Image-processing and machine-learning algorithms continue to improve with the aim of solving this problem. However, with a dramatic increase in the number of traffic signs, labelling a large amount of training data is costly. Therefore, building an efficient, high-quality traffic sign recognition (TSR) model from a small number of labelled traffic sign samples in an Internet-of-things-based (IoT-based) transport system has become an urgent research goal. Here, the authors propose a novel semi-supervised learning approach combining global and local features for TSR in an IoT-based transport system. In their approach, histograms of oriented gradients (HOG), colour histograms (CH), and edge features (EF) are used to build different feature spaces. Meanwhile, a fusion feature space is learned on the unlabelled samples to alleviate the differences between these feature spaces. Extensive evaluations on a collection of signs from the German Traffic Sign Recognition Benchmark (GTSRB) dataset show that the proposed approach outperforms competing methods and offers a potential solution for practical applications.
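The three feature spaces named in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it uses simplified, NumPy-only stand-ins (a single-cell gradient-orientation histogram for HOG, per-channel histograms for CH, and a gradient-magnitude edge density for EF) and then concatenates them into one fused vector, as the fusion step would require. All function names and parameters are illustrative assumptions.

```python
import numpy as np


def colour_histogram(img, bins=8):
    """CH stand-in: per-channel intensity histogram, each normalised to sum to 1."""
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[:, :, c], bins=bins, range=(0.0, 1.0))
        feats.append(h / max(h.sum(), 1))
    return np.concatenate(feats)


def gradient_orientation_histogram(gray, bins=9):
    """HOG stand-in: a single-cell histogram of unsigned gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold orientations into [0, pi)
    h, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return h / max(h.sum(), 1e-12)


def edge_density(gray, thresh=0.1):
    """EF stand-in: fraction of pixels whose gradient magnitude
    exceeds a threshold (a crude proxy for a Canny-style edge map)."""
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.array([np.mean(mag > thresh)])


def fused_features(img):
    """Concatenate the three feature spaces into one fused vector."""
    gray = img.mean(axis=2)
    return np.concatenate([
        gradient_orientation_histogram(gray),
        colour_histogram(img),
        edge_density(gray),
    ])


rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))  # stand-in for a 32x32 RGB sign crop
vec = fused_features(img)
print(vec.shape)  # (9 + 3*8 + 1,) = (34,)
```

In practice one would use library implementations (e.g. a blocked, normalised HOG and a true Canny detector) and, per the paper, learn the fusion on unlabelled samples rather than simply concatenating; the sketch only shows how the three descriptors occupy separate feature spaces before fusion.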

Inspec keywords: image classification; traffic engineering computing; feature extraction; image colour analysis; learning (artificial intelligence); edge detection; image fusion; object recognition

Other keywords: local features; labelled traffic sign data; IOT-based transport system; semisupervised classification; Internet-of-things–based transport system; global features; fusion feature space; edge features; high-quality traffic sign recognition model; colour histograms; histograms of oriented gradient; German Traffic Sign Recognition Benchmark dataset

Subjects: Traffic engineering computing; Computer vision and image processing techniques; Sensor fusion; Knowledge engineering techniques; Image recognition

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2019.0409