Counting vehicles in urban traffic scenes using foreground time-spatial images

A foreground time-spatial image (FTSI) is proposed for counting vehicles in complex urban traffic scenes, addressing the shortcomings of traditional counting methods, which are computationally expensive and become unreliable as the complexity of urban traffic scenarios increases. First, a self-adaptive sample consensus background model with a confidence measurement for each pixel is constructed on a virtual detection line in the video frames. The foreground of the virtual detection line is then collected over time to form an FTSI. Occlusion cases are estimated from the convexity of the connected components. Finally, counting the connected components in the FTSI yields the number of vehicles. Experiments on real-world urban traffic videos compare the proposed approach with two other time-spatial image methods: its accuracy rate is above 90%, and it outperforms the state-of-the-art methods.
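The pipeline the abstract describes (foreground detection on a virtual detection line, stacking the line's foreground over time into an FTSI, checking component convexity for occlusions, and counting connected components) can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the self-adaptive sample consensus background model with per-pixel confidence is replaced here by simple differencing against a fixed background line, and `solidity` (component area over convex-hull area) is one common convexity measure, not necessarily the exact one used in the paper.

```python
import numpy as np
from scipy.ndimage import label
from scipy.spatial import ConvexHull

def build_ftsi(frames, line_row, bg_line, threshold=30):
    """Stack the foreground of one virtual detection line over time
    into a foreground time-spatial image (FTSI).

    frames   : iterable of 2-D grayscale frames
    line_row : row index of the virtual detection line
    bg_line  : 1-D background intensities along the line (a stand-in
               for the paper's sample consensus background model)
    Returns a binary array of shape (time, line_width).
    """
    rows = []
    for frame in frames:
        line = frame[line_row, :].astype(np.int32)
        # Simple per-pixel background differencing (simplified model).
        rows.append(np.abs(line - bg_line) > threshold)
    return np.array(rows, dtype=np.uint8)

def count_vehicles(ftsi):
    """Count 8-connected foreground components in the FTSI; each
    component corresponds to one vehicle crossing the line."""
    structure = np.ones((3, 3), dtype=int)  # 8-connectivity
    _, n = label(ftsi, structure=structure)
    return n

def solidity(component_mask):
    """Area of a component divided by the area of its convex hull.
    A markedly concave blob (low solidity) hints that two occluded
    vehicles were merged into one component."""
    pts = np.column_stack(np.nonzero(component_mask)).astype(float)
    hull = ConvexHull(pts)          # for 2-D points, .volume is the hull area
    return len(pts) / hull.volume
```

A convex (roughly rectangular) vehicle blob yields a higher solidity than an L-shaped blob produced by two partially occluded vehicles, which is the cue the convexity test exploits.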

Inspec keywords: natural scenes; video signal processing; intelligent transportation systems; traffic engineering computing; road vehicles

Other keywords: foreground time-spatial images; video frames; urban traffic scenes; connected component convexity; virtual detection line; self-adaptive sample consensus background model; FTSI; vehicle counting methods; confidence measurements; urban traffic videos

Subjects: Traffic engineering computing; Video signal processing; Optical, image and video signal processing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2016.0162