Vehicle detection in complex urban traffic scenes using Gaussian mixture model with confidence measurement

To address the problem that background-subtraction models are easily contaminated by slow-moving or temporarily stopped vehicles, a Gaussian mixture model with confidence measurement (GMMCM) is proposed for vehicle detection in complex urban traffic scenes. According to the current traffic state, each pixel of the background model is assigned a confidence measurement; whether the background model is updated, and the corresponding adaptive learning rate, depend on whether the current pixel is within its confidence period. Using real-world urban traffic videos, a first set of experiments compares GMMCM with three commonly used models: GMM, self-adaptive GMM (SAGMM) and the local parameter learning algorithm for the GMM (LPLGMM). The results show that GMMCM outperforms GMM, SAGMM and LPLGMM in keeping the background model uncontaminated by slow-moving or temporarily stopped vehicles. A second set of experiments compares GMMCM with the visual background extractor, sigma-delta with CM, SAGMM, LPLGMM and GMM. The average recalls of the six methods are 0.899, 0.753, 0.679, 0.420, 0.447 and 0.205, and the average F-measures are 0.636, 0.612, 0.592, 0.373, 0.330 and 0.179, respectively. All experimental results demonstrate the effectiveness of the proposed GMMCM for vehicle detection in complex urban traffic scenes.
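The confidence-gated update described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it simplifies the per-pixel Gaussian mixture to a single Gaussian per pixel, and the names `conf_period`, `alpha_base` and `match_thresh`, as well as the specific learning-rate schedule, are assumptions chosen for illustration.

```python
import numpy as np

def update_background(bg_mean, bg_var, conf, frame,
                      alpha_base=0.05, match_thresh=2.5, conf_period=30):
    """Sketch of a confidence-gated background update (single Gaussian per
    pixel as a simplification of the paper's per-pixel mixture model).

    `conf` counts consecutive frames in which a pixel matched the background;
    while a pixel is inside its confidence period the learning rate is kept
    low, so slow-moving or temporarily stopped vehicles are not absorbed
    into the background model.
    """
    diff = frame - bg_mean
    # A pixel matches the background if it lies within match_thresh
    # standard deviations of the background mean.
    matched = diff ** 2 <= (match_thresh ** 2) * bg_var

    # Matching pixels accumulate confidence; a mismatch resets it.
    conf = np.where(matched, conf + 1, 0)

    # Inside the confidence period, adapt slowly (assumed schedule);
    # confident background pixels adapt at the full rate.
    in_period = conf < conf_period
    alpha = np.where(in_period, alpha_base / conf_period, alpha_base)

    # Only matched pixels update the model, so vehicles (mismatched
    # pixels) never leak into the background.
    a = np.where(matched, alpha, 0.0)
    bg_mean = (1 - a) * bg_mean + a * frame
    bg_var = (1 - a) * bg_var + a * diff ** 2
    return bg_mean, bg_var, conf, ~matched  # ~matched is the vehicle mask
```

In use, the function is called once per frame, threading `bg_mean`, `bg_var` and `conf` through the sequence; the returned mask marks candidate vehicle pixels for the detection stage.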

Inspec keywords: mixture models; Gaussian processes; traffic engineering computing; image processing; road traffic; road vehicles

Other keywords: complex urban traffic scenes; subtraction background model; confidence measurement; vehicle detection; GMMCM; visual background extractor; Gaussian mixture model with confidence measurement

Subjects: Traffic engineering computing; Optical, image and video signal processing; Computer vision and image processing techniques; Other topics in statistics

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2015.0141