© The Institution of Engineering and Technology
Aiming to efficiently resolve the problem that the background model in background subtraction is easily contaminated by slow-moving or temporarily stopped vehicles, a Gaussian mixture model with confidence measurement (GMMCM) is proposed for vehicle detection in complex urban traffic scenes. According to the current traffic state, each pixel of the background model is assigned a CM. Whether the background model is updated, and the corresponding adaptive learning rate, depend on whether the current pixel is in its confidence period. Using real-world urban traffic videos, a first set of experiments compares GMMCM with three commonly used models: GMM, self-adaptive GMM (SAGMM) and the local parameter learning algorithm for the GMM (LPLGMM). The results show that GMMCM outperforms GMM, SAGMM and LPLGMM in keeping the background model uncontaminated by slow-moving or temporarily stopped vehicles. A second set of experiments compares GMMCM with the visual background extractor, sigma-delta with CM, SAGMM, LPLGMM and GMM. The average recalls of the six methods are 0.899, 0.753, 0.679, 0.420, 0.447 and 0.205, and the average F-measures are 0.636, 0.612, 0.592, 0.373, 0.330 and 0.179, respectively. All experimental results demonstrate the effectiveness of the proposed GMMCM for vehicle detection in complex urban traffic scenes.
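The confidence-gating idea described above can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not the paper's algorithm: it keeps a single running-average value per pixel rather than a full Gaussian mixture, and the names, thresholds and the fixed confidence rule are assumptions for illustration (GMMCM derives the CM from the estimated traffic state). It only shows the core mechanism: a pixel that has matched the background long enough enters its confidence period, and while it is covered by a mismatching value (e.g. a stopped vehicle) its learning rate is forced to zero, so the trusted background cannot absorb the foreground.

```python
import numpy as np

def update_background(bg, frame, confidence, alpha=0.05,
                      match_thresh=25.0, conf_max=100):
    """One update step of a confidence-gated running-average background.

    All arguments are float arrays of the frame's shape. Returns the
    updated background, the updated confidence map and a foreground mask.
    This is an illustrative single-value-per-pixel simplification of the
    per-pixel confidence idea, not the GMMCM algorithm itself.
    """
    matched = np.abs(frame - bg) < match_thresh   # pixel agrees with model
    # pixels that keep matching the model accumulate confidence, up to conf_max
    confidence = np.where(matched,
                          np.minimum(confidence + 1, conf_max),
                          confidence)
    in_conf_period = confidence >= conf_max
    # adaptive learning rate: a mismatched pixel inside its confidence
    # period (e.g. covered by a slow or stopped vehicle) is frozen, so
    # the trusted background is not polluted by the foreground value
    rate = np.where(~matched & in_conf_period, 0.0, alpha)
    bg = bg + rate * (frame - bg)
    return bg, confidence, ~matched               # ~matched = foreground
```

Without the confidence gate (i.e. with `rate = alpha` everywhere), a vehicle that stops on the road drags the background value toward the vehicle's appearance and the detection gradually disappears; with the gate, the pixel stays flagged as foreground for as long as the confidence period lasts.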