High variation removal for background subtraction in traffic surveillance systems

Background subtraction is a fundamental task in video analytics and smart surveillance applications. Within this field, the Gaussian mixture model is the canonical model on which many other methods are built. However, the model's indiscriminate learning often leads to erroneous motion detection in high-variation scenes. This article proposes a new method that incorporates entropy estimation and a removal framework into the Gaussian mixture model to improve background subtraction performance. Firstly, entropy information is computed for each pixel of a frame to classify frames as either silent or high variation. Secondly, the removal framework determines which frames are used to update the background model. The proposed method produces precise results with fast execution times, two critical factors for surveillance systems that support more advanced tasks. The authors evaluate the method on test sequences from two publicly available data sets, the 2014 Change Detection and the Scene Background Modelling data sets, as well as on internally collected data sets of dense-traffic scenes.
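A minimal sketch of this idea follows, assuming OpenCV's MOG2 Gaussian mixture subtractor as the base model, a frame-level Shannon entropy of the inter-frame difference as a proxy for the per-pixel entropy measure, and a hypothetical threshold ENTROPY_THRESH separating silent from high-variation frames; none of these specific choices come from the article itself.

# Illustrative sketch: entropy-gated Gaussian mixture background subtraction.
# The entropy measure and threshold are stand-ins, not the article's exact formulation.
import cv2
import numpy as np

ENTROPY_THRESH = 4.0  # hypothetical threshold (bits) between "silent" and "high variation"

def frame_entropy(diff):
    """Shannon entropy (bits) of the inter-frame difference histogram."""
    hist, _ = np.histogram(diff, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def run(video_path):
    cap = cv2.VideoCapture(video_path)
    mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)  # canonical GMM model
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            fg = mog2.apply(frame)  # bootstrap: let the model learn from the first frame
        else:
            diff = cv2.absdiff(gray, prev_gray)
            high_variation = frame_entropy(diff) > ENTROPY_THRESH
            # Removal step: high-variation frames are segmented against the existing
            # model but do not update it (learningRate=0); -1 = OpenCV's automatic rate.
            lr = 0.0 if high_variation else -1.0
            fg = mog2.apply(frame, learningRate=lr)
        prev_gray = gray
        # ... downstream processing of the foreground mask `fg` ...
    cap.release()

if __name__ == "__main__":
    run("traffic.mp4")  # hypothetical input clip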

Inspec keywords: image classification; image motion analysis; entropy; Gaussian processes; traffic engineering computing; object detection; video surveillance; mixture models

Other keywords: erroneous motion detection; entropy information; fast execution time; background subtraction process; video analytics; entropy estimation; Gaussian mixture model; smart surveillance applications; traffic surveillance systems; frame classification; high variation removal

Subjects: Other topics in statistics; Image recognition; Traffic engineering computing; Video signal processing; Computer vision and image processing techniques
