High variation removal for background subtraction in traffic surveillance systems

IET Computer Vision

Background subtraction is a fundamental task in video analytics and smart surveillance applications. Within this field, the Gaussian mixture model (GMM) is the canonical model on which many other methods build. However, the model's indiscriminate learning often leads to erroneous motion detection in high-variation scenes. This article proposes a new method that incorporates entropy estimation and a removal framework into the GMM to improve background subtraction performance. First, entropy information is computed for each pixel of a frame in order to classify frames as silent or high variation. Second, the removal framework determines which frames are used to update the background model. The proposed method produces precise results with fast execution times, two critical factors when surveillance systems must support more advanced tasks. The authors evaluated the method on two publicly available benchmarks, the 2014 Change Detection (CDnet 2014) and Scene Background Modelling data sets, as well as on internally collected sequences of dense traffic scenes.
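The idea of gating background updates by an entropy test can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes a single-Gaussian running-average background for the full GMM, uses the Shannon entropy of the frame's grey-level histogram rather than per-pixel entropy, and the `entropy_jump` threshold and `alpha` learning rate are invented for illustration.

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (bits) of the frame's grey-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

class GatedBackgroundModel:
    """Running-average background model whose update is gated by an
    entropy test -- a simplified stand-in for the removal framework:
    high-variation frames are detected but do not corrupt the model."""

    def __init__(self, first_frame, alpha=0.05, entropy_jump=0.5):
        self.mean = first_frame.astype(np.float64)
        self.alpha = alpha                # learning rate (assumed)
        self.entropy_jump = entropy_jump  # silent/high-variation threshold (assumed)
        self.prev_entropy = frame_entropy(first_frame)

    def apply(self, frame):
        h = frame_entropy(frame)
        silent = abs(h - self.prev_entropy) < self.entropy_jump
        self.prev_entropy = h
        if silent:
            # Only silent frames are allowed to update the background.
            self.mean = (1 - self.alpha) * self.mean + self.alpha * frame
        # Foreground mask: pixels far from the background estimate.
        fg = np.abs(frame.astype(np.float64) - self.mean) > 30
        return fg, silent
```

In a real pipeline the same gating logic could wrap an adaptive GMM (e.g. OpenCV's `BackgroundSubtractorMOG2`, whose `apply` accepts a `learningRate` argument that can be set to 0 to freeze updates on high-variation frames).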

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2018.5033