Dynamic background subtraction method based on spatio-temporal classification

Dynamic backgrounds severely degrade background subtraction and are difficult to eliminate. This study proposes a dynamic background subtraction method based on spatio-temporal classification, which comprises two key steps: temporal classification and spatial classification. In the temporal step, a closest-pixel sampling algorithm samples background pixels in groups, avoiding centralised sampling and complicated mathematical modelling. Within the background model obtained by group sampling, the pixels similar to the detected pixel are classified into the same category; the number of pixels in this category determines the label (foreground or background) of the detected pixel, yielding a coarse foreground mask. In the spatial step, exploiting the correlation between dynamic background pixels and their neighbours, a square window is placed around each foreground pixel in the coarse mask and all pixels inside the window are classified; the labels of these pixels then refine the coarse mask into a more accurate foreground mask. Experiments on public datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
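To make the two classification steps concrete, the following is a minimal Python/NumPy sketch of the pipeline the abstract describes. It is written under stated assumptions: the per-pixel sample-set background model, the thresholds SIM_THRESH and MIN_MATCHES, the 5x5 window, and the random-refresh update are illustrative choices, since the abstract does not specify the parameters or the details of the paper's closest-pixel group sampling.

```python
import numpy as np

# Illustrative parameters; the abstract does not give the paper's
# exact values, so these are assumptions chosen for the sketch.
N_SAMPLES = 20    # background samples stored per pixel
SIM_THRESH = 20   # intensity distance under which two pixels "match"
MIN_MATCHES = 2   # same-category count needed to call a pixel background
WIN_RADIUS = 2    # half-width of the square window (5x5 here)

def temporal_classification(frame, model):
    """Coarse mask: count the model samples similar to the detected
    pixel (its 'category'); too few matches means foreground."""
    # model: (N_SAMPLES, H, W) array of past background samples.
    dist = np.abs(model.astype(np.int32) - frame.astype(np.int32))
    matches = (dist < SIM_THRESH).sum(axis=0)
    return (matches < MIN_MATCHES).astype(np.uint8)   # 1 = foreground

def spatial_classification(frame, model, coarse):
    """Refine the coarse mask: compare each coarse-foreground pixel
    with the background samples of its square neighbourhood, so
    dynamic background (e.g. waving leaves) displaced by a few pixels
    is reabsorbed into the background."""
    refined = coarse.copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(coarse)):
        y0, y1 = max(0, y - WIN_RADIUS), min(h, y + WIN_RADIUS + 1)
        x0, x1 = max(0, x - WIN_RADIUS), min(w, x + WIN_RADIUS + 1)
        window = model[:, y0:y1, x0:x1].astype(np.int32)
        if (np.abs(window - int(frame[y, x])) < SIM_THRESH).sum() >= MIN_MATCHES:
            refined[y, x] = 0   # matches nearby background: relabel
    return refined

def update_model(frame, model, mask, rate=16):
    """Conservatively refresh samples at background pixels. This
    random per-pixel refresh is a stand-in for the paper's
    closest-pixel group sampling, whose details the abstract omits."""
    h, w = frame.shape
    refresh = (np.random.randint(rate, size=(h, w)) == 0) & (mask == 0)
    model[np.random.randint(N_SAMPLES)][refresh] = frame[refresh]
    return model
```

Initialising model by stacking N_SAMPLES copies of the first greyscale frame and running the three functions on each subsequent frame yields the refined mask; a faithful implementation would replace the random refresh with the paper's group sampling scheme.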

DOI: 10.1049/iet-cvi.2017.0339