Background segmentation of dynamic scenes based on dual model

Detecting moving objects against the background in video sequences is the first step in many image-processing applications. The background can be divided into two types, static and dynamic, according to whether its pixel values vary over time. Correctly detecting moving foreground objects in dynamic scenes is difficult because the moving foreground resembles the varying background. In this study, a new non-parametric method for background segmentation of dynamic scenes is proposed, in which the background is described by two interrelated models: a self-model, built from the recently observed pixel values at the same position, and a neighbourhood-model, built from the pixel values of the surrounding neighbourhood. The authors' method can accurately detect the dynamic background. To correctly detect as many foreground pixels as possible, the authors also propose an adaptive threshold for the foreground decision based on the background characteristics. The entire detection process runs in real time. Experimental results on a public dataset demonstrate that the proposed method outperforms the state of the art for background segmentation in dynamic scenes.
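The abstract gives no implementation details, so the following is only a minimal sketch of the general dual-model idea, not the authors' algorithm: the class name DualModelBackground, the sample counts, the acceptance radius, the match count and the update policy are all illustrative assumptions, and the adaptive threshold is approximated here by a simple per-pixel sample-spread measure.

```python
import numpy as np

class DualModelBackground:
    """Minimal sketch of a sample-based, non-parametric background model that
    keeps two sample sets per pixel: a 'self' set of recently observed values
    at that pixel and a 'neighbourhood' set of values drawn from nearby pixels.
    Works on single-channel (grayscale) frames given as 2-D numpy arrays.
    All parameters are illustrative assumptions, not the published method."""

    def __init__(self, first_frame, n_samples=20, base_radius=20.0, min_matches=2):
        first_frame = first_frame.astype(np.float32)
        self.n = n_samples
        self.min_matches = min_matches        # samples that must agree for "background"
        self.base_radius = base_radius        # base acceptance radius per sample
        # Self-model: recently observed values at the same position.
        self.self_model = np.repeat(first_frame[None, :, :], n_samples, axis=0)
        # Neighbourhood-model: values borrowed from the 8-neighbourhood.
        self.neigh_model = np.empty_like(self.self_model)
        for k in range(n_samples):
            dy, dx = np.random.randint(-1, 2, size=2)
            self.neigh_model[k] = np.roll(first_frame, (int(dy), int(dx)), axis=(0, 1))

    def segment(self, frame):
        frame = frame.astype(np.float32)
        # Adaptive threshold: widen the acceptance radius where the stored samples
        # are more spread out, i.e. where the background itself is dynamic.
        radius = self.base_radius + self.self_model.std(axis=0)
        matches = ((np.abs(self.self_model - frame) < radius).sum(axis=0)
                   + (np.abs(self.neigh_model - frame) < radius).sum(axis=0))
        foreground = matches < self.min_matches
        # Conservative update: refresh one random sample at pixels judged background.
        k = np.random.randint(self.n)
        bg = ~foreground
        self.self_model[k][bg] = frame[bg]
        self.neigh_model[k][bg] = frame[bg]
        return (foreground * 255).astype(np.uint8)

# Hypothetical usage with grayscale frames loaded elsewhere:
# model = DualModelBackground(frames[0])
# masks = [model.segment(f) for f in frames[1:]]
```

As in other sample-based subtractors such as ViBe, classification counts how many stored samples lie within an acceptance radius of the current value; growing that radius with the per-pixel sample spread is only a stand-in for the background-dependent adaptive threshold described in the abstract.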
