CVABS: moving object segmentation with common vector approach for videos

Background modelling is a fundamental step in many real-time computer vision applications, such as security and monitoring systems. An accurate background model helps to detect the activity of moving objects in a video. In this work, the authors develop a new subspace-based background-modelling algorithm built on the common vector approach (CVA) with Gram–Schmidt orthogonalisation. Once the background model, which captures the common characteristics of different views of the same scene, is acquired, a foreground detection and background-updating procedure driven by dynamic control parameters is applied. Experiments are conducted on a variety of problem types, including dynamic backgrounds. Several metrics are used as objective measures, and the visual results are also assessed subjectively. The proposed method performs successfully on all problem categories of the CDNet2014 dataset by updating the background frames through a self-learning feedback mechanism.
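
The central quantity of the approach, the common vector of a set of frames, can be sketched compactly. The following is a minimal illustration, assuming vectorised grayscale frames of the same scene stacked row-wise; the function name and numerical tolerance are illustrative, and the sketch covers only the common-vector computation via Gram–Schmidt orthogonalisation, not the full CVABS pipeline with its foreground detection and self-learning background update.

    import numpy as np

    def common_vector(frames):
        """Common vector of a set of vectorised frames (CVA via Gram-Schmidt).

        frames: (n, d) array, each row a vectorised frame of the same scene.
        The common vector is any frame minus its projection onto the
        difference subspace spanned by b_i = x_i - x_0.
        """
        x0 = frames[0].astype(float)
        basis = []                            # orthonormal basis of the difference subspace
        for x in frames[1:]:
            b = x.astype(float) - x0          # difference vector
            for q in basis:                   # Gram-Schmidt: strip components along current basis
                b -= np.dot(b, q) * q
            norm = np.linalg.norm(b)
            if norm > 1e-10:                  # skip (near-)linearly dependent differences
                basis.append(b / norm)
        common = x0.copy()
        for q in basis:                       # remove projection onto the difference subspace
            common -= np.dot(common, q) * q
        return common

    # Usage sketch (hypothetical names and shapes):
    # frames = np.stack([f.ravel() for f in frame_list])   # (n, H*W)
    # background = common_vector(frames).reshape(H, W)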
