
Dense optical flow based background subtraction technique for object segmentation in moving camera environment

Segmentation of moving objects in video with a moving background is a challenging problem, and it becomes more difficult under varying illumination. The authors propose a dense optical flow-based background subtraction technique for object segmentation. The proposed technique is fast and reliable for segmentation of moving objects in realistic unconstrained videos. In the proposed work, camera motion is first stabilised by computing a homography matrix, and statistical background modelling is then performed using a single-Gaussian background modelling approach. Moving pixels are identified using dense optical flow in the background-modelled scene: dense optical flow provides motion information for every pixel between consecutive frames, so a motion flow vector is computed for each pixel. To distinguish foreground from background, each pixel is labelled by thresholding the magnitude of its motion flow vector, and pixels whose magnitude exceeds the threshold are identified as moving. The effectiveness of the proposed algorithm has been evaluated both qualitatively and quantitatively on several realistic videos covering different complex conditions. To assess the performance of the proposed work, the authors compared their algorithm with other state-of-the-art methods and found that the proposed method outperforms them.
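The described pipeline (homography-based stabilisation, single-Gaussian background modelling, dense optical flow, and magnitude thresholding) can be illustrated with a minimal sketch, not the authors' reference implementation. The sketch below assumes Python with OpenCV and NumPy, uses Farnebäck dense flow for the per-pixel motion vectors, and its parameter values (FLOW_THRESH, ALPHA, K_SIGMA) and the input path are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

FLOW_THRESH = 2.0   # assumed flow-magnitude threshold (pixels/frame), not from the paper
ALPHA = 0.05        # assumed learning rate for the single-Gaussian background model
K_SIGMA = 2.5       # assumed deviation multiplier for the background test

def stabilise(prev_gray, gray):
    """Estimate camera motion between consecutive frames as a homography
    (sparse features + RANSAC) and warp the previous frame into the
    current frame's coordinates."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
    h, w = gray.shape
    return cv2.warpPerspective(prev_gray, H, (w, h))

def segment(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Per-pixel running mean and variance for the single-Gaussian model.
    mean = prev_gray.astype(np.float32)
    var = np.full_like(mean, 15.0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # 1) Compensate camera motion before any temporal comparison.
        prev_aligned = stabilise(prev_gray, gray)

        # 2) Update the single-Gaussian background model with pixels that
        #    agree with it; deviating pixels fail the background test.
        diff = gray.astype(np.float32) - mean
        bg_mask = np.abs(diff) <= K_SIGMA * np.sqrt(var)
        mean = np.where(bg_mask, (1 - ALPHA) * mean + ALPHA * gray, mean)
        var = np.where(bg_mask, (1 - ALPHA) * var + ALPHA * diff ** 2, var)

        # 3) Dense optical flow (Farneback) between the aligned previous
        #    frame and the current frame: one motion vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev_aligned, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

        # 4) Label a pixel as moving foreground when its flow magnitude
        #    exceeds the threshold and it disagrees with the background model.
        fg = ((mag > FLOW_THRESH) & ~bg_mask).astype(np.uint8) * 255

        yield frame, fg
        prev_gray = gray

if __name__ == "__main__":
    # "input.mp4" is a placeholder path for illustration.
    for frame, mask in segment("input.mp4"):
        cv2.imshow("foreground", mask)
        if cv2.waitKey(1) == 27:   # Esc to quit
            break
    cv2.destroyAllWindows()
```

Combining the flow-magnitude test with the background-model test is one reasonable way to suppress residual registration error left after warping; the paper's exact fusion of the two cues may differ.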
