Background modelling using discriminative motion representation

Robustness is an important requirement for background modelling across diverse scenarios. The current pixel-based adaptive segmentation (PBAS) method cannot effectively handle diverse object categories simultaneously. To address this problem, this study proposes a background modelling method based on a discriminative motion representation. Instead of simply using intensity to construct the background model, the proposed method extracts a new local descriptor, a weighted combination of differential excitations at each pixel, to enhance the discriminability of pixels. On the basis of this background model, different categories of objects can be quickly identified by a simple but effective classification rule and accurately represented in the background model by a smart selection of updating strategies. The authors' background modelling method can therefore generate a complete representation of static objects and reduce false detections caused by dynamic background or illumination variations. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods in foreground detection. In addition, the proposed method provides a computationally efficient algorithm for foreground detection tasks.
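The abstract does not give the descriptor's exact formula, but differential excitation is the core component of the WLD local descriptor: the arctangent of the ratio between the summed intensity differences of a pixel's 8-neighbourhood and the centre intensity. The sketch below, with NumPy, is an illustrative per-pixel computation under that standard definition; the function name, the edge-padding choice, and the `eps` guard against division by zero are the author's assumptions, not details from the paper.

```python
import numpy as np

def differential_excitation(img, eps=1e-6):
    """Per-pixel differential excitation in the WLD style:
    xi = arctan( sum_i (x_i - x_c) / x_c ) over the 8-neighbourhood,
    where x_c is the centre pixel and x_i its neighbours."""
    img = img.astype(np.float64)
    # Edge-pad so border pixels also have a full 8-neighbourhood.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    diff_sum = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Shifted view of the padded image = the (dy, dx) neighbour.
            diff_sum += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img
    # arctan compresses the ratio into (-pi/2, pi/2), giving some
    # robustness to illumination scaling.
    return np.arctan(diff_sum / (img + eps))
```

A uniform region yields zero excitation, while a pixel brighter than its surroundings yields a negative value; a pixel-wise background model could store a weighted combination of such responses instead of raw intensities, as the abstract describes.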

Inspec keywords: image motion analysis; image representation

Other keywords: local descriptor; discriminative motion representation; weighted combination; background modelling method; discriminability enhancement; differential excitations; foreground detection tasks; classification rule; illumination variations; updating strategies; computational efficient algorithm; dynamic background; static objects; smart selection

Subjects: Optical, image and video signal processing; Computer vision and image processing techniques

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2016.0426