Human-like evaluation method for object motion detection algorithms

This study proposes a new method to evaluate the performance of moving object detection algorithms (MODA) in video sequences. The proposed method is based on human performance metric intervals instead of the ideal metric values (0 or 1) commonly used in the literature. These intervals are intended to enable a more reliable evaluation and comparison, and to identify areas for improvement in the evaluation of MODA. The contributions of the study include the determination of human segmentation performance metric intervals, their comparison with state-of-the-art MODA, and the evaluation of the resulting segmentations in a tracking task to assess the relationship between metric performance and practical utility. Results show that human participants had difficulty achieving a perfect segmentation score. Deep learning algorithms achieved performance above the human average, while other techniques achieved a performance between 88 and 92%. Furthermore, the authors demonstrate that algorithms not ranked at the top of the quantitative metrics worked satisfactorily in a tracking experiment and therefore should not be discarded for real applications.
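The interval-based evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the interval bounds are assumptions, and the (0.88, 0.92) interval is borrowed from the abstract's report that non-deep-learning techniques scored between 88 and 92% (the study's actual human intervals may differ per metric).

```python
def classify_performance(score, human_interval):
    """Compare an algorithm's segmentation metric (in [0, 1]) against a
    human performance interval rather than against the ideal value 1.0."""
    lo, hi = human_interval
    if score > hi:
        return "above human performance"
    if score >= lo:
        return "within human performance"
    return "below human performance"

# Hypothetical human F-measure interval (illustrative only).
HUMAN_F_MEASURE = (0.88, 0.92)

print(classify_performance(0.95, HUMAN_F_MEASURE))  # above human performance
print(classify_performance(0.90, HUMAN_F_MEASURE))  # within human performance
print(classify_performance(0.80, HUMAN_F_MEASURE))  # below human performance
```

Under this scheme an algorithm scoring inside the human interval is judged human-like even though its score is well below the ideal value of 1, which is the key shift the study argues for.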

Inspec keywords: object detection; image segmentation; video signal processing; image motion analysis; learning (artificial intelligence); image sequences

Other keywords: object detection; deep learning algorithms; perfect segmentation score; object motion detection algorithms; quantitative metrics; human performance metric intervals; ideal metric values; human average; human participants; reliable evaluation; video sequences; evaluation method; state-of-the-art MODA; human segmentation performance metric intervals

Subjects: Video signal processing; Computer vision and image processing techniques; Optical, image and video signal processing; Knowledge engineering techniques

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2019.0997