
Automatic underwater moving object detection using multi-feature integration framework in complex backgrounds

Moving object detection in video sequences is one of the principal tasks for marine scientists in exploration and monitoring applications. Videos acquired underwater are usually degraded by the physical properties of the water medium compared with images acquired in air, and this degradation reduces the performance of feature descriptors. In this study, a new feature descriptor, the multi-frame triplet pattern (MFTP), is proposed for underwater moving object detection. The MFTP encodes the structure of a local region over three sets of frames by computing the local intensity differences between the centre pixel and its nine neighbours. The robustness of the proposed method is further increased by integrating it with colour and motion features. The performance of the proposed framework is evaluated in seven experiments on the Fish4Knowledge database for underwater moving object detection. The results show a significant improvement over state-of-the-art techniques on the chosen evaluation measures.
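
The abstract only outlines the descriptor, so the exact MFTP formulation is not reproduced here. As a rough illustration of the idea, the sketch below assumes an LTP-style ternary quantisation: for each pixel of the current frame, the centre intensity is compared against the nine pixels of the co-located 3x3 window in each of the three frames, and the resulting ternary string is split into two binary codes. The function name mftp_sketch, the threshold tau, and the frame layout are all assumptions for illustration, not the paper's definition.

    import numpy as np

    def mftp_sketch(prev_f, curr_f, next_f, tau=5):
        """Hypothetical multi-frame triplet pattern (NOT the paper's exact MFTP).

        Each of the 27 differences (9 neighbours x 3 frames) against the
        current frame's centre pixel is quantised to {-1, 0, +1} with
        threshold tau and split, LTP-style, into an 'upper' and a 'lower'
        27-bit binary code per pixel.
        """
        h, w = curr_f.shape
        upper = np.zeros((h, w), dtype=np.uint32)
        lower = np.zeros((h, w), dtype=np.uint32)
        frames = [f.astype(np.int32) for f in (prev_f, curr_f, next_f)]
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                c = int(curr_f[y, x])
                bit, up, lo = 0, 0, 0
                for f in frames:
                    for dy, dx in offsets:
                        d = int(f[y + dy, x + dx]) - c  # signed local difference
                        if d > tau:        # ternary +1 -> set bit in upper code
                            up |= 1 << bit
                        elif d < -tau:     # ternary -1 -> set bit in lower code
                            lo |= 1 << bit
                        bit += 1           # ternary 0 leaves both bits clear
                upper[y, x], lower[y, x] = up, lo
        return upper, lower

In a full pipeline, such codes would typically be histogrammed over local patches and compared against a per-pixel background model, with the texture score then fused with colour and motion cues (e.g. colour distance to the background model and frame differencing); this is the kind of multi-feature integration the abstract describes.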

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0013