Robust tracking of multiple objects in video by adaptive fusion of subband particle filters

Tracking of moving objects in video sequences is an important research problem because of its many industrial, biomedical, and security applications. Significant progress has been made on this topic in the last few decades. However, accurately tracking objects in video sequences with challenging conditions and unexpected events, e.g. background motion and shadows, objects of different sizes and contrasts, sudden changes in illumination, partial object camouflage, and low signal-to-noise ratio, remains an open research problem. To address such difficulties, the authors developed a robust multiscale visual tracker that represents each captured video frame as different subbands in the wavelet domain. It then applies N independent particle filters to a small subset of these subbands, where the chosen subset of wavelet subbands changes with each captured frame. Finally, it fuses the outputs of these N independent particle filters to obtain the final position tracks of multiple moving objects in the video sequence. To demonstrate the robustness of their multiscale visual tracker, the authors applied it to four example videos that exhibit different challenges. Compared to a standard full-resolution particle-filter-based tracker and a single wavelet subband (LL)²-based tracker, their multiscale tracker demonstrates significantly better tracking performance.
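The abstract describes a decompose-track-fuse pipeline: wavelet decomposition of each frame, N independent particle filters on a per-frame subset of subbands, and fusion of their outputs. The following is a minimal, self-contained Python sketch of that general idea, not the authors' implementation: a single-level Haar split stands in for the wavelet decomposition, a bootstrap particle filter with a random-walk motion model and an intensity-based likelihood stands in for each subband tracker, and a confidence-weighted average stands in for the adaptive fusion step. All function names, parameters, and the synthetic single-object video are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of subband particle-filter
# fusion: split each frame into wavelet subbands, run independent bootstrap
# particle filters on a per-frame subset of subbands, and fuse their estimates.
import numpy as np

rng = np.random.default_rng(0)

def haar_subbands(frame):
    """Single-level 2-D Haar split into LL, LH, HL, HH (each half resolution)."""
    a = frame[0::2, 0::2]; b = frame[0::2, 1::2]
    c = frame[1::2, 0::2]; d = frame[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return [ll, lh, hl, hh]

class ParticleFilter:
    """Bootstrap particle filter over (x, y); likelihood = local subband energy."""
    def __init__(self, shape, n_particles=300, motion_std=2.0):
        h, w = shape
        self.p = np.column_stack([rng.uniform(0, w, n_particles),
                                  rng.uniform(0, h, n_particles)])
        self.motion_std = motion_std

    def step(self, subband):
        h, w = subband.shape
        # Predict: random-walk motion model, clipped to the subband support.
        self.p += rng.normal(0.0, self.motion_std, self.p.shape)
        self.p[:, 0] = np.clip(self.p[:, 0], 0, w - 1)
        self.p[:, 1] = np.clip(self.p[:, 1], 0, h - 1)
        # Update: weight particles by the absolute subband response at their location.
        ix = self.p[:, 0].astype(int); iy = self.p[:, 1].astype(int)
        wts = np.abs(subband[iy, ix]) + 1e-9
        wts /= wts.sum()
        # Weighted-mean estimate, then multinomial resampling.
        est = (wts[:, None] * self.p).sum(axis=0)
        idx = rng.choice(len(wts), size=len(wts), p=wts)
        self.p = self.p[idx]
        return est, wts.max()           # estimate in subband coords, confidence

def synthetic_frame(t, size=128):
    """Toy video: a bright 8x8 square drifting diagonally over a noisy background."""
    f = rng.normal(0.0, 0.1, (size, size))
    x, y = 20 + t, 30 + t
    f[y:y + 8, x:x + 8] += 1.0
    return f

N = 3                                    # number of independent subband filters
filters = [ParticleFilter((64, 64)) for _ in range(N)]

for t in range(40):
    subbands = haar_subbands(synthetic_frame(t))
    # Per frame, each filter is assigned a (here: randomly chosen) subband.
    chosen = rng.choice(len(subbands), size=N, replace=False)
    estimates, confidences = [], []
    for pf, k in zip(filters, chosen):
        est, conf = pf.step(subbands[k])
        estimates.append(est); confidences.append(conf)
    # Fuse: confidence-weighted average, scaled back to full-frame coordinates.
    w = np.asarray(confidences); w /= w.sum()
    fused = 2.0 * (w[:, None] * np.asarray(estimates)).sum(axis=0)
    print(f"frame {t:2d}: fused track estimate x={fused[0]:5.1f}, y={fused[1]:5.1f}")
```

In the paper's setting the likelihood, subband-selection rule, and fusion weights would be tailored to the tracked objects' appearance; the random subband choice and confidence-weighted average above are placeholders for those design decisions.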
