Robust tracking of multiple objects in video by adaptive fusion of subband particle filters

IET Computer Vision

Tracking of moving objects in video sequences is an important research problem because of its many industrial, biomedical, and security applications. Significant progress has been made on this topic in the last few decades. However, tracking objects accurately in video sequences under challenging conditions and unexpected events (e.g. background motion and shadows; objects of different sizes and contrasts; sudden changes in illumination; partial object camouflage; and low signal-to-noise ratio) remains an important research problem. To address these difficulties, the authors developed a robust multiscale visual tracker that represents each captured video frame as a set of subbands in the wavelet domain. It then applies N independent particle filters to a small subset of these subbands, where the chosen subset of wavelet subbands changes with each captured frame. Finally, it fuses the outputs of the N independent particle filters to obtain the final position tracks of multiple moving objects in the video sequence. To demonstrate the robustness of their multiscale visual tracker, the authors applied it to four example videos that exhibit different challenges. Compared with a standard full-resolution particle-filter-based tracker and a tracker based on a single wavelet subband ((LL)²), their multiscale tracker achieves significantly better tracking performance.
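The pipeline the abstract describes (wavelet decomposition of each frame, independent per-subband particle filters, and adaptive fusion of their outputs) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the one-level Haar analysis, the random-walk motion model, the subband-energy likelihood, and the function names (`haar_subbands`, `particle_filter_step`, `fuse_estimates`) are all simplifications chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_subbands(frame):
    """One-level 2-D Haar decomposition of a frame (even dimensions)
    into the four half-resolution subbands LL, LH, HL, HH."""
    a = frame[0::2, 0::2]; b = frame[0::2, 1::2]
    c = frame[1::2, 0::2]; d = frame[1::2, 1::2]
    return {"LL": (a + b + c + d) / 4, "LH": (a + b - c - d) / 4,
            "HL": (a - b + c - d) / 4, "HH": (a - b - c + d) / 4}

def particle_filter_step(particles, subband, motion_std=1.0):
    """One predict/update/resample cycle of a toy bootstrap particle filter.
    The likelihood is the local subband energy at each particle's (row, col)
    position, a stand-in for a real appearance model. Returns the resampled
    particles, the weighted position estimate, and the mean likelihood
    (a quality score used below for adaptive fusion)."""
    h, w = subband.shape
    # Predict: random-walk motion model, clipped to the subband extent.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
    # Update: weight each particle by the subband energy it sits on.
    idx = particles.astype(int)
    lik = np.abs(subband[idx[:, 0], idx[:, 1]]) + 1e-9
    weights = lik / lik.sum()
    estimate = (particles * weights[:, None]).sum(axis=0)
    # Systematic resampling keeps the particle set on likely positions.
    n = len(particles)
    u = (np.arange(n) + rng.random()) / n
    keep = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
    return particles[keep], estimate, lik.mean()

def fuse_estimates(frame, filters, chosen):
    """Run one independent particle filter per chosen subband, then fuse
    their position estimates with weights proportional to each filter's
    mean likelihood. Estimates are in subband (half-resolution) coordinates."""
    subbands = haar_subbands(frame)
    estimates, scores = [], []
    for name in chosen:
        filters[name], est, score = particle_filter_step(filters[name], subbands[name])
        estimates.append(est)
        scores.append(score)
    w = np.array(scores)
    w /= w.sum()
    return (np.array(estimates) * w[:, None]).sum(axis=0)
```

In a full tracker of this kind, `chosen` would be re-selected at every captured frame (the adaptive subset in the abstract), one such bank of filters would be maintained per tracked object, and the fused subband-domain estimate would be scaled by 2 per decomposition level to recover full-resolution coordinates.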

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2018.5376