Fusing target information from multiple views for robust visual tracking

In this study, the authors address the problem of tracking a single target in a calibrated multi-camera surveillance system, given its location in the first frame of each view. Recently, tracking with online multiple instance learning (OMIL) has been shown to give promising results. However, it may fail in a real surveillance system because of target orientation, scale or illumination changes. The authors show that fusing target information from multiple views can avoid these problems and lead to a more robust tracker. At each camera node, an efficient OMIL algorithm is used to model target appearance. To update the OMIL-based classifier in one view, a co-training strategy is applied to generate a representative set of training bags from all views. Bags extracted from each view carry a unique weight that depends on the similarity of target appearance between the current view and the view whose classifier is being updated. In addition, target motion on each camera's image plane is modelled by a modified particle filter guided by both the corresponding two-dimensional (2D) object location and the fused three-dimensional (3D) location. Experimental results demonstrate that the proposed algorithm is robust for human tracking in challenging scenes.
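The abstract states that training bags from each view are weighted by the similarity of target appearance between that view and the view whose classifier is being updated. The paper does not specify the similarity measure here, so the sketch below assumes normalised colour histograms compared with the Bhattacharyya coefficient, a common choice in appearance-based tracking; the function names are illustrative, not the authors' API.

```python
import numpy as np

def bhattacharyya(p, q):
    # Bhattacharyya coefficient between two normalised appearance
    # histograms: 1.0 for identical distributions, smaller otherwise.
    return float(np.sum(np.sqrt(p * q)))

def view_bag_weights(target_hists, view_i):
    # Weight assigned to training bags from each view j when updating
    # view i's classifier: proportional to the appearance similarity
    # between view j and view i, normalised to sum to 1.
    sims = np.array([bhattacharyya(target_hists[view_i], h)
                     for h in target_hists])
    return sims / sims.sum()

# Toy example: three camera views with 4-bin appearance histograms.
# Views 0 and 1 see the target similarly; view 2 sees it differently.
hists = [np.array([0.40, 0.30, 0.25, 0.05]),
         np.array([0.35, 0.30, 0.25, 0.10]),
         np.array([0.05, 0.20, 0.30, 0.45])]
w = view_bag_weights(hists, view_i=0)
```

Under this weighting, bags from the view being updated and from views with similar target appearance dominate the classifier update, while dissimilar views (e.g. one seeing the target's back) contribute less.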

Inspec keywords: image motion analysis; learning (artificial intelligence); object tracking; computer vision

Other keywords: target tracking; particle filter; human tracking; target appearance; calibrated multi-camera surveillance system; target motion; multiple views; online multiple instance learning; object two-dimensional location; co-training strategy; OMIL-based classifier; robust visual tracking; fused 3D location; 2D location

Subjects: Knowledge engineering techniques; Computer vision and image processing techniques; Optical, image and video signal processing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2013.0026