Improved appearance updating method in multiple instance learning tracking

The multiple instance learning (MIL) tracker has recently become very popular because of its success in complex scenes. Appearance updating, which dynamically reflects the appearance changes of the tracked object, plays an important role in tracking. In the original MIL tracker, the appearance model is assumed to follow a normal distribution, and its updating rule is a simple linearly weighted sum of the previous target distribution and the distribution estimated in the current frame. However, this updating rule has not been justified theoretically. In this work, the authors derive a novel appearance updating method by estimating, under maximum likelihood, the mean and variance of the merger of two normal distributions. The method extends naturally to multivariate distributions, which is useful for tracking colour objects. Experimental results on several benchmark video sequences show that the method achieves higher precision and reliability than three state-of-the-art trackers.
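The sketch below contrasts the two update rules described in the abstract: the original MIL-style linear interpolation of the old and current parameters, and a maximum-likelihood merge that treats the two normal distributions as summaries of pooled samples and re-estimates the mean and variance of their union. The learning rate lr and the effective sample counts n_old/n_new are illustrative assumptions, not values taken from the paper; the authors' exact derivation may weight the components differently.

```python
import numpy as np

def linear_update(mu_old, sigma_old, mu_new, sigma_new, lr=0.85):
    """Original MIL-style rule: linearly weighted sum of the old and
    current parameters, with lr as the weight on the old model."""
    mu = lr * mu_old + (1.0 - lr) * mu_new
    sigma = np.sqrt(lr * sigma_old ** 2 + (1.0 - lr) * sigma_new ** 2)
    return mu, sigma

def ml_merge(mu_old, sigma_old, n_old, mu_new, sigma_new, n_new):
    """Maximum-likelihood merge of two normal distributions: treat them as
    summaries of n_old and n_new samples and estimate the mean/variance of
    the pooled sample. n_old and n_new are hypothetical effective sample
    counts used only for illustration."""
    n = n_old + n_new
    mu = (n_old * mu_old + n_new * mu_new) / n
    # Pooled second moment, then subtract the squared pooled mean.
    second_moment = (n_old * (sigma_old ** 2 + mu_old ** 2)
                     + n_new * (sigma_new ** 2 + mu_new ** 2)) / n
    sigma = np.sqrt(np.maximum(second_moment - mu ** 2, 1e-12))
    return mu, sigma

if __name__ == "__main__":
    # Same inputs, two different updated models.
    print(linear_update(0.0, 1.0, 0.5, 0.8))
    print(ml_merge(0.0, 1.0, n_old=50, mu_new=0.5, sigma_new=0.8, n_new=10))
```

Unlike the fixed-weight linear rule, the maximum-likelihood merge lets the relative confidence in the old and new estimates (here encoded by the sample counts) determine how strongly the model adapts, and it carries over unchanged to multivariate Gaussians by replacing variances with covariance matrices.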
