Multiple hypothesis tracking algorithm for multi-target multi-camera tracking with disjoint views

IET Image Processing

In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. The authors' method forms track-hypothesis trees whose branches each represent a multi-camera track of a target that may move within a camera as well as across cameras. Multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating the status of each track hypothesis. Each status represents one of three stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means the target is being followed by a single-camera tracker. In the searching status, a disappeared target is examined to determine whether it reappears in another camera. The end-of-track status means the target is presumed to have exited the camera network because of its lengthy invisibility. These three statuses allow MHT to form the track-hypothesis trees for multi-camera tracking. Furthermore, a gating technique that eliminates unlikely observation-to-track associations using space-time information is introduced. In the experiments, the proposed method is tested on two datasets, DukeMTMC and NLPR_MCT, and is shown to outperform state-of-the-art methods in accuracy. In addition, the real-time, online performance of the proposed method is also demonstrated.
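The three-status lifecycle and the space-time gate described above can be sketched as a minimal state machine. This is an illustrative reconstruction only: the class and function names, the frame-based invisibility threshold, and the transit-time bounds are assumptions for clarity, not details taken from the paper.

```python
from enum import Enum

class Status(Enum):
    TRACKING = 1      # target followed by a single-camera tracker
    SEARCHING = 2     # target disappeared; look for reappearance in other cameras
    END_OF_TRACK = 3  # target presumed to have exited the camera network

class TrackHypothesis:
    """Hypothetical sketch of one branch of a track-hypothesis tree."""

    def __init__(self, track_id, max_invisible_frames=300):
        self.track_id = track_id
        self.status = Status.TRACKING
        self.invisible_frames = 0
        self.max_invisible_frames = max_invisible_frames  # assumed threshold

    def update(self, observed_in_current_camera, reappeared_in_other_camera=False):
        """Advance the status machine by one frame and return the new status."""
        if self.status is Status.TRACKING:
            if not observed_in_current_camera:
                # Target left the field of view: start searching other cameras.
                self.status = Status.SEARCHING
                self.invisible_frames = 1
        elif self.status is Status.SEARCHING:
            if reappeared_in_other_camera:
                self.status = Status.TRACKING
                self.invisible_frames = 0
            else:
                self.invisible_frames += 1
                if self.invisible_frames > self.max_invisible_frames:
                    # Lengthy invisibility: assume the target exited the network.
                    self.status = Status.END_OF_TRACK
        return self.status

def space_time_gate(exit_time, entry_time, min_transit, max_transit):
    """Illustrative gate: prune an observation-to-track association whose
    inter-camera transit time is physically implausible."""
    transit = entry_time - exit_time
    return min_transit <= transit <= max_transit
```

In this sketch, only associations that pass `space_time_gate` would be allowed to extend a branch of the track-hypothesis tree, which is how gating keeps the hypothesis space tractable.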


