Calibration and object correspondence in camera networks with widely separated overlapping views

This study makes two contributions to multi-camera object tracking for visual surveillance. First, a semi-automatic scene calibration method is proposed that maps a network of cameras with overlapping fields of view onto a single ground plane view, even when the overlap is small. The method uses a semi-supervised approach that combines tracked blobs with user-selected line features to recover the homographies between camera views in a way that is both simple and accurate. Second, using the scene calibration information, the intersection points of the projected vertical axes of single-camera blobs are used to establish object correspondences across multiple views. The method works in mixed environments containing both pedestrians and vehicles, and is shown to be accurate and robust against segmentation noise and occlusions. Finally, the advantages of the proposed method are demonstrated by quantitative tracking performance evaluation and comparison against previous methods.
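The ground-plane mapping the abstract describes rests on planar homographies between views. As a point of reference only, the following is a minimal NumPy sketch of the standard direct linear transform (DLT) estimator from point correspondences, not the paper's semi-supervised variant that also incorporates user-selected line features; the function names are illustrative.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src, homogeneous) from
    four or more point pairs using the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the right null vector of A, i.e. the
    # singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

def project(H, pt):
    """Map an image point through H with the homogeneous division."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```

Once such a homography is recovered per camera, the foot point (or projected vertical axis) of each blob can be transferred onto the common ground plane, where correspondences across views reduce to proximity tests.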

Inspec keywords: object tracking; feature selection; road vehicles; calibration; video cameras; pedestrians; image segmentation; video surveillance

Other keywords: occlusions; single ground plane view; semiautomatic scene calibration method; semisupervised approach; visual surveillance; intersection points; user-selected line scene features; object correspondence; camera views; scene calibration information; multicamera object tracking; quantitative tracking performance evaluation; pedestrians; widely separated overlapping views; segmentation noise; homography recovery; vehicles; projected vertical axis; single camera blobs; camera network mapping

Subjects: Image recognition; Measurement standards and calibration; Video signal processing; Computer vision and image processing techniques
