Surveillance video synopsis generation method via keeping important relationship among objects

To reduce the human effort required to browse long surveillance videos, synopsis videos have been proposed. Traditional synopsis generation methods condense most of the activities in a video by showing several actions simultaneously, even when they originally occurred at different times. This inevitably ignores the temporal relationships among objects: for example, two people who walk shoulder to shoulder are detected and tracked separately, and in the synopsis they never ‘meet’. In this study, a trajectory mapping model is defined whose energy function includes not only the cost incurred by the synopsis video but also that of the original video. In this way, it keeps the relationships between objects in the synopsis consistent with those in the original video. Finally, the video synopsis is generated by an energy minimisation method. Experiments show that the proposed method reduces the spatiotemporal redundancy of the input video as much as possible. Moreover, it preserves the important relationships between objects and maintains the temporal consistency of important activities.
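The paper's actual energy function is not reproduced in this abstract; the following is only a minimal, hypothetical Python sketch of the general idea it describes: activity tubes are time-shifted into a shorter synopsis, and the energy combines a synopsis-length cost with a relationship term that penalises any change in the pairwise temporal overlap two tubes had in the original video. The interval-based tube representation, the weight `w_rel`, and the brute-force minimiser are all illustrative assumptions, not the authors' formulation; practical systems optimise over pixel-level tubes with methods such as graph cuts or simulated annealing.

```python
from itertools import product

def overlap(a_start, a_len, b_start, b_len):
    """Length of the temporal overlap between two intervals."""
    return max(0, min(a_start + a_len, b_start + b_len) - max(a_start, b_start))

def energy(shifts, tubes, synopsis_len, w_rel=2.0):
    """Toy energy for a tube arrangement (illustrative only).

    shifts[i] is the new start time of tube i in the synopsis.
    The energy combines (a) a length cost for frames running past the
    target synopsis length and (b) a relationship term that penalises
    pairs of tubes whose temporal overlap in the synopsis differs from
    their overlap in the original video.
    """
    e = 0.0
    for i, (s_i, d_i) in enumerate(tubes):
        # cost of running past the target synopsis length
        e += max(0, shifts[i] + d_i - synopsis_len)
        for j in range(i + 1, len(tubes)):
            s_j, d_j = tubes[j]
            orig = overlap(s_i, d_i, s_j, d_j)
            new = overlap(shifts[i], d_i, shifts[j], d_j)
            # keep co-occurring objects co-occurring (and vice versa)
            e += w_rel * abs(orig - new)
    return e

def minimise(tubes, synopsis_len):
    """Exhaustive search over integer start times (feasible only for toy inputs)."""
    starts = range(synopsis_len)
    return min(product(starts, repeat=len(tubes)),
               key=lambda sh: energy(sh, tubes, synopsis_len))

# Two tubes that overlap in the original (frames 0-9 and 5-14), plus one
# later, disjoint tube (frames 30-39); condense into a 20-frame synopsis.
tubes = [(0, 10), (5, 10), (30, 10)]
best = minimise(tubes, synopsis_len=20)
```

In this toy instance the minimiser must trade a small overrun of the target length against breaking the original co-occurrence of the first two tubes; with `w_rel` larger than the per-frame length cost, it prefers the overrun and keeps the pair together, which is the behaviour the relationship term is meant to encourage.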

Inspec keywords: video surveillance

Other keywords: surveillance video synopsis generation method; temporal relationship; trajectory mapping model; spatiotemporal redundancy; energy minimisation method; energy function

Subjects: Image recognition; Video signal processing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2016.0128