Sequence-to-sequence alignment using a pendulum

Analysing two or more video sequences of a dynamic scene typically requires time synchronisation between the sequences, and this alignment cannot always be achieved in hardware. A typical software method processes the entire, often lengthy, imaged material, incurring additional computation that serves synchronisation alone. Software-based synchronisation methods impose, in essentially all cases, certain assumptions about the imaged three-dimensional (3D) scene, and they are suited only to video material that has already been recorded. The authors argue that there are applications where the unsynchronised video sequences have not yet been taken. Their time-efficient solution uses a pendulum consisting of a small ball attached to a 50 cm string and suspended from a pivot so that it can swing freely. The authors estimate the time instant at which the ball swings through the equilibrium position; the difference between these instants for two cameras yields a subframe time offset between them. The proposed method yields subframe differences that are statistically indistinguishable from ground-truth data, and 3D reconstruction results for synchronised data clearly outperform those for unsynchronised data. The proposed method imposes no restrictions or assumptions on the 3D scene that will be imaged later, yet it allows accurate subframe synchronisation in less than a second.
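The core of the estimate described above can be sketched as follows. This is a minimal illustration with synthetic tracking data, not the authors' exact procedure: the 30 fps frame rate, the pixel amplitude, and the use of linear interpolation between the two frames straddling the equilibrium to locate the subframe crossing instant are all assumptions; only the 50 cm string length (which fixes the small-angle pendulum frequency) comes from the abstract.

```python
import math

FPS = 30.0          # assumed frame rate of both cameras (not stated in the abstract)
G, L = 9.81, 0.5    # gravity (m/s^2) and the 50 cm string length from the abstract
F_PEND = math.sqrt(G / L) / (2.0 * math.pi)   # small-angle pendulum frequency, ~0.705 Hz

def ball_positions(start_time, n_frames, fps=FPS):
    """Horizontal ball position (pixels) per frame; a synthetic stand-in
    for positions tracked in a real video sequence."""
    times = [k / fps for k in range(n_frames)]   # each camera's own clock
    xs = [100.0 * math.sin(2.0 * math.pi * F_PEND * (t + start_time)) for t in times]
    return times, xs

def equilibrium_crossing(times, xs):
    """Sub-frame instant of the first upward pass through x = 0 (equilibrium),
    located by linear interpolation between the two straddling frames."""
    for i in range(len(xs) - 1):
        if xs[i] < 0.0 <= xs[i + 1]:
            frac = -xs[i] / (xs[i + 1] - xs[i])
            return times[i] + frac * (times[i + 1] - times[i])
    raise ValueError('no upward equilibrium crossing in the sequence')

# Camera B starts recording 0.4 of a frame period after camera A.
true_offset = 0.4 / FPS
tA, xA = ball_positions(0.0, 60)
tB, xB = ball_positions(true_offset, 60)

# The difference of the crossing instants, each measured on its own camera's
# clock, is the sub-frame time offset between the two cameras.
offset = equilibrium_crossing(tA, xA) - equilibrium_crossing(tB, xB)
```

Because the pendulum's motion is fastest at equilibrium, the zero crossing is where its position is most sensitive to timing, which is why interpolating the crossing instant recovers the offset to well within a frame period.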

Inspec keywords: video signal processing; image sequences

Other keywords: software based synchronisation methods; imaged material; pendulum; 3D scene; imaged video material; subframe time difference; sequence-to-sequence alignment; video sequences; time synchronisation; equilibrium position

Subjects: Computer vision and image processing techniques; Video signal processing; Optical, image and video signal processing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2014.0075