Measuring similarity between geo-tagged videos using largest common view

This Letter introduces the problem of discovering similar trajectories based on the field of view of video data. The problem is important for many societal applications, such as grouping moving objects, classifying geo-images, and identifying interesting trajectory patterns. Prior works consider either spatial locations only or the spatial relationship between two line segments; these approaches are therefore limited in finding similar moving objects that share common views. In this Letter, the authors propose a new algorithm that uses both spatial locations and points of view to identify similar trajectories. The authors also propose novel methods that reduce the computational cost of the proposed approach. Experimental results on real-world datasets demonstrate that the proposed approach outperforms prior work while reducing computational cost.
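To make the idea concrete, the following is a minimal sketch (not the authors' algorithm) of how a similarity score might combine spatial proximity with view-direction overlap. It assumes each trajectory is a sequence of `(x, y, heading)` samples with heading in degrees, and that the two trajectories are already time-aligned; the `fov` and `dist_scale` parameters are illustrative placeholders.

```python
import math

def fov_overlap(h1, h2, fov=60.0):
    """Angular overlap (in degrees) between two camera view sectors of
    width `fov`, centred on headings h1 and h2."""
    # Smallest absolute angle between the two headings, in [0, 180].
    diff = abs((h1 - h2 + 180.0) % 360.0 - 180.0)
    return max(0.0, fov - diff)

def trajectory_similarity(t1, t2, fov=60.0, dist_scale=50.0):
    """Score two equal-length, time-aligned trajectories of (x, y, heading)
    samples. Each aligned pair contributes spatial closeness weighted by
    normalised field-of-view overlap; the result lies in [0, 1]."""
    score = 0.0
    for (x1, y1, h1), (x2, y2, h2) in zip(t1, t2):
        spatial = math.exp(-math.hypot(x1 - x2, y1 - y2) / dist_scale)
        view = fov_overlap(h1, h2, fov) / fov
        score += spatial * view
    return score / max(len(t1), 1)
```

A purely location-based measure would score two cameras driving the same street in opposite directions as highly similar; weighting by view overlap, as above, distinguishes them, which is the limitation of prior work that the Letter targets.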


