Evaluation of two-part algorithms for objects’ depth estimation

IET Computer Vision

During the last decade, a wealth of research has been devoted to building integrated vision systems capable of both recognising objects and providing their spatial information. Object recognition and pose estimation are among the most popular and challenging tasks in computer vision. Towards this end, the authors propose a novel algorithm for estimating objects' depth. Moreover, they comparatively study two common two-part approaches, namely the scale-invariant feature transform (SIFT) and the speeded-up robust features (SURF) algorithm, in the particular application of locating an object in a scene relative to the camera, based on the proposed algorithm. Experimental results support the authors' claim that an accurate estimate of an object's depth in a scene can be obtained by taking into account the distribution of extracted features over the target's surface.
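The central idea — inferring depth from how extracted features are distributed over the target's surface — can be sketched as follows. This is a hedged illustration, not the authors' exact formulation: it assumes a simple pinhole model in which the image-plane spread of an object's matched keypoints (from SIFT, SURF, or any detector) scales inversely with its distance from the camera, so the object imaged once at a known reference depth can be used to estimate its depth in a new scene. The function names and the reference-depth calibration step are illustrative assumptions.

```python
import numpy as np

def feature_spread(points):
    """RMS distance of keypoint coordinates from their centroid —
    a simple summary of how features are distributed over the target."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())

def estimate_depth(ref_points, ref_depth, query_points):
    """Under a pinhole model, image-plane spread is roughly inversely
    proportional to depth, so spread * depth is approximately constant
    for the same object.  ref_points are the object's keypoints observed
    at the known depth ref_depth; query_points are the matched keypoints
    of the same object in the query image."""
    return ref_depth * feature_spread(ref_points) / feature_spread(query_points)

# Toy example: the "query" keypoints are the reference ones shrunk by 2x,
# as if the same object were imaged at twice the reference distance.
ref = [(100, 100), (140, 100), (100, 160), (140, 160)]
query = [(x * 0.5, y * 0.5) for (x, y) in ref]
print(estimate_depth(ref, ref_depth=1.0, query_points=query))  # ~2.0
```

In practice the keypoints would come from matching a stored object model against the scene; the spread statistic stands in for whatever feature-distribution measure the paper actually uses.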

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2009.0094