Robust visual odometry estimation of road vehicle from dominant surfaces for large-scale mapping

IET Intelligent Transport Systems
Every urban environment contains a rich set of dominant surfaces that can provide a solid foundation for visual odometry estimation. In this work, visual odometry is robustly estimated by computing the motion of a camera mounted on a vehicle. The proposed method first identifies a planar region and dynamically estimates the plane parameters. The candidate region and estimated plane parameters are then tracked through subsequent images, and an incremental update of the visual odometry is obtained. The proposed method is evaluated on a navigation dataset of stereo images captured by a car-mounted camera driven through a large urban environment. The consistency and resilience of the method have also been evaluated on an indoor robot dataset. The results suggest that the proposed visual odometry estimation can robustly recover the motion by tracking a dominant planar surface in a Manhattan-world environment. In addition to the motion estimation solution, a set of strategies is discussed for mitigating the problematic factors arising from the unpredictable nature of the environment. The analysis of the results, together with the dynamic environmental strategies, indicates a strong potential for the method to form part of an autonomous or semi-autonomous system.
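A core step in tracking a dominant planar surface between consecutive frames is fitting the homography that maps points on the plane from one image to the next. The sketch below shows a minimal direct linear transform (DLT) estimator in plain NumPy; the function names (`estimate_homography`, `apply_homography`) and the DLT formulation are illustrative assumptions for exposition, not the authors' implementation, which additionally estimates plane parameters and composes incremental odometry updates.

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Estimate the 3x3 homography mapping pts_src -> pts_dst via DLT.

    pts_src, pts_dst: Nx2 arrays (N >= 4) of corresponding image points
    lying on the dominant planar surface in two consecutive frames.
    """
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        # Each correspondence contributes two linear constraints on H,
        # derived from u = (h1.p)/(h3.p) and v = (h2.p)/(h3.p).
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (stacked row-wise) is the null vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale (assumes H[2,2] != 0)

def apply_homography(H, pts):
    """Map Nx2 points through H with homogeneous normalization."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = (H @ p.T).T
    return q[:, :2] / q[:, 2:3]
```

In a full pipeline the estimated homography would be decomposed, using the camera intrinsics and the tracked plane parameters, into the rotation and translation that provide the incremental odometry update; in practice a robust estimator such as RANSAC would also be wrapped around the DLT fit to reject correspondences that fall off the dominant plane.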

