Accurate object distance estimation based on frequency-domain analysis with a stereo camera

Precise and robust distance measurement is one of the most important requirements for driver-assistance and automated-driving systems. In this study, the authors propose a new method for accurate distance measurement with a stereo camera, based on frequency-domain analysis of the captured images. The proposed method was extensively tested and evaluated on real urban roads, highways, and in a tunnel. Based on these results, the authors show that the proposed method provides more precise distance information in real time than conventional algorithms. Applied to measuring the distances of various objects, their algorithm improves accuracy by up to 10%.
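The core idea of frequency-domain stereo ranging can be illustrated with a minimal sketch: estimate the horizontal disparity between corresponding left/right image rows via phase correlation (an FFT-based technique, as in the cited work of Ahlvers et al.), then convert disparity to distance with the pinhole stereo model Z = f·B/d. This is an illustrative reconstruction, not the authors' actual pipeline; the 1-D profiles, focal length, and baseline below are hypothetical values.

```python
import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the horizontal shift (disparity, in pixels) between two
    1-D intensity profiles via phase correlation in the frequency domain."""
    L = np.fft.fft(left)
    R = np.fft.fft(right)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # normalised cross-power spectrum
    corr = np.fft.ifft(cross).real          # impulse at the shift location
    shift = int(np.argmax(corr))
    n = len(left)
    return shift if shift <= n // 2 else shift - n  # unwrap negative shifts

def distance_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Toy example: the right view is the left profile circularly shifted by 8 px.
x = np.linspace(0, 4 * np.pi, 256)
left = np.sin(x) + 0.5 * np.sin(3 * x)
right = np.roll(left, -8)                   # simulate an 8-pixel disparity
d = phase_correlation_shift(left, right)    # recovers d = 8
z = distance_from_disparity(d, focal_px=800.0, baseline_m=0.5)  # 50.0 m
```

Phase correlation localises the shift as a sharp impulse in the correlation surface, which is what makes frequency-domain disparity estimation attractive for real-time use: two FFTs and one inverse FFT per row, independent of the disparity search range.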

Inspec keywords: distance measurement; road traffic; traffic engineering computing; image sensors; frequency-domain analysis; stereo image processing

Other keywords: stereo camera; real urban road; tunnel; frequency-domain analysis; accurate object distance estimation; highway; automated driving systems; robust distance measurement; driving assistance systems

Subjects: Mathematical analysis; Image sensors; Optical, image and video signal processing; Computer vision and image processing techniques; Spatial variables measurement; Traffic engineering computing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-its.2016.0110