Coarse-to-fine 3D road model registration for traffic video augmentation

This study addresses the problem of non-perspective pose estimation from line correspondences in traffic scenarios. A coarse-to-fine 3D road registration method is proposed, operating in two stages. First, the iterative closest point algorithm is exploited to estimate the pose coarsely. An objective function is then established that incorporates the feature correspondences to refine the coarse pose. In addition, a framework built around road registration is employed for traffic video augmentation. The framework takes as input traffic videos, road information from Geographic Information Systems and 3D models of traffic elements (e.g. vehicles, pedestrians). Subsequently, 3D road model generation and point-to-line correspondence establishment are performed in the preprocessing stage. After road and viewpoint registration, a 3D graphics engine is employed to simulate the traffic scene with the road, viewpoints and traffic elements. The augmented videos are generated by fusing the original frames with the newly projected traffic elements. The authors demonstrate the superiority of the proposed registration method through comparison with state-of-the-art methods in both quantitative and qualitative experiments. In addition, frames of the augmented videos validate the proposed method in this application.
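The coarse stage relies on the iterative closest point (ICP) algorithm of Besl and McKay. At the heart of each ICP iteration is a closed-form rigid alignment (the Kabsch/SVD solution) between matched point sets. The sketch below illustrates only that inner alignment step, assuming correspondences are already known; the function name, the toy data and the simulated pose are illustrative, not part of the authors' implementation.

```python
import numpy as np

def align_points(model, scene):
    """Closed-form rigid alignment (Kabsch/SVD), the inner step of ICP:
    find R, t minimising sum ||R @ model_i + t - scene_i||^2 over known pairs."""
    mu_m = model.mean(axis=0)
    mu_s = scene.mean(axis=0)
    H = (model - mu_m).T @ (scene - mu_s)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t

# Toy example: transform a synthetic road-model point set by a known pose
# (yaw rotation + translation) and recover that pose.
rng = np.random.default_rng(0)
model = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
scene = model @ R_true.T + t_true

R_est, t_est = align_points(model, scene)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

In a full ICP loop this alignment alternates with a nearest-neighbour matching step; the refinement stage of the paper then replaces the point-to-point cost with the point-to-line objective built from the established correspondences.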
