Static map reconstruction and dynamic object tracking for a camera and laser scanner system
- Author(s): Cheng Zou 1; Bingwei He 1; Liwei Zhang 1; Jianwei Zhang 2
- DOI: 10.1049/iet-cvi.2017.0308
Affiliations:
1: School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, People's Republic of China
2: TAMS, Department of Informatics, University of Hamburg, Hamburg, Germany
- Source: IET Computer Vision, Volume 12, Issue 4, June 2018, pp. 384–392
DOI: 10.1049/iet-cvi.2017.0308, Print ISSN 1751-9632, Online ISSN 1751-9640
Abstract: Simultaneous localisation and mapping (SLAM) and navigation for vision-based mobile robots in dynamic environments remain highly challenging problems in robot vision. The goal of this study is to reconstruct a static map and track dynamic objects with a combined camera and laser scanner system. An improved automatic calibration is designed to merge images and laser point clouds. The fused data are then exploited to detect slowly moving objects and to reconstruct the static map. Tracking-by-detection requires the correct assignment of noisy detection results to object trajectories; in the proposed method, 3D motion models are combined with object appearance to handle occluded regions and the difficulties of crowded scenes. The proposed method was validated by experimental results gathered in a real environment and on publicly available data.
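The image–point-cloud fusion described above rests on an extrinsic calibration between the laser scanner and the camera. As a rough illustration only (the paper's automatic calibration for its camera–laser rig is more involved), the following sketch projects laser points into an image under an assumed pinhole model with intrinsic matrix K and extrinsics (R, t); all names and values here are illustrative, not taken from the paper.

```python
import numpy as np

def project_laser_to_image(points_laser, K, R, t):
    """Project 3D laser points into the image plane.

    points_laser : (N, 3) points in the laser frame.
    K            : (3, 3) camera intrinsic matrix.
    R, t         : extrinsics taking laser-frame points into the
                   camera frame (R is (3, 3), t is (3,)).
    Returns (M, 2) pixel coordinates and the indices of the points
    that lie in front of the camera.
    """
    # Transform into the camera frame.
    p_cam = points_laser @ R.T + t
    # Keep only points with positive depth (in front of the camera).
    in_front = p_cam[:, 2] > 0
    p_cam = p_cam[in_front]
    # Pinhole perspective projection, then dehomogenise.
    p_img = p_cam @ K.T
    pixels = p_img[:, :2] / p_img[:, 2:3]
    return pixels, np.flatnonzero(in_front)
```

Similarly, the assignment of noisy detections to trajectories can be sketched as a cost-matrix problem solved with the Hungarian algorithm, combining a motion-model prediction error with an appearance distance. This is a generic tracking-by-detection skeleton under assumed inputs (w_motion, gate, and the L2-normalised descriptors are hypothetical choices), not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_pred, dets_pos, tracks_app, dets_app,
              w_motion=0.5, gate=2.0):
    """Assign detections to tracks with a combined cost.

    tracks_pred : (T, 3) predicted 3D positions, e.g. from a
                  per-track constant-velocity motion model.
    dets_pos    : (D, 3) detected 3D positions.
    tracks_app, dets_app : (T, F) / (D, F) appearance descriptors,
                  assumed L2-normalised.
    Returns a list of (track_idx, det_idx) pairs within the gate.
    """
    # Motion cost: Euclidean distance between predictions and detections.
    motion = np.linalg.norm(
        tracks_pred[:, None, :] - dets_pos[None, :, :], axis=2)
    # Appearance cost: 1 - cosine similarity of the descriptors.
    appearance = 1.0 - tracks_app @ dets_app.T
    cost = w_motion * motion + (1.0 - w_motion) * appearance
    rows, cols = linear_sum_assignment(cost)
    # Gate out implausible matches (occluded, missed, or new objects).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```

In such a skeleton, unmatched tracks would be coasted on their motion model through occlusions and unmatched detections would spawn new tracks, consistent with the occlusion handling the abstract describes.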
Inspec keywords: image reconstruction; SLAM (robots); object detection; object tracking; robot vision
Other keywords: tracking-by-detection; simultaneous localisation and mapping and navigation capability; dynamic object tracking; 3D motion models; vision-based mobile robot; static map reconstruction
Subjects: Computer vision and image processing techniques; Optical, image and video signal processing; Mobile robots