Static map reconstruction and dynamic object tracking for a camera and laser scanner system

Simultaneous localisation and mapping (SLAM) and navigation in dynamic environments remain challenging problems for vision-based mobile robots. The goal of this study is to reconstruct a static map and track dynamic objects with a combined camera and laser scanner system. An improved automatic calibration is designed to merge images with laser point clouds. The fused data are then exploited to detect slowly moving objects and to reconstruct the static map. Tracking-by-detection requires the correct assignment of noisy detection results to object trajectories; to manage occlusions in crowded scenes, the proposed method combines 3D motion models with object appearance. The method was validated on experimental data gathered in a real environment and on publicly available datasets.
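
The abstract names two technical steps without detail: fusing laser points with camera images via extrinsic calibration, and assigning noisy detections to trajectories. The sketch below illustrates the first step under standard pinhole-camera assumptions; the function name and the convention that (R, t) maps laser coordinates into the camera frame are assumptions for illustration, not the paper's calibration method.

```python
import numpy as np

def project_laser_to_image(points_xyz, K, R, t):
    """Project Nx3 laser points (laser frame) into pixel coordinates,
    given camera intrinsics K (3x3) and extrinsics R (3x3), t (3,)
    mapping laser coordinates into the camera frame."""
    cam = points_xyz @ R.T + t           # rigid transform into the camera frame
    cam = cam[cam[:, 2] > 0]             # keep only points in front of the camera
    uv = cam @ K.T                       # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]        # perspective division -> pixel (u, v)
```

For the tracking-by-detection step, a common way to realise the "correct assignment of noisy detection results to object trajectories" with a cost that mixes a 3D motion term and an appearance term is the Hungarian (Munkres) solver. This is a minimal sketch of that general scheme, not the authors' implementation: the weights, gate threshold, and track/detection fields are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e6  # cost assigned to gated-out (implausible) pairs

def assignment_cost(tracks, dets, w_motion=0.7, w_app=0.3, gate=2.0):
    """Rows = tracks, columns = detections. Each track is assumed to carry
    a predicted 3D position 'pred_pos' (e.g. from a constant-velocity
    model) and a unit appearance feature 'feat'; each detection carries a
    measured position 'pos' and a feature 'feat'."""
    cost = np.zeros((len(tracks), len(dets)))
    for i, trk in enumerate(tracks):
        for j, det in enumerate(dets):
            motion = np.linalg.norm(trk["pred_pos"] - det["pos"])   # 3D distance
            appear = 1.0 - float(np.dot(trk["feat"], det["feat"]))  # cosine dissimilarity
            cost[i, j] = BIG if motion > gate else w_motion * motion + w_app * appear
    return cost

def match(tracks, dets):
    cost = assignment_cost(tracks, dets)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    # Gated pairs are discarded; unmatched tracks can then be treated as
    # occluded and unmatched detections as candidate new trajectories.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < BIG]
```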
