Static map reconstruction and dynamic object tracking for a camera and laser scanner system

Simultaneous localisation and mapping (SLAM) and navigation in dynamic environments remain highly problematic for vision-based mobile robots. The goal of this study is to reconstruct a static map and track dynamic objects with a combined camera and laser scanner system. An improved automatic calibration is designed to merge images with laser point clouds. The fused data are then exploited to detect slowly moving objects and to reconstruct the static map. Tracking-by-detection requires the correct assignment of noisy detection results to object trajectories. In the proposed method, 3D motion models are combined with object appearance in occluded regions to handle the difficulties of crowded scenes. The approach is validated by experimental results gathered both in a real environment and on publicly available data.
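As a concrete illustration of the fusion step, the sketch below projects laser points into the camera image using the extrinsic parameters recovered by calibration. It is a minimal pinhole-model example, assuming known intrinsics K and extrinsics (R, t); the function and variable names are illustrative and not taken from the paper's implementation.

    import numpy as np

    def project_points(points_lidar, K, R, t):
        """Project 3D laser points (N x 3, laser frame) into pixel coordinates."""
        pts_cam = points_lidar @ R.T + t    # rigid transform: laser -> camera frame
        in_front = pts_cam[:, 2] > 0        # drop points behind the image plane
        pts_cam = pts_cam[in_front]
        uv_h = pts_cam @ K.T                # pinhole projection (homogeneous)
        uv = uv_h[:, :2] / uv_h[:, 2:3]     # perspective divide -> pixel coordinates
        return uv, in_front

Points behind the camera are discarded before the perspective divide; in practice, lens distortion would also be corrected before projecting.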
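The assignment of noisy detections to trajectories is classically solved with the Hungarian (Munkres) method; the sketch below shows that baseline step with a (1 - IoU) cost and a gate that rejects weak matches. This is a common formulation, assuming SciPy is available, and is not necessarily the cost used in the proposed method.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def associate(track_boxes, det_boxes, max_cost=0.7):
        """Match detections to existing tracks; returns (track_idx, det_idx) pairs."""
        if not track_boxes or not det_boxes:
            return []
        cost = np.array([[1.0 - iou(tb, db) for db in det_boxes]
                         for tb in track_boxes])
        rows, cols = linear_sum_assignment(cost)  # Hungarian / Munkres method
        # Gate out weak matches: unmatched detections can seed new tracks,
        # unmatched tracks are candidates for occlusion handling.
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]

Pairs rejected by the gate are where motion and appearance cues of the kind described above take over, bridging occlusions instead of forcing a bad match.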

Inspec keywords: image reconstruction; SLAM (robots); object detection; object tracking; robot vision

Other keywords: tracking-by-detection; simultaneous localisation and mapping and navigation capability; dynamic object tracking; 3D motion models; vision-based mobile robot; static map reconstruction

Subjects: Computer vision and image processing techniques; Optical, image and video signal processing; Mobile robots
