Patterns of approximated localised moments for visual loop closure detection

In the context of autonomous mobile robot navigation, loop closing is defined as the correct identification of a previously visited location. Loop closing is essential for the accurate self-localisation of a robot; however, it is also challenging due to perceptual aliasing, which occurs when the robot traverses visually similar places (e.g. forests, parks, office corridors). In this study, the authors apply local Zernike moments (ZMs) to loop closure detection. When computed locally, ZMs provide high discrimination ability, which makes it possible to distinguish similar-looking places. In particular, the authors show that increasing the density at which the local ZMs are computed improves loop closing accuracy significantly. Furthermore, they present an approximation of ZMs that allows the use of integral images, enabling real-time operation. Experiments on real datasets with strong perceptual aliasing show that the proposed ZM-based descriptor outperforms state-of-the-art methods in terms of loop closure accuracy. The authors also release the source code of their implementation for research purposes.
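To make the two ingredients named in the abstract concrete, the sketch below (Python/NumPy) computes a dense local ZM response and then approximates the ZM filter with a small grid of constant boxes so that every response can be evaluated through an integral image. This is a minimal illustration only: the function names and the k x k box decomposition are assumptions made for this example, not the paper's exact approximation scheme, which should be taken from the full text.

```python
import numpy as np
from math import factorial

def zernike_filter(n, m, size):
    """Complex Zernike basis V_nm sampled on a size x size grid;
    values outside the unit disc are set to zero."""
    assert abs(m) <= n and (n - abs(m)) % 2 == 0
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    xn, yn = (x - c) / c, (y - c) / c
    rho, theta = np.hypot(xn, yn), np.arctan2(yn, xn)
    R = np.zeros_like(rho)  # radial polynomial R_nm(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + abs(m)) // 2 - s)
                   * factorial((n - abs(m)) // 2 - s)))
        R += coef * rho ** (n - 2 * s)
    V = R * np.exp(1j * m * theta)
    V[rho > 1.0] = 0.0
    return V

def integral_image(img):
    """Zero-padded summed-area table: ii[y, x] = img[:y, :x].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def approx_local_zm(img, n, m, size=7, k=3):
    """Dense local ZM response with the filter replaced by a k x k grid of
    constant boxes (mean filter value per box). Each box is summed at every
    image position with four integral-image lookups, so the per-pixel cost
    is O(k^2) instead of O(size^2)."""
    V = zernike_filter(n, m, size)
    ii = integral_image(np.asarray(img, dtype=np.float64))
    oh, ow = img.shape[0] - size + 1, img.shape[1] - size + 1
    out = np.zeros((oh, ow), dtype=complex)
    edges = [round(i * size / k) for i in range(k + 1)]
    for by in range(k):
        for bx in range(k):
            y0, y1 = edges[by], edges[by + 1]
            x0, x1 = edges[bx], edges[bx + 1]
            w = V[y0:y1, x0:x1].mean()  # constant weight for this box
            if w == 0:
                continue
            # Vectorised box sums over all window positions at once.
            out += w * (ii[y1:y1 + oh, x1:x1 + ow]
                        - ii[y0:y0 + oh, x1:x1 + ow]
                        - ii[y1:y1 + oh, x0:x0 + ow]
                        + ii[y0:y0 + oh, x0:x0 + ow])
    return out
```

Because each box costs four lookups regardless of the filter size, the per-pixel cost of this scheme is independent of the window size, which is what makes dense (per-pixel) local ZM computation feasible in real time.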

Inspec keywords: SLAM (robots); path planning; mobile robots; object detection

Other keywords: loop closing; ZM-based descriptor; Zernike moments; integral images; visual loop closure detection; autonomous mobile robot navigation; perceptual aliasing

Subjects: Computer vision and image processing techniques; Mobile robots; Spatial variables control; Optical, image and video signal processing

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2016.0237