SRP-AKAZE: an improved accelerated KAZE algorithm based on sparse random projection

The AKAZE algorithm is a widely used image registration algorithm whose non-linear diffusion scale space gives it high computational efficiency, but it is less robust and stable than the scale-invariant feature transform (SIFT). We propose an improved version of the AKAZE algorithm, SRP-AKAZE, which replaces its native descriptor with a SIFT descriptor compressed by sparse random projection (SRP). The proposed method retains the efficiency of AKAZE feature detection while gaining the stability of the SIFT descriptor, and the SRP step drastically reduces the matching cost that the high-dimensional (128-D) SIFT descriptor would otherwise incur. Experiments on several benchmark image datasets demonstrate that the proposed algorithm significantly improves the stability of AKAZE and yields better matching performance and a more robust feature descriptor.
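
As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch in Python using opencv-python and NumPy: AKAZE detects keypoints, SIFT describes them, and an Achlioptas-style sparse random matrix projects the 128-D descriptors to a lower dimension before matching. This is a reconstruction of the general idea, not the authors' implementation; the target dimension K, sparsity parameter S, ratio threshold, and image paths are illustrative assumptions.

    # A minimal sketch of the SRP-AKAZE pipeline described in the abstract,
    # assuming opencv-python (cv2) and NumPy. K, S, the ratio threshold and
    # the image paths are illustrative assumptions, not values from the paper.
    import cv2
    import numpy as np

    K = 32  # assumed target dimension for the projected descriptor
    S = 3   # sparsity parameter: entries are non-zero with probability 1/S

    def sparse_projection_matrix(d, k, s=S, seed=0):
        # Entries take the values +sqrt(s/k), 0, -sqrt(s/k) with
        # probabilities 1/(2s), 1 - 1/s, 1/(2s) (Achlioptas-style SRP).
        rng = np.random.default_rng(seed)
        signs = rng.choice([1.0, 0.0, -1.0], size=(d, k),
                           p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
        return (np.sqrt(s / k) * signs).astype(np.float32)

    def srp_akaze_features(img, proj):
        # Detect with AKAZE (fast non-linear scale space), describe with
        # SIFT, then compress each 128-D descriptor with the shared matrix.
        kps = cv2.AKAZE_create().detect(img, None)
        kps, desc = cv2.SIFT_create().compute(img, kps)  # desc: N x 128, float32
        return kps, desc @ proj                          # N x K compressed

    img1 = cv2.imread('ref.png', cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
    img2 = cv2.imread('query.png', cv2.IMREAD_GRAYSCALE)
    proj = sparse_projection_matrix(128, K)  # one matrix, reused for both images

    kps1, d1 = srp_akaze_features(img1, proj)
    kps2, d2 = srp_akaze_features(img2, proj)

    # Lowe-style ratio test on the low-dimensional descriptors.
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
    print(f'{len(good)} matches retained')

Because roughly 1 - 1/S of the projection entries are zero, projecting costs only a fraction of a dense matrix multiply, and sparse random projections approximately preserve pairwise distances (in the Johnson-Lindenstrauss sense), which is why distance-based matching such as the ratio test still works on the compressed vectors.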
