Scale-invariant feature matching based on pairs of feature points

A scale-invariant feature matching method based on the pairing of feature points is proposed in this study. The distance between the two features of a pair is used to compute the size of the pair's support region, unlike methods that rely on a detector to supply this information. Moreover, to achieve rotation invariance, a sub-region division method based on intensity order is introduced. For comparison with the popular scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) descriptors, the authors also use points detected by the difference-of-Gaussian and fast-Hessian detectors as the starting feature points for their method. Additional experiments compare the proposed method with similar previously proposed methods, such as Tell's and Fan's. The experimental results show that the proposed descriptor outperforms the popular descriptors under various image transformations, especially on images with scale and viewpoint changes.
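The two ideas in the abstract — sizing a support region from the distance between a pair of feature points, and dividing that region into sub-regions by intensity order so the division does not depend on orientation — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the proportionality factor `scale_factor` and the equal-size rank binning are assumptions for the sake of example.

```python
import numpy as np

def pair_support_radius(p, q, scale_factor=0.5):
    """Support-region radius proportional to the distance between the two
    points of a feature pair (scale_factor is an illustrative choice)."""
    d = np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    return scale_factor * d

def intensity_order_subregions(intensities, n_bins=4):
    """Assign each sampled point to a sub-region by the rank (order) of its
    intensity. Ranks depend only on pixel values, not on where the samples
    sit in the patch, so the division is unchanged by patch rotation."""
    intensities = np.asarray(intensities)
    ranks = np.argsort(np.argsort(intensities))   # rank 0..n-1 per sample
    return (ranks * n_bins) // len(intensities)   # equal-size rank bins

# Because a pair that is far apart gets a proportionally larger support
# region, the construction scales with the image, giving scale invariance
# without a detector-provided scale.
radius = pair_support_radius((0.0, 0.0), (3.0, 4.0))   # distance 5 -> 2.5
bins = intensity_order_subregions([10, 50, 20, 40, 30, 60, 5, 25])
```

Shuffling the spatial positions of the samples (as a rotation would) leaves each sample's intensity rank, and hence its sub-region label, unchanged — which is the property the intensity-order division exploits.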

Inspec keywords: feature extraction; image matching; Gaussian processes

Other keywords: difference-of-Gaussian detector; scale-invariant feature matching; sub-region division method; image transformation; feature point pairing; fast-Hessian detector

Subjects: Other topics in statistics; Image recognition; Computer vision and image processing techniques

References

[8] Fan, B., Wu, F.C., Hu, Z.Y.: 'Aggregating gradient distributions into intensity orders: a novel local image descriptor'. IEEE Conf. Computer Vision and Pattern Recognition, Providence, RI, USA, 2011, pp. 2377–2384.

[9] Wang, Z.H., Fan, B., Wu, F.C.: 'Local intensity order pattern for feature description'. Int. Conf. Computer Vision, Barcelona, Spain, 2011, pp. 603–610.

[14] Harris, C., Stephens, M.: 'A combined corner and edge detector'. Proc. Fourth Alvey Vision Conf., Manchester, UK, 1988, pp. 147–151.

[15] Lindeberg, T., Garding, J.: 'Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D brightness structure'. Eur. Conf. Computer Vision, Stockholm, Sweden, 1994, pp. 389–400.

[18] Okada, K., Comaniciu, D., Krishnan, A.: 'Scale selection for anisotropic scale-space: application to volumetric tumor characterization'. IEEE Conf. Computer Vision and Pattern Recognition, Washington, DC, USA, 2004, pp. 594–601.

[19] Dorkó, G., Schmid, C.: 'Maximally stable local description for scale selection'. Eur. Conf. Computer Vision, Graz, Austria, 2006, pp. 504–516.

[23] Tell, D., Carlsson, S.: 'Wide baseline point matching using affine invariants computed from intensity profiles'. Eur. Conf. Computer Vision, Dublin, Ireland, 2000, pp. 814–828.

[25] Fergus, R., Perona, P., Zisserman, A.: 'Object class recognition by unsupervised scale-invariant learning'. IEEE Conf. Computer Vision and Pattern Recognition, Madison, WI, USA, 2003, pp. 264–271.
DOI: 10.1049/iet-cvi.2014.0369