Automatic adaptation of SIFT for robust facial recognition in uncontrolled lighting conditions

The scale-invariant feature transform (SIFT), proposed by David Lowe, is a powerful method for extracting and describing local image features called keypoints. These keypoints are invariant to scale, translation, and rotation, and partially invariant to illumination changes. Despite this robustness, strong lighting variation remains a difficult challenge for SIFT-based facial recognition systems, in which significant performance degradation has been reported. To build a robust system under such conditions, lighting variation must first be removed. Moreover, the default values of the SIFT parameters that discard unstable and poorly matched keypoints are not well suited to images with illumination variation, and keypoints can also be matched incorrectly under the original SIFT matching scheme. To overcome these issues, the authors propose a method that removes illumination variation from images and appropriately sets SIFT's main parameters (contrast threshold, curvature threshold, and match threshold), thereby improving both feature extraction and matching. The method is based on an estimate of relative image lighting quality, obtained through automatic estimation of a gamma-correction value. Facial recognition experiments yield significant results that clearly demonstrate the value of the proposed robust recognition system.
