Automatic adaptation of SIFT for robust facial recognition in uncontrolled lighting conditions

IET Computer Vision

The scale-invariant feature transform (SIFT), proposed by David Lowe, is a powerful method for extracting and describing local image features called keypoints. These keypoints are invariant to scale, translation, and rotation, and partially invariant to illumination variation. Despite this robustness, strong lighting variation remains a difficult challenge for SIFT-based facial recognition systems, where significant performance degradation has been reported. To build a system that is robust under such conditions, the lighting variation must first be removed. Moreover, the default values of the SIFT parameters that discard unstable and poorly matched keypoints are ill-suited to images with illumination variation, and keypoints can be incorrectly matched by the original SIFT matching method. To overcome these issues, the authors propose a method that removes illumination variation from images and sets SIFT's main parameters (contrast threshold, curvature threshold, and match threshold) appropriately, improving both SIFT feature extraction and matching. The proposed method is based on an estimate of relative image lighting quality, obtained through an automatic estimation of the gamma-correction value. Facial recognition experiments yield significant results that clearly demonstrate the value of the proposed robust recognition system.
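The abstract describes two levers: correcting illumination with an automatically estimated gamma value, and tuning SIFT's thresholds (the match threshold governs how matches are accepted). The sketch below is a rough NumPy illustration, not the authors' actual estimator (their mean-variance gamma method is more elaborate): it uses a simple mid-grey heuristic for gamma and Lowe's nearest-neighbour ratio test for the match threshold. All function names and the mid-grey target are assumptions for illustration.

```python
import numpy as np

def estimate_gamma(image):
    """Estimate a gamma value that pushes the image mean toward mid-grey.

    Simple mean-based heuristic (an assumption; the paper's mean-variance
    estimator differs). `image` holds intensities in [0, 1]; a dark image
    yields gamma < 1, which brightens it under out = in ** gamma.
    """
    mean = float(image.mean())
    mean = min(max(mean, 1e-6), 1.0 - 1e-6)  # keep log() well-defined
    return np.log(0.5) / np.log(mean)

def gamma_correct(image, gamma):
    """Apply the power-law transform out = in ** gamma on [0, 1] data."""
    return np.clip(image, 0.0, 1.0) ** gamma

def ratio_test_matches(desc_a, desc_b, match_threshold=0.8):
    """Lowe-style ratio test: accept a match only when the nearest
    descriptor in desc_b is clearly closer than the second nearest.
    Lowering match_threshold makes matching stricter."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < match_threshold * second:
            matches.append((i, int(order[0])))
    return matches
```

In a full pipeline, the corrected image would then be fed to a SIFT detector whose contrast and curvature (edge) thresholds are set according to the estimated lighting quality, as the abstract proposes; those two thresholds are detector parameters and are not modelled in this sketch.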

References

    1. Wang, H., Li, S.Z., Wang, Y., et al.: 'Self quotient image for face recognition'. Int. Conf. on Image Processing (ICIP'04), 2004, vol. 2, pp. 1397–1400.
    2. Lowe, D.G.: 'Distinctive image features from scale-invariant keypoints', Int. J. Comput. Vis., 2004, 60, (2), pp. 91–110.
    3. Lowe, D.G.: 'Object recognition from local scale-invariant features'. Proc. Seventh IEEE Int. Conf. on Computer Vision, 1999, vol. 2, pp. 1150–1157.
    4. Mikolajczyk, K., Schmid, C.: 'A performance evaluation of local descriptors'. Int. Conf. on Computer Vision & Pattern Recognition (CVPR'03), IEEE Computer Society, Madison, USA, 2003, vol. 2, pp. 257–263.
    5. Kaur, H., Kaur, G.: 'A review on feature extraction techniques of face recognition', Int. J. Technol. Res. Eng., 2016, 3, (10), p. 1.
    6. Sadeghipour, E., Sahragard, N.: 'Face recognition based on improved SIFT algorithm', Int. J. Adv. Comput. Sci. Appl., 2016, 7, (1), pp. 547–551.
    7. Wu, J., Cui, Z., Sheng, V.S., et al.: 'A comparative study of SIFT and its variants', Meas. Sci. Rev., 2013, 13, (3), pp. 122–131.
    8. Mahamdioua, M., Benmohammed, M.: 'Robust SIFT for dark face images recognition'. 2016 Int. Symp. on Signal, Image, Video and Communications (ISIVC), 2016, pp. 53–58.
    9. Alarcon-Ramirez, A., Chouikha, M.F.: 'Implementation of a new methodology to reduce the effects of changes of illumination in face recognition-based authentication', Int. J. Cryptogr. Inf. Sec., 2012, 2, (2), pp. 13–25.
    10. Maeng, H., Liao, S., Kang, D., et al.: 'Nighttime face recognition at long distance: cross-distance and cross-spectral matching'. Asian Conf. on Computer Vision, 2012, pp. 708–721.
    11. Cruz, C., Sucar, L.E., Morales, E.F.: 'Real-time face recognition for human-robot interaction'. 8th IEEE Int. Conf. on Automatic Face & Gesture Recognition (FG'08), September 2008, pp. 1–6.
    12. Vishnupriya, S., Lakshmi, K.: 'Face recognition under varying lighting conditions and noise using texture based and SIFT feature sets', Int. J. Comput. Sci. Technol., 2012, 3, (4), pp. 457–461.
    13. Michael, M., Martin, J.T., Tim, M.: 'Scale invariant feature transform: a graphical parameter analysis'. Proc. British Machine Vision Conf. (BMVC) 2010 UK Postgraduate Workshop, 2010, pp. 5.1–5.11.
    14. Battiato, S., Gallo, G., Puglisi, G., et al.: 'SIFT features tracking for video stabilization'. 14th Int. Conf. on Image Analysis and Processing (ICIAP 2007), 2007, pp. 825–830.
    15. Cesetti, A., Frontoni, E., Mancini, A., et al.: 'A vision-based guidance system for UAV navigation and safe landing using natural landmarks', J. Intell. Robot. Syst., 2010, 57, (1–4), pp. 233–257.
    16. Park, U., Pankanti, S., Jain, A.K.: 'Fingerprint verification using SIFT features'. Proc. SPIE Defense and Security Symp., 2008, vol. 6944.
    17. Tang, C.Y., Wu, Y.L., Hor, M.K., et al.: 'Modified SIFT descriptor for image matching under interference'. Int. Conf. on Machine Learning and Cybernetics, 2008, pp. 3294–3300.
    18. Guang-hui, W., Shu-bi, Z., Hua-bin, W., et al.: 'An algorithm of parameters adaptive scale-invariant feature for high precision matching of multi-source remote sensing image'. Joint Urban Remote Sensing Event, Shanghai, China, May 2009, pp. 1–7.
    19. Geng, C., Jiang, X.: 'SIFT features for face recognition'. 2nd IEEE Int. Conf. on Computer Science and Information Technology (ICCSIT 2009), 2009, pp. 598–602.
    20. Križaj, J., Štruc, V., Pavešić, N.: 'Adaptation of SIFT features for face recognition under varying illumination'. MIPRO, 2010 Proc. 33rd Int. Convention, 2010, pp. 691–694.
    21. Luo, J., Ma, Y., Takikawa, E., et al.: 'Person-specific SIFT features for face recognition'. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2007), 2007, pp. II-593–II-596.
    22. Krig, S.: 'Interest point detector and feature descriptor survey', in 'Computer vision metrics' (Springer, New York, 2016), pp. 187–246.
    23. https://github.com/vedaldi/practical-object-instance-recognition, accessed June 2016.
    24. http://www.vlfeat.org/api/sift.html, accessed June 2016.
    25. http://www.robots.ox.ac.uk/~vgg/practicals/instance-recognition/index.html, accessed June 2016.
    26. Struc, V.: 'The INface toolbox v2.0. The Matlab toolbox for illumination invariant face recognition', toolbox description and user manual, Ljubljana, 2011.
    27. CroppedYale database: http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html, accessed April 2015.
    28. Lee, K.C., Ho, J., Kriegman, D.: 'Acquiring linear subspaces for face recognition under variable lighting', IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (5), pp. 684–698.
    29. Gonzalez, R.C., Woods, R.E.: 'Digital image processing' (Prentice Hall, Upper Saddle River, 2008, 3rd edn.).
    30. Hany, F.: 'Blind inverse gamma correction', IEEE Trans. Image Process., 2001, 10, (10), pp. 1428–1433.
    31. Asadi Amiri, S., Hassanpour, H., Pouyan, A.K.: 'Texture based image enhancement using gamma correction', Middle-East J. Sci. Res., 2010, 6, pp. 569–574.
    32. Khunteta, A., Ghosh, D., Ribhu, C.: 'Fuzzy approach to image exposure level estimation and contrast enhancement in dark images via exposure level optimization', Int. J. Latest Trends Eng. Sci. Technol., 2014, 1, (5), pp. 72–79.
    33. Mahamdioua, M., Benmohammed, M.: 'New mean-variance gamma method for automatic gamma correction', Int. J. Image Graph. Signal Process., 2017, 9, (3), pp. 41–54.
    34. Han, H., Shiguang, S., Xilin, C., et al.: 'A comparative study on illumination preprocessing in face recognition', Pattern Recognit., 2013, 46, (6), pp. 1691–1699.
    35. Tan, X., Triggs, B.: 'Enhanced local texture feature sets for face recognition under difficult lighting conditions'. IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures (AMFG'07), 2007 (LNCS, 4778), pp. 168–182.
    36. Yin, Y., Liu, L., Sun, X.: 'SDUMLA-HMT: a multimodal biometric database', in Sun, Z., Lai, J., Chen, X., et al. (Eds.): Chinese Conf. on Biometric Recognition (Springer, Berlin, Heidelberg, 2011), pp. 260–268.
    37. Lenc, L., Kral, P.: 'Unconstrained facial images: database for face recognition under real-world conditions'. Mexican Int. Conf. on Artificial Intelligence, 2015, pp. 349–361.
    38. Tan, X., Triggs, B.: TT code, http://parnec.nuaa.edu.cn/xtan/Publication.htm, accessed December 2016.
    39. Tan, X., Triggs, B.: 'Enhanced local texture feature sets for face recognition under difficult lighting conditions', IEEE Trans. Image Process., 2010, 19, (6), pp. 1635–1650.
    40. Baudat, G., Anouar, F.: 'Generalized discriminant analysis using a kernel approach', Neural Comput., 2000, 12, (10), pp. 2385–2404.
    41. Haghighat, M., Zonouz, S., Abdel-Mottaleb, M.: 'CloudID: trustworthy cloud-based and cross-enterprise biometric identification', Expert Syst. Appl., 2015, 42, (21), pp. 7905–7916.
http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0190