Augmented SRC for face recognition under quality distortions

In the last two decades, numerous methods have been developed to formulate the face recognition problem under scene-dependent conditions. However, these methods have not considered, under the same scene variations, image quality degradations introduced by capture, processing, and transmission, such as blur and occlusion due to packet loss. Although deep neural networks achieve state-of-the-art results on face recognition, existing networks are susceptible to quality distortions. In this work, the authors propose an augmented sparse representation classifier (SRC) framework that improves the performance of the conventional SRC in the presence of Gaussian blur, camera-shake blur, and block occlusions, while preserving its robustness to scene-dependent variations. In their evaluation of the SRC framework, they present a feature sparsity concentration and classification index that assesses the quality of features in terms of both recognition accuracy and class-based sparsity concentration. For this purpose, they consider three main types of features: image raw pixels, histogram of oriented gradients (HOG), and deep-learning visual geometry group (VGG) Face descriptors. The reported results show that the proposed method outperforms state-of-the-art sparse-representation-based and blur-invariant methods.
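At the core of any SRC pipeline, a test sample is reconstructed as a sparse linear combination of the columns of a dictionary built from training samples, and the label is taken from the class whose coefficients yield the smallest reconstruction residual. The sketch below illustrates that conventional SRC step only (not the authors' augmented variant). It is a minimal illustration under stated assumptions: the l1 problem is solved with a plain ISTA lasso iteration standing in for the exact l1 solvers used in the SRC literature, and all function names and parameters (`src_classify`, `lam`, `n_iter`) are illustrative.

```python
import numpy as np

def src_classify(D, labels, y, lam=0.01, n_iter=500):
    """Classify y via sparse representation over dictionary D (d x n).

    Assumes one unit-normalised column of D per training sample, with
    labels[i] giving the class of column i. The sparse code solves the
    lasso surrogate  min_x 0.5*||Dx - y||^2 + lam*||x||_1  via ISTA.
    """
    d, n = D.shape
    # Step size from the Lipschitz constant of the smooth term's gradient
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        # Soft-thresholding (proximal operator of the l1 penalty)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    # Class decision: keep only one class's coefficients at a time and
    # pick the class that best reconstructs y
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get), x
```

The class-wise residual rule is what gives SRC its robustness: a corrupted or occluded probe still tends to draw most of its sparse energy from the correct class's columns, so its residual under that class stays smallest.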
