Personalised-face neutralisation using best-matched face shape with a neutral-face database

Conventional personalised-face neutralisation methods rely on facial-expression databases; however, creating and maintaining such databases is tedious and should be minimised. Moreover, the face-shape template deserves greater consideration, as it is a crucial factor. This study proposes a personalised-face neutralisation method that uses a best-matched face-shape template together with a neutral-face database. In the proposed neutralisation, the best-matched face-shape template, assumed to be the most similar to the neutral expression of the input face, is found using a coarse-to-fine search and is then used for warping textures. Additionally, closed eyes are detected and opened using the eye shape of the best-matched face shape, blending the intensities of the original closed eye with those of the best-matched one. To evaluate the performance of the proposed method, experiments were performed on the CMU Multi-PIE database; the results reveal that, compared with the conventional method, the proposed method reduces gradient mean square error by 0.07% on average and improves face recognition accuracy by approximately 1.13%, while requiring only a single neutral-face database without expression images.

Inspec keywords: object detection; visual databases; emotion recognition; face recognition; eye; mean square error methods; image texture; gradient methods

Other keywords: CMU MultiPIE database; texture warping; neutral-face database; closed eyes detection; gradient mean square error reduction; eye shape; facial-expression databases; database creation; best-matched face-shape template; face recognition accuracy; neutralisation expression face; coarse-to-fine concept; database maintenance; personalised-face neutralisation method

Subjects: Spatial and pictorial databases; Interpolation and function approximation (numerical analysis); Image recognition; Computer vision and image processing techniques
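The abstract's two key steps can be sketched in code: a coarse-to-fine search that selects the neutral face-shape template best matching the input shape, and an intensity blend that opens a detected closed eye using the matched template's eye region. This is a minimal illustrative sketch, not the authors' implementation; the landmark representation, coarse landmark subset, candidate count, and blending weight `alpha` are all assumptions.

```python
import numpy as np

def best_matched_template(shape, templates, coarse_idx, n_candidates=3):
    """Coarse-to-fine template search (hypothetical sketch).

    shape:      (L, 2) landmark array of the expression face
    templates:  list of (L, 2) neutral face-shape templates
    coarse_idx: indices of a small landmark subset used in the coarse stage
    Returns the index of the best-matched template.
    """
    # Coarse stage: rank templates by MSE over a sparse landmark subset.
    coarse_err = [np.mean((t[coarse_idx] - shape[coarse_idx]) ** 2)
                  for t in templates]
    top = np.argsort(coarse_err)[:n_candidates]
    # Fine stage: re-rank the surviving candidates over all landmarks.
    fine_err = [np.mean((templates[i] - shape) ** 2) for i in top]
    return int(top[int(np.argmin(fine_err))])

def open_closed_eye(eye_patch, template_eye_patch, alpha=0.5):
    """Blend the original closed-eye intensities with the best-matched
    template's open-eye intensities (alpha is an assumed weight)."""
    return alpha * np.asarray(eye_patch, float) \
        + (1.0 - alpha) * np.asarray(template_eye_patch, float)
```

In a full pipeline, the matched template would drive texture warping (e.g. piecewise-affine warping between landmark triangulations) before the eye regions are blended; that warping step is omitted here for brevity.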

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0352