Personalised-face neutralisation using best-matched face shape with a neutral-face database


IET Computer Vision
Conventional personalised-face neutralisation methods rely on facial-expression databases; however, creating and maintaining such databases is a tedious process that should be minimised. Moreover, the face-shape template deserves greater consideration, as it is a crucial factor. This study proposes a personalised-face neutralisation method that uses the best-matched face-shape template together with a neutral-face database. In personalised-face neutralisation, the best-matched face-shape template, assumed to be the one most similar to the expressive face being neutralised, is found using a coarse-to-fine search and is then used to warp textures. Additionally, closed eyes are detected and opened using the eye shape of the best-matched face shape, mixing the intensities of the original closed eye with those of the best-matched one. To evaluate the performance of the proposed method, experiments were performed on the CMU Multi-PIE database; the results show that the proposed method reduces gradient mean square error by 0.07% on average and improves face recognition accuracy by approximately 1.13% compared with the conventional method, while requiring only a single neutral-face database without expression images.
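The two core steps described above — a coarse-to-fine search for the best-matched neutral face-shape template, followed by intensity mixing to open closed eyes — can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the actual similarity measure, search stages, and mixing weight are not specified in the abstract, so the centroid-based coarse stage, the per-landmark fine stage, and the names `best_matched_shape`, `open_closed_eye`, `coarse_k`, and `alpha` are all assumptions introduced here.

```python
import numpy as np

def best_matched_shape(query, templates, coarse_k=5):
    """Coarse-to-fine search for the neutral face-shape template most
    similar to the query shape (illustrative: the paper's actual
    similarity measure and stages are not given in the abstract).

    query:     (n_landmarks, 2) array of facial landmarks
    templates: (n_templates, n_landmarks, 2) array of neutral shapes
    Returns the index of the best-matched template.
    """
    query = np.asarray(query, dtype=float)
    templates = np.asarray(templates, dtype=float)
    # Coarse stage: rank templates by the distance between shape
    # centroids, keeping only the top coarse_k candidates.
    coarse_score = np.linalg.norm(
        templates.mean(axis=1) - query.mean(axis=0), axis=1)
    candidates = np.argsort(coarse_score)[:coarse_k]
    # Fine stage: full per-landmark Euclidean distance over survivors.
    fine_score = np.linalg.norm(templates[candidates] - query, axis=(1, 2))
    return int(candidates[np.argmin(fine_score)])

def open_closed_eye(closed_patch, template_patch, alpha=0.5):
    """Mix the original closed-eye intensities with the best-matched
    template's open-eye intensities (alpha is an assumed mixing weight)."""
    closed_patch = np.asarray(closed_patch, dtype=float)
    template_patch = np.asarray(template_patch, dtype=float)
    return alpha * closed_patch + (1.0 - alpha) * template_patch
```

In this sketch the coarse stage is deliberately cheap (centroid distance only) so that the full per-landmark comparison runs on a handful of candidates rather than the whole database; the selected template's eye region then supplies the open-eye intensities for mixing.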

doi: 10.1049/iet-cvi.2017.0352