Facial expression recognition considering individual differences in facial structure and texture

Facial expression recognition (FER) plays an important role in human–computer interaction. Recent years have witnessed an increasing number of approaches to FER, but these approaches usually do not consider the effect of individual differences on the recognition result. When a face image changes from neutral to a given expression, the change, comprising structural characteristics and texture information, provides rich clues not visible in either image alone, and is therefore believed to be of great importance for machine vision. This study proposes a novel FER algorithm that exploits the structural characteristics and the texture information hidden in the image space. First, feature points are located with an active appearance model. Second, three facial features, namely the feature point distance ratio coefficient, the connection angle ratio coefficient and the skin deformation energy parameter, are proposed to eliminate the differences among individuals. Finally, a radial basis function neural network is used as the classifier. Extensive experiments on the Cohn–Kanade database and the Beihang University (BHU) facial expression database show significant advantages of the proposed method over existing ones.
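The abstract does not give the exact formulas for the three proposed features, but the idea of a ratio-based feature that cancels out individual face size can be illustrated. The sketch below is an assumption-laden example only (the landmark pairs, function name, and ratio definition are hypothetical, not the paper's method): for each pair of facial landmarks, it takes the ratio of the pair's distance in the expressive frame to its distance in the neutral frame, so that absolute face scale divides out.

```python
import numpy as np

def distance_ratio_features(neutral_pts, expr_pts, pairs):
    """Hypothetical sketch of a distance-ratio feature: for each landmark
    pair (i, j), the ratio of the inter-point distance in the expressive
    frame to that in the neutral frame. Because both distances scale with
    face size, the ratio suppresses individual structural differences."""
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    expr_pts = np.asarray(expr_pts, dtype=float)
    ratios = []
    for i, j in pairs:
        d_neutral = np.linalg.norm(neutral_pts[i] - neutral_pts[j])
        d_expr = np.linalg.norm(expr_pts[i] - expr_pts[j])
        ratios.append(d_expr / d_neutral)
    return np.array(ratios)

# Toy example: two landmarks (say, mouth corners) moving apart in a smile.
neutral = [[0.0, 0.0], [2.0, 0.0]]
smiling = [[-0.5, 0.0], [2.5, 0.0]]
print(distance_ratio_features(neutral, smiling, [(0, 1)]))  # [1.5]
```

A feature vector of such ratios (one per chosen landmark pair) could then be fed, together with angle ratios and texture-based terms, to a radial basis function network classifier as the abstract describes.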
