© The Institution of Engineering and Technology
Facial expression recognition (FER) plays an important role in human–computer interaction. Recent years have witnessed a growing variety of approaches to FER, but these approaches usually do not consider the effect of individual differences on the recognition result. When a face image changes from neutral to a given expression, the change information, composed of the structural characteristics and the texture information, provides rich clues not seen in either face image alone, and is therefore believed to be of great importance for machine vision. This study proposes a novel FER algorithm that exploits the structural characteristics and the texture information hidden in the image space. First, the feature points are marked by an active appearance model. Second, three facial features, namely the feature point distance ratio coefficient, the connection angle ratio coefficient and the skin deformation energy parameter, are proposed to eliminate the differences among individuals. Finally, a radial basis function neural network is utilised as the classifier for FER. Extensive experimental results on the Cohn–Kanade database and the Beihang University (BHU) facial expression database show the significant advantages of the proposed method over existing ones.
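The individual-difference elimination described above rests on ratio features: each subject's expression frame is normalised against his or her own neutral frame, so subject-specific face size and proportions cancel out. The sketch below illustrates this idea for the distance ratio and connection angle ratio features; it is a hypothetical formulation for illustration, not the authors' exact definitions, and the landmark indices and toy coordinates are assumptions.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_ratio_features(neutral, expr, pairs):
    """Ratio of each landmark-pair distance in the expression frame to the
    same pair's distance in the neutral frame.  Dividing by the subject's
    own neutral face makes the feature largely insensitive to individual
    face size and proportions (assumed form of the paper's coefficient)."""
    return [dist(expr[i], expr[j]) / dist(neutral[i], neutral[j])
            for i, j in pairs]

def connection_angle_ratio_features(neutral, expr, pairs):
    """Ratio of the orientation angle of each landmark connection in the
    expression frame to its angle in the neutral frame (pairs whose
    neutral angle is zero must be avoided).  Again an assumed, simplified
    formulation."""
    return [math.atan2(expr[j][1] - expr[i][1], expr[j][0] - expr[i][0])
            / math.atan2(neutral[j][1] - neutral[i][1],
                         neutral[j][0] - neutral[i][0])
            for i, j in pairs]

# Toy example: landmarks 0 and 1 are mouth corners that move apart in a
# smile; the distance ratio exceeds 1 regardless of the subject's face size.
neutral = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
smile = [(-1.0, 0.0), (5.0, 0.0), (2.0, 3.5)]
print(distance_ratio_features(neutral, smile, [(0, 1)]))  # [1.5]
```

In practice the landmark coordinates would come from the active appearance model fit, and the resulting feature vector would be fed to the radial basis function network classifier.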