Extraction of informative regions of a face for facial expression recognition



The aim of facial expression recognition (FER) algorithms is to extract discriminative features of a face. However, discriminative features for FER can be obtained only from the informative regions of a face, and each facial subregion contributes differently to different expressions. Local binary pattern (LBP)-based FER techniques extract texture features from all regions of a face and stack them sequentially. This process generates correlated features among different expressions and hence degrades accuracy. This work addresses these issues by extracting discriminative features from the informative regions of a face. To this end, the authors propose an informative region extraction model, which models the importance of facial regions based on the projection of the expressive face images onto the neutral face images. However, in practical scenarios, neutral images may not be available, and therefore the authors propose to estimate a common reference image using Procrustes analysis. Subsequently, a weighted-projection-based LBP feature is derived from the informative regions of the face and their associated weights. This feature extraction method reduces misclassification among different classes of expressions. Experimental results on standard datasets show the efficacy of the proposed method.
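To make the feature-extraction step concrete, the sketch below implements the standard 3×3 LBP operator that such region-based FER pipelines build on: each pixel is encoded by thresholding its eight neighbours against the centre, and a per-region histogram of the resulting 8-bit codes serves as the texture descriptor. This is a minimal illustration of basic LBP only; the authors' informative-region weighting and weighted-projection variant are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale
    image: each bit is set when the neighbour is >= the centre pixel."""
    c = img[1:-1, 1:-1]  # centre pixels (border excluded)
    # Neighbour offsets, clockwise from the top-left corner of the 3x3 patch.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the centre pixels.
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def region_lbp_histogram(region):
    """256-bin normalised LBP histogram describing one facial region."""
    h = np.bincount(lbp_codes(region).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

In a region-based scheme, the face is partitioned into subregions, `region_lbp_histogram` is computed for each, and the histograms are concatenated; the method described above differs by weighting regions according to their estimated informativeness rather than treating all regions equally.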


