Extraction of informative regions of a face for facial expression recognition

The aim of facial expression recognition (FER) algorithms is to extract discriminative features of a face. However, discriminative features for FER can only be obtained from the informative regions of a face, and each facial subregion contributes differently to different expressions. Local binary pattern (LBP) based FER techniques extract texture features from all regions of a face and stack them sequentially; this process generates correlated features among different expressions and hence degrades accuracy. This work addresses these issues by extracting discriminative features only from the informative regions of a face. To this end, the authors propose an informative region extraction model, which weighs the importance of facial regions based on the projection of expressive face images onto neutral face images. Since neutral images may not be available in practical scenarios, the authors propose to estimate a common reference image using Procrustes analysis. A weighted-projection-based LBP feature is then derived from the informative regions of the face and their associated weights. This feature extraction method reduces misclassification among different classes of expressions. Experimental results on standard datasets show the efficacy of the proposed method.
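The region-wise LBP pipeline the abstract builds on can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 8-neighbour radius-1 LBP, the 4×4 grid, and the optional per-region weight vector (standing in for the informative-region weights) are all assumptions for the sketch.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour LBP (radius 1): each interior pixel becomes an
    8-bit code, one bit per neighbour >= centre comparison."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def weighted_region_lbp(img, grid=(4, 4), weights=None):
    """Split the LBP code image into grid cells, histogram each cell,
    scale each histogram by its region weight (hypothetical stand-in for
    the informative-region weights), and concatenate."""
    codes = lbp_8_1(img)
    gh, gw = grid
    ch, cw = codes.shape[0] // gh, codes.shape[1] // gw
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = codes[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            w_ij = 1.0 if weights is None else weights[i * gw + j]
            feats.append(w_ij * hist.astype(float))
    return np.concatenate(feats)
```

With uniform weights this reduces to the conventional stacked-histogram LBP descriptor the abstract criticises; down-weighting uninformative cells is what suppresses the correlated features.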

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2015.0273