Facial expression recognition techniques: a comprehensive survey
IET Image Processing

Over the past decades, facial expression recognition (FER) has become an active research area and has achieved substantial progress in computer vision. FER aims to detect a person's emotional state from facial biometric traits. Developing a machine-based FER system is a challenging task. Various FER systems have been developed using algorithms based on facial muscle motion and skin deformation. Conventional FER algorithms are developed and evaluated on constrained databases; in unconstrained environments, their efficacy is limited by issues that arise during image acquisition. This survey presents a detailed study of FER techniques, classifiers and the datasets used to evaluate the recognition techniques. Moreover, it will assist researchers in understanding the strategies and innovative methods that address these issues in real-time applications. Finally, the review presents the challenges encountered by FER systems along with future directions.
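
To make the conventional pipeline described above concrete, the sketch below combines three techniques of the kind this survey covers: Viola-Jones face detection, local binary pattern (LBP) texture features and an SVM classifier. It is a minimal illustration, not the method of any particular surveyed paper; it assumes OpenCV, scikit-image and scikit-learn are available, and the file names, label encoding and parameter values are hypothetical.

# Minimal sketch of a conventional FER pipeline: Viola-Jones face detection,
# LBP texture features and an SVM classifier. Assumes OpenCV, scikit-image
# and scikit-learn are installed; paths, labels and parameters are illustrative.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Haar-cascade (Viola-Jones) frontal-face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lbp_histogram(gray_face, points=8, radius=1):
    """Normalised uniform-LBP histogram of a cropped, resized face region."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp.ravel(), bins=points + 2, range=(0, points + 2))
    return hist / (hist.sum() + 1e-7)

def extract_features(image_bgr):
    """Detect the first face in a BGR image and return its LBP histogram."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (96, 96))
    return lbp_histogram(face)

# Hypothetical training on a constrained (posed-expression) dataset:
#   X: list of LBP histograms, y: expression labels (e.g. 0=neutral, 1=happy, ...)
#   clf = SVC(kernel="rbf", C=10).fit(np.vstack(X), y)
#   prediction = clf.predict([extract_features(cv2.imread("test.jpg"))])

In a constrained setting the classifier would be trained and tested on cropped posed-expression images; in unconstrained settings the same pipeline degrades when the detector misses non-frontal or poorly lit faces, which reflects the image-acquisition issues discussed above.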
