Emotion recognition from facial expressions using hybrid feature descriptors

IET Image Processing
Here, a hybrid feature descriptor-based method is proposed to recognise human emotions from facial expressions. Two descriptor combinations are utilised to improve recognition ability: spatial bag of features (SBoF) with spatial scale-invariant feature transform (SBoF-SSIFT), and SBoF with spatial speeded-up robust features (SBoF-SSURF). The SBoF descriptor generates a fixed-length feature vector for every sample image irrespective of its size, while the spatial SIFT and SURF features are invariant to scaling, rotation, translation, and projective transforms, and partly invariant to illumination changes. Unlike the conventional bag of features (BoF), which is typically applied to simple object categorisation without spatial information, the modified form employed here incorporates each feature's spatial location. Images are resized through selective pre-processing, retaining only the information of interest and reducing computation time. For classification of emotions, K-nearest neighbour and support vector machines (SVMs) with linear, polynomial, and radial basis function kernels are applied. Experiments performed on the extended Cohn–Kanade (CK+) and Japanese female facial expression (JAFFE) data sets show that SBoF-SSIFT with SVM achieves recognition accuracies of 98.5% on CK+ and 98.3% on JAFFE.
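The core idea behind a spatial bag of features — quantising local descriptors against a visual codebook and pooling them per spatial cell so the vector length is fixed regardless of image size or keypoint count — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the 2×2 grid, the toy random descriptors, and the codebook are all assumptions for demonstration; in practice the descriptors would come from SIFT or SURF and the codebook from k-means over training features.

```python
import numpy as np

def spatial_bof_histogram(descriptors, keypoints, codebook, image_shape, grid=(2, 2)):
    """Quantise local descriptors against a codebook, pool them into
    per-cell histograms over a spatial grid, and concatenate.

    descriptors: (n, d) local feature vectors (e.g. SIFT/SURF)
    keypoints:   (n, 2) corresponding (x, y) locations
    codebook:    (k, d) visual words (e.g. k-means centres)
    image_shape: (height, width) of the source image
    grid:        (rows, cols) of spatial pooling cells
    Returns a fixed-length vector of size rows * cols * k,
    independent of image size and of the number of keypoints.
    """
    h, w = image_shape
    rows, cols = grid
    k = codebook.shape[0]
    hist = np.zeros((rows, cols, k))
    # Assign each descriptor to its nearest visual word.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    # Vote into the spatial cell that contains the keypoint.
    for (x, y), word in zip(keypoints, words):
        r = min(int(y * rows / h), rows - 1)
        c = min(int(x * cols / w), cols - 1)
        hist[r, c, word] += 1
    flat = hist.ravel()
    return flat / max(flat.sum(), 1)  # L1-normalise the histogram

# Toy usage: 30 random "descriptors" in a 100x100 image, 4-word codebook.
rng = np.random.default_rng(0)
desc = rng.normal(size=(30, 8))
kps = rng.uniform(0, 100, size=(30, 2))
codebook = rng.normal(size=(4, 8))
vec = spatial_bof_histogram(desc, kps, codebook, (100, 100))
print(vec.shape)  # (16,) = 2 * 2 cells * 4 words
```

The resulting fixed-length vectors could then be fed to a K-nearest-neighbour or kernel-SVM classifier, as the abstract describes; the spatial pooling is what distinguishes this from a plain BoF histogram, which would discard the keypoint locations entirely.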

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2017.0499