Facial expression recognition using feature additive pooling and progressive fine-tuning of CNN


Electronics Letters

Facial expression recognition is one of the most important tasks in human–computer interaction, affective computing, and computer vision. Feature additive pooling and progressive fine-tuning of a convolutional neural network (CNN) are introduced for facial expression recognition in static images. The proposed network partially employs the visual geometry group (VGG)-Face model pre-trained on the VGG-Face dataset. Because each publicly available facial database is collected for a different purpose, the characteristics and distribution of its facial expression images are biased. To alleviate this problem, a CNN model is developed that merges progressively fine-tuned CNNs into a single network. Experiments validating the presented method on facial expression images from the extended Cohn–Kanade (CK+), Karolinska directed emotional faces (KDEF), and Japanese female facial expression (JAFFE) databases, together with cross-database evaluations, show that the method is superior to state-of-the-art methods.
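The fusion step described above can be illustrated with a minimal sketch. Here "feature additive pooling" is interpreted as element-wise summation of same-shaped feature vectors produced by two branch networks (for example, two CNNs progressively fine-tuned on different expression databases); the random linear branches below are hypothetical stand-ins for the paper's VGG-Face-based extractors, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(x, weights):
    # Stand-in for one fine-tuned CNN branch: a single
    # linear map followed by a ReLU non-linearity.
    return np.maximum(weights @ x, 0.0)

x = rng.standard_normal(128)           # flattened face-image input (assumed size)
w_a = rng.standard_normal((64, 128))   # branch fine-tuned on database A (hypothetical)
w_b = rng.standard_normal((64, 128))   # branch fine-tuned on database B (hypothetical)

f_a = branch_features(x, w_a)
f_b = branch_features(x, w_b)

# Additive pooling: summing keeps the fused feature at 64 dimensions,
# whereas concatenation would double it to 128 and enlarge the
# classifier that follows.
fused = f_a + f_b
print(fused.shape)  # (64,)
```

The design point the sketch makes concrete: merging branches by addition lets several fine-tuned networks feed one shared classifier without growing its input dimensionality.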

