Self-adaptive weighted synthesised local directional pattern integrating with sparse autoencoder for expression recognition based on improved multiple kernel learning strategy
IET Computer Vision, doi: 10.1049/iet-cvi.2018.5127


This study presents a novel method for facial expression recognition (FER) that combines a self-adaptive weighted synthesised local directional pattern (SW-SLDP) descriptor with sparse autoencoder (SA) features under an improved multiple kernel learning (IMKL) strategy. The work comprises three parts. First, the authors propose the SW-SLDP feature descriptor, which divides a facial image into patches and extracts a sub-block feature from each patch according to both distribution information and directional intensity contrast. Self-adaptive weights are then assigned to each sub-block feature according to the projection error between the expressional image and the neutral image of the corresponding patch, highlighting the areas that carry more expressional texture information. Second, to obtain a discriminative high-level feature, an SA is introduced for feature representation; its hidden-layer representation captures more comprehensive information. Finally, to combine the two kinds of features, an IMKL strategy is developed that integrates soft-margin learning with intrinsic local constraints, making it robust to noisy conditions and thereby improving classification performance. Extensive experimental results indicate that the model achieves performance competitive with, or better than, existing representative FER methods.
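The directional coding at the core of SW-SLDP builds on the classic local directional pattern (LDP), which replaces raw intensity comparisons with Kirsch compass-mask edge responses. The following is a minimal sketch of that base descriptor only: the synthesis with distribution information and the self-adaptive per-patch weighting described in the abstract are not reproduced, and a full 256-bin histogram is used for simplicity in place of the usual 56 valid 3-bit codes.

```python
import numpy as np

# East-direction Kirsch mask; the other seven compass masks are
# rotations of its outer ring.
EAST = np.array([[-3, -3, 5],
                 [-3,  0, 5],
                 [-3, -3, 5]])

def kirsch_masks():
    """Generate the eight Kirsch compass masks by ring rotation."""
    masks, m = [], EAST.copy()
    for _ in range(8):
        masks.append(m)
        # rotate the outer ring of the 3x3 mask by one position
        ring = [m[0, 0], m[0, 1], m[0, 2], m[1, 2],
                m[2, 2], m[2, 1], m[2, 0], m[1, 0]]
        ring = ring[-1:] + ring[:-1]
        m = np.array([[ring[0], ring[1], ring[2]],
                      [ring[7],    0,    ring[3]],
                      [ring[6], ring[5], ring[4]]])
    return masks

def ldp_code(neigh, k=3):
    """LDP code of a 3x3 neighbourhood: set one bit for each of the
    k strongest absolute Kirsch edge responses."""
    resp = np.array([abs((m * neigh).sum()) for m in kirsch_masks()])
    top = np.argsort(resp)[-k:]          # indices of the k largest responses
    return sum(1 << int(i) for i in top)

def ldp_histogram(img, k=3):
    """Normalised code histogram over all interior pixels (256 raw bins
    here; real LDP keeps only the C(8,3) = 56 valid codes)."""
    h = np.zeros(256)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            h[ldp_code(img[r - 1:r + 2, c - 1:c + 2], k)] += 1
    return h / h.sum()
```

In the paper's pipeline such histograms would be computed per facial patch and weighted; here a single unweighted histogram stands in for one sub-block feature.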

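The IMKL fusion step can be pictured as a weighted combination of base kernels, one per feature channel (e.g. one over SW-SLDP histograms, one over SA hidden codes). The sketch below shows only that generic multiple-kernel combination; the soft-margin learning and intrinsic local constraints that distinguish IMKL are not reproduced, and the feature names are illustrative stand-ins.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row-feature matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, beta):
    """Convex combination sum_m beta_m * K_m; normalising beta onto the
    simplex keeps the result a valid (positive semi-definite) kernel."""
    beta = np.asarray(beta, dtype=float)
    if (beta < 0).any():
        raise ValueError("kernel weights must be non-negative")
    beta = beta / beta.sum()
    return sum(b * K for b, K in zip(beta, kernels))
```

In an MKL setting the weights beta are learned jointly with the classifier rather than fixed by hand; the combined matrix would then be fed to a kernel classifier such as an SVM with a precomputed kernel.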