Critical parameters of the sparse representation-based classifier

In recent years, growing interest in compressive sensing (CS) theory has motivated a novel classification algorithm, the sparse representation-based classifier (SRC), which obtains promising results by casting classification as a sparse representation problem. Although SRC has been applied in several fields and many variants of it have been proposed, little attention has been paid to its critical parameters, that is, measurements correlated with its performance. This work underlines the differences between CS and SRC, gives a mathematical definition of five measurements possibly correlated with the performance of SRC, and identifies three of them as critical parameters. Knowledge of the critical parameters is necessary to fuse the scores of multiple SRC classifiers for classification. The authors address the problem of two-dimensional face classification, using the Extended Yale B dataset to monitor the critical parameters and the Extended Cohn-Kanade database to test the robustness of SRC on emotional faces. Finally, the authors improve on the performance of the holistic SRC with a block-based SRC, which uses one critical parameter to automatically select the most successful blocks.
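The decision rule behind SRC and the idea of block selection can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes scikit-learn's Lasso as an l1-solver surrogate for the equality-constrained l1 programme, uses the sparsity concentration index (SCI) of Wright et al. as a stand-in for a critical parameter that scores the reliability of a sparse code, and fuses block decisions by a simple majority vote over the highest-scoring blocks; the function names, the keep ratio and the voting rule are illustrative choices, and the paper's actual five measurements and fusion rule are defined in the full text.

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """Holistic SRC sketch: sparse-code y over the training dictionary A
    (columns = training samples) and pick the class whose atoms give the
    smallest reconstruction residual. Assumes at least two classes."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)   # l2-normalise each column
    y = y / np.linalg.norm(y)
    labels = np.asarray(labels)

    # l1-regularised least squares as a surrogate of min ||x||_1 s.t. Ax = y
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_

    classes = np.unique(labels)
    residuals = np.array([np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                          for c in classes])

    # Sparsity concentration index (SCI, Wright et al.): 1 when all coefficient
    # energy sits in one class, 0 when it is spread evenly over the classes.
    k = len(classes)
    l1_per_class = np.array([np.abs(x[labels == c]).sum() for c in classes])
    sci = (k * l1_per_class.max() / max(np.abs(x).sum(), 1e-12) - 1.0) / (k - 1)

    return classes[residuals.argmin()], sci


def block_src_classify(blocks_A, labels, blocks_y, alpha=0.01, keep=0.5):
    """Block-based SRC sketch: classify each image block independently, keep
    the fraction of blocks with the highest SCI, and fuse them by majority
    vote (the keep ratio and voting rule are assumptions, not the paper's)."""
    votes, scores = zip(*(src_classify(Ab, labels, yb, alpha)
                          for Ab, yb in zip(blocks_A, blocks_y)))
    order = np.argsort(scores)[::-1][: max(1, int(keep * len(votes)))]
    kept = [votes[i] for i in order]
    return max(set(kept), key=kept.count)

In this sketch the SCI plays the role of a per-block quality score: blocks whose sparse codes concentrate on a single class are trusted, while ambiguous blocks are discarded before fusion, which mirrors the automatic block selection described in the abstract.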
