Subclass representation-based face-recognition algorithm derived from the structure scatter of training samples


Representation-based face-recognition techniques have received attention in the field of pattern recognition in recent years; however, well-known works focus mainly on constraint conditions and dictionary learning, and few researchers have studied which features of the sample data determine the performance of representation-based classification algorithms. To address this problem, the authors define the structure-scatter degree, which captures the structural features of a training sample set, to determine whether a set is suitable for representation-based classification. Experimental results show that sets with a higher structure scatter are more likely to allow a classification algorithm to achieve a higher recognition rate. Further, the block contribution degree (DBC) of a training sample set is defined to evaluate whether a sample set is suitable for block-based sparse-representation classification algorithms. Experimental results indicate that if the DBC approaches zero, the block technique is unlikely to improve the performance of a representation-based classification algorithm. The authors therefore devise a self-adaptive optimisation method to generate an optimal block size, an overlapping degree, and a block-weighting scheme. Finally, they propose structure scatter-based subclass representation classification. Experimental results demonstrate that the proposed algorithm not only improves the recognition accuracy of representation-based classification, but also greatly reduces its time complexity.
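For readers unfamiliar with the representation-based family the abstract builds on, the following is a minimal sketch of the general idea (in the spirit of sparse/collaborative representation classifiers): a query face is represented as a linear combination of all training samples, and it is assigned to the class whose samples yield the smallest reconstruction residual. This is an illustration of the baseline technique only, not the authors' subclass algorithm; the data, the ridge parameter `lam`, and the function name `crc_classify` are all illustrative assumptions.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Representation-based classification sketch.

    D      : (features x samples) dictionary of training faces, one column per sample
    labels : class label of each column of D
    y      : query feature vector
    The query is coded over the whole dictionary via ridge-regularised least
    squares, then assigned to the class with the smallest class-wise residual.
    """
    n = D.shape[1]
    # Solve (D^T D + lam * I) x = D^T y for the representation coefficients x
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        # Keep only the coefficients belonging to class c, zero out the rest
        xc = np.where(labels == c, x, 0.0)
        residuals.append(np.linalg.norm(y - D @ xc))
    return classes[int(np.argmin(residuals))]

# Toy example: two well-separated classes in a 4-D feature space
rng = np.random.default_rng(0)
D = np.column_stack([
    np.array([1.0, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(4),
    np.array([1.0, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(4),
    np.array([0.0, 0.0, 0.0, 1.0]) + 0.01 * rng.standard_normal(4),
    np.array([0.0, 0.0, 0.0, 1.0]) + 0.01 * rng.standard_normal(4),
])
labels = np.array([0, 0, 1, 1])
query = np.array([0.95, 0.02, 0.0, 0.03])
print(crc_classify(D, labels, query))
```

The block-based variants discussed in the abstract apply the same residual rule per image block and fuse the block decisions; the DBC defined by the authors is meant to predict when that extra partitioning step actually helps.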


