Subclass representation-based face-recognition algorithm derived from the structure scatter of training samples

Representation-based face-recognition techniques have received considerable attention in pattern recognition in recent years; however, the well-known works focus mainly on constraint conditions and dictionary learning. Few researchers have studied which features of the sample data determine the performance of representation-based classification algorithms. To address this problem, the authors define the structure-scatter degree, which captures the structural features of a training sample set, to determine whether the set is suitable for representation-based classification. Experimental results show that sets with a higher structure scatter are more likely to allow a classification algorithm to achieve a higher recognition rate. Further, the block contribution degree (DBC) of a training sample set is defined to evaluate whether the set is suitable for block-based sparse-representation classification algorithms. Experimental results indicate that if the DBC approaches zero, the block technique is unlikely to improve the performance of a representation-based classification algorithm. The authors therefore devise a self-adaptive optimisation method that generates an optimal block size, overlapping degree, and block-weighting scheme. Finally, they propose the structure scatter-based subclass representation classification algorithm. Experimental results demonstrate that the proposed algorithm not only improves the recognition accuracy of representation-based classification but also greatly reduces its time complexity.
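
For context, the sketch below illustrates a generic representation-based classifier of the kind the abstract refers to: a test sample is coded as a linear combination of the training samples and assigned to the class with the smallest class-wise reconstruction residual. It is not the authors' structure scatter-based subclass method; it uses ridge-regularised (collaborative) coding, and the function name and regularisation weight `lam` are illustrative assumptions.

```python
# Minimal sketch of a representation-based classifier (assumed coding
# scheme: ridge-regularised least squares; `lam` is an illustrative choice).
import numpy as np

def rbc_classify(X, labels, y, lam=0.01):
    """X: d x n training matrix (one sample per column),
    labels: length-n array of class labels, y: length-d test sample."""
    labels = np.asarray(labels)
    n = X.shape[1]
    # Code y over all training samples: alpha = (X^T X + lam I)^-1 X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct y using only the coefficients of class c
        residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```

A block-based variant of this scheme would apply the same coding to image sub-blocks and combine the per-block residuals, which is the setting in which the DBC and the self-adaptive block optimisation described above are evaluated.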
