
Novel multispectral face descriptor using orthogonal Walsh codes

This paper proposes a novel high-performance multispectral local descriptor that uses orthogonal Walsh codes during the generation of the discriminative feature set. Rotational variance and noise are compelling factors that significantly degrade the discriminative performance of a facial descriptor. The descriptor proposed in this article handles these challenges favourably without sacrificing any discriminative performance. Orthogonal Walsh codes are used during the generation of the local descriptor: a Walsh code is assigned to each neighbour of a reference pixel. Before a code is assigned to each neighbouring pixel, the neighbours are sorted in ascending order, which strengthens the robustness of the method against rotational variance. Almost all methods proposed so far operate on grayscale images, yet the colour bands carry important information about the relationships between pixels; the authors' method therefore considers the RGB colour bands of each pixel to improve discriminative performance. Extensive simulations show the remarkable and competitive performance of the proposed method in terms of recognition accuracy and robustness against rotational variance as well as noise.
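As a concrete illustration of the pipeline the abstract describes, the minimal sketch below computes a Walsh-code-based pattern for a single 3x3 RGB neighbourhood in Python. The ascending sort of the neighbours and the per-band (R, G, B) processing follow the abstract; the rest (the 3x3 neighbourhood size, the Sylvester-Hadamard construction of the eight codes, the sign comparison against the centre pixel and the summation into a feature vector) is assumed for illustration and is not the authors' exact formulation.

```python
import numpy as np

def walsh_codes(n=8):
    """Orthogonal +1/-1 codes of length n: rows of a Sylvester Hadamard matrix."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])   # Sylvester construction; n must be a power of 2
    return H

def local_walsh_pattern(patch, codes):
    """Walsh-code response for one 3x3 single-band patch (hypothetical aggregation).

    The eight neighbours are sorted in ascending order and a distinct orthogonal
    code is assigned to each rank, so the rank-to-code mapping depends only on the
    intensity ordering and is largely unaffected by rotations of the neighbourhood.
    """
    centre = patch[1, 1]
    neighbours = np.delete(patch.reshape(-1), 4)            # the 8 surrounding pixels
    order = np.argsort(neighbours, kind="stable")           # ascending order of neighbours
    signs = np.where(neighbours[order] >= centre, 1, -1)    # LBP-style comparison with centre
    return (signs[:, None] * codes).sum(axis=0)             # length-8 local feature vector

def multispectral_walsh_descriptor(rgb_patch):
    """Concatenate per-band responses over the R, G and B channels of a 3x3x3 patch."""
    codes = walsh_codes(8)
    return np.concatenate([local_walsh_pattern(rgb_patch[..., b], codes) for b in range(3)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(3, 3, 3))             # one RGB 3x3 neighbourhood
    print(multispectral_walsh_descriptor(patch))             # 24-dimensional local feature
```

In a complete face descriptor, such local responses would typically be accumulated into histograms over image blocks and concatenated, as is common for LBP-style methods; that stage is omitted from the sketch.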
