Hybrid local and global descriptor enhanced with colour information


Feature extraction is one of the most important steps in computer vision tasks such as object recognition, image retrieval and image classification. It describes an image by a set of descriptors, where the best descriptor provides a high-quality description at a low computational cost. In this study, the authors propose a novel descriptor, the histogram of local and global features using the speeded-up robust features (SURF) descriptor (HLGSURF). It combines local features, obtained by computing a bag of words of SURF descriptors, with global features derived from a novel operator called the upper and lower local binary pattern (ULLBP), which encodes texture and is applied in conjunction with the wavelet transform. To further enhance the descriptor's effectiveness, the authors also incorporate colour information. The proposed method was evaluated on both image retrieval and image classification. For retrieval, performance was measured by precision and recall on the challenging Corel and COIL-100 datasets; for classification, it was measured by classification rate on the challenging Corel and MIT scene datasets. The experimental results show that the proposed descriptor outperforms existing state-of-the-art results.
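The abstract does not define the ULLBP operator in detail. As an illustration only, the sketch below assumes an interpretation suggested by the name: the classic 3 × 3 LBP comparison is split into an "upper" code (neighbours greater than or equal to the centre pixel) and a "lower" code (neighbours below it), and the two 256-bin code histograms are concatenated to form the global texture descriptor. The function names and the toy image are hypothetical, not taken from the paper.

```python
# Hedged sketch of an "upper/lower" split of the LBP operator.
# Assumption: each 3x3 neighbourhood yields two 8-bit codes, one from
# neighbours >= centre (upper) and one from neighbours < centre (lower).

def ullbp_codes(img, r, c):
    """Return (upper, lower) 8-bit codes for the 3x3 neighbourhood at (r, c)."""
    centre = img[r][c]
    # 8-neighbourhood offsets, traversed clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            upper |= 1 << bit      # neighbour at or above the centre value
        else:
            lower |= 1 << bit      # neighbour below the centre value
    return upper, lower

def ullbp_histogram(img):
    """Concatenate the 256-bin upper and 256-bin lower code histograms."""
    hist_upper, hist_lower = [0] * 256, [0] * 256
    for r in range(1, len(img) - 1):          # interior pixels only
        for c in range(1, len(img[0]) - 1):
            u, l = ullbp_codes(img, r, c)
            hist_upper[u] += 1
            hist_lower[l] += 1
    return hist_upper + hist_lower

# Toy 4x4 grey-level image; its 4 interior pixels each contribute one
# upper code and one lower code.
img = [[10, 20, 30, 40],
       [20, 25, 35, 45],
       [30, 35, 40, 50],
       [40, 45, 50, 60]]
hist = ullbp_histogram(img)
print(len(hist), sum(hist))  # -> 512 8
```

In the full method this global histogram would be concatenated with the bag-of-words histogram of SURF descriptors (and repeated per colour channel); those stages are omitted here since they depend on details not given in the abstract.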


    1. 1)
      • 1. Yu, J., Qin, Z., Wan, T., et al: ‘Feature integration analysis of bag of features model for image retrieval’, Neurocomputing, 2013, 120, pp. 355364.
    2. 2)
      • 2. Banerji, S., Verma, A., Liu, C.: ‘LBP and color descriptors for image classification’, in Liu, C., Mago, V.K. (Eds.): ‘Springer cross disciplinary biometric systems’ (Springer Berlin Heidelberg, 2012), pp. 205225.
    3. 3)
      • 3. Lowe, D.G.: ‘Distinctive image features from scale-invariant keypoints’, Int. J. Comput. Vis., 2004, 60, (2), pp. 91110.
    4. 4)
      • 4. Huang, Z., Kang, W., Wu, Q., et al: ‘A new descriptor resistant to affine transformation and monotonic intensity change’, Comput. Vis. Image Underst., 2014, 120, pp. 117125.
    5. 5)
      • 5. Huang, M., Mu, Z., Zeng, H.: ‘Efficient image classification via sparse coding spatial pyramid matching representation of SIFT-WCS-LTP feature’, IET Image Process., 2016, 10, (1), pp. 6167.
    6. 6)
      • 6. Kavitha, H., Sudhamani, M.V.: ‘Object based image retrieval from database using combined features’. Int. Conf. on Signal and Image Processing (ICSIP), Bangalore, Karnataka, India, January 2014, pp. 161165.
    7. 7)
      • 7. Harris, C., Stephens, M.J.: ‘A combined corner and edge detector’. Alvey Vision Conf., Manchester, UK, September 1988, pp. 147152.
    8. 8)
      • 8. Mikolajczyk, K., Schmid, C.: ‘Indexing based on scale invariant interest points’. Int. Conf. on Computer Vision (ICCV), Vancouver, BC, Canada, July 2001, vol. 1, pp. 525531.
    9. 9)
      • 9. Ke, Y., Sukthankar, R.: ‘PCA-SIFT: ‘A more distinctive representation for local image descriptors’. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA, July 2004, vol. 2, pp. 506513.
    10. 10)
      • 10. Mikolajczyk, K., Schmid, C.: ‘A performance evaluation of local descriptors’, IEEE Trans. Pattern Anal. Mach. Intell., 2005, 27, (10), pp. 16151630.
    11. 11)
      • 11. Bay, H., Essa, A., Tuytelaarsb, T., et al: ‘Speeded-up robust features (SURF)’, Comput. Vis. Image Underst., 2008, 110, (3), pp. 346359.
    12. 12)
      • 12. Muralidharan, R., Chandrasekar, C.: ‘Combining local and global feature for object recognition using SVM-KNN’. Int. conf. on Pattern Recognition, Informatics and Medical Engineering (PRIME), Salem, Tamilnadu, March 2012, pp. 17.
    13. 13)
      • 13. Chaudhary, M.D., Upadhyay, A.B.: ‘Fusion of local and global features using stationary wavelet transform for efficient content based image retrieval’. Int. Conf. on Electrical, Electronics and Computer Science (SCEECS), Bhopal, March 2014, pp. 16.
    14. 14)
      • 14. Gupta, E., Kushwah, R.S.: ‘Combination of global and local features using DWT with SVM for CBIR’. Int. Conf. on Reliability, Infocom Technologies and Optimization (ICRITO), Noida, India, September 2015, pp. 16.
    15. 15)
      • 15. Ojala, T., Pietikäinen, M., Harwood, D.: ‘A comparative study of texture measures with classification based on featured distributions’, Pattern Recognit., 1996, 29, (1), pp. 5159.
    16. 16)
      • 16. Zhu, C., Bichot, C.E., Chen, L.: ‘Image region description using orthogonal combination of local binary patterns enhanced with color information’, Pattern Recognit., 2013, 46, (7), pp. 19491963.
    17. 17)
      • 17. Sinha, A., Banerji, S., Liu, C.: ‘New color GPHOG descriptors for object and scene image Classification’, Mach. Vis. Appl., 2014, 25, (2), pp. 361375.
    18. 18)
      • 18. Ashraf, R., Bashir, K., Irtaza, A., et al: ‘Content based image retrieval using embedded neural networks with bandletized regions’, Entropy, 2015, 17, (6), pp. 35523580.
    19. 19)
      • 19. Tajeripour, F., Saberi, M., Fekri-Ershad, S.: ‘Developing a novel approach for content based image retrieval using modified local binary patterns and morphological transform’, Int. Arab. J. Inf. Techn., 2015, 12, (6), pp. 574581.
    20. 20)
      • 20. Kavitha, H., Sudhamani, M.V.: ‘Content-based image retrieval using edge and gradient orientation features of an object in an image from database’, J. Intell. Syst., 2015, 25, (3), pp. 441454.
    21. 21)
      • 21. Heikkila, M., Pietikainen, M., Schmid, C.: ‘Description of interest regions with local binary patterns’, Pattern Recognit., 2009, 42, (3), pp. 425436.
    22. 22)
      • 22. Ojala, T., Pietikäinen, M., Mäenpää, T.: ‘Multiresolution gray-scale and rotation invariant texture classification with local binary patterns’, IEEE Trans. Pattern Anal. Mach. Intell., 2002, 24, (7), pp. 971987.
    23. 23)
      • 23. Liao, S., Law, M.W.K., Chung, A.C.S.: ‘Dominant local binary patterns for texture classification’, IEEE Trans. Image Process., 2009, 18, (5), pp. 11071118.
    24. 24)
      • 24. Guo, Z., Zhang, L., Zhang, D.: ‘A completed modeling of local binary pattern operator for texture classification’, IEEE Trans. Image Process., 2010, 19, (6), pp. 16571663.
    25. 25)
      • 25. Khellah, F.M.: ‘Texture classification using dominant neighborhood structure’, IEEE Trans. Image Process., 2011, 20, (11), pp. 32703279.
    26. 26)
      • 26. Tan, X., Triggs, B.: ‘Enhanced local texture feature sets for face recognition under difficult lighting conditions’, IEEE Trans. Image Process., 2010, 19, (6), pp. 16351650.
    27. 27)
      • 27. Zhao, Y., Jia, W., Hu, R.X., et al: ‘Completed robust local binary pattern for texture classification’, Neurocomputing, 2013, 106, pp. 6876.
    28. 28)
      • 28. Sandid, F., Douik, A.: ‘Texture descriptor based on local combination adaptive ternary pattern’, IET Image Process., 2015, 9, (8), pp. 634642.
    29. 29)
      • 29. Mallat, S.G.: ‘A theory for multiresolution signal decomposition: the wavelet representation’, IEEE Trans. Pattern Anal. Mach. Intell., 1989, 11, (7), pp. 674693.
    30. 30)
      • 30. Smith, J.R., Chang, S.F.: ‘Automated binary texture feature sets for image retrieval’. Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Atlanta, GA, May 1996, pp. 22392242.
    31. 31)
      • 31. Jurie, F., Triggs, B.: ‘Creating efficient codebooks for visual recognition’. Int. Conf. on Computer Vision (ICCV), Beijing, China, October 2005, vol. 2, pp. 604610.
    32. 32)
      • 32. Zhang, J., Marszałek, M., Lazebnik, S., et al: ‘Local features and kernels for classification of texture and object categories: A comprehensive study’, Int. J. Comput. Vis., 2007, 73, (2), pp. 213238.
    33. 33)
      • 33. Mansoori, N.S., Nejati, M., Razzaghi, P., et al: ‘Bag of visual words approach for image retrieval using color information’. Iranian Conf. on Electrical Engineering (ICEE), Mashhad, Iran, May 2013, pp. 16.
    34. 34)
      • 34. Ojala, T., Mäenpää, T., Pietikainen, M., et al: ‘Outex-new framework for empirical evaluation of texture analysis algorithms’. Int. Conf. on Pattern Recognition, Quebec, Canada, August 2002, vol. 1, pp. 701706.
    35. 35)
      • 35. Oliva, A., Torralba, A.: ‘Modeling the shape of the scene: A holistic representation of the spatial envelope’, Int. J. Comput. Vis., 2001, 42, (3), pp. 145175.
    36. 36)
      • 36. Xiaolin, T., Licheng, J., Xianlong, L., et al: ‘Feature integration of EODH and Color-SIFT: Application to image retrieval based on codebook’, Signal. Process-Image, 2014, 29, pp. 530545.
    37. 37)
      • 37. De Brabanter, K., Karsmakers, P., Ojeda, F., et al: ‘LS-SVMlab toolbox user's guide version 1.8’. Internal Report, 10–146, ESAT-SISTA, K.U. Leuven, Leuven, Belgium, 2010.
