
New strategy for CBIR by combining low-level visual features with a colour descriptor


In computer vision, the analysis of image content plays a significant role in intelligent tasks such as object recognition and image retrieval. This content can comprise low-level visual features or the colour information within an image. For content-based image retrieval (CBIR), most existing methods focus on either low-level visual feature extraction or colour information alone, and few works retrieve images by fusing both types of content. This work therefore addresses the problem of combining low-level visual features with colour information to improve the retrieval accuracy of CBIR. The proposed strategy detects low-level salient features with the features from accelerated segment test (FAST) descriptor and quantises the salient keypoints into a feature vector. The colour information of the image is segmented in the non-linear L*a*b* colour space and quantised into a second feature vector. Similarities are computed for the visual and colour feature vectors and then combined, and the top-ranked images are retrieved for the resulting representation using a distance metric. Experimental results on two standard benchmark datasets show improved efficiency and 85% retrieval accuracy for the proposed strategy compared with state-of-the-art methods.
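The abstract describes the retrieval pipeline only at a high level. The Python sketch below (using OpenCV and scikit-learn) is one plausible reading of it, not the authors' implementation: the vocabulary size N_WORDS, the L*a*b* histogram bins, the use of ORB descriptors computed at the FAST keypoints, the Euclidean distance and the equal-weight fusion ALPHA are all assumptions introduced for illustration.

# Illustrative sketch of the pipeline outlined in the abstract (assumptions noted above).
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

N_WORDS = 64          # visual vocabulary size (assumption)
LAB_BINS = [8, 8, 8]  # bins per L*, a*, b* channel (assumption)
ALPHA = 0.5           # weight between visual and colour distances (assumption)

fast = cv2.FastFeatureDetector_create()
orb = cv2.ORB_create()  # descriptor at the FAST keypoints (choice not given in the abstract)

def fast_descriptors(img):
    """Detect FAST keypoints and describe them so they can be quantised."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kps = fast.detect(gray, None)
    kps, desc = orb.compute(gray, kps)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def build_codebook(train_images):
    """Cluster descriptors from a training set into N_WORDS visual words."""
    all_desc = np.vstack([fast_descriptors(im) for im in train_images]).astype(np.float32)
    return MiniBatchKMeans(n_clusters=N_WORDS, random_state=0).fit(all_desc)

def visual_histogram(img, codebook):
    """Quantise the image's keypoint descriptors into a normalised visual-word histogram."""
    desc = fast_descriptors(img).astype(np.float32)
    if desc.shape[0] == 0:
        return np.zeros(N_WORDS, np.float32)
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=N_WORDS).astype(np.float32)
    return hist / (hist.sum() + 1e-7)

def colour_histogram(img):
    """Normalised 3-D histogram in the non-linear L*a*b* colour space."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    hist = cv2.calcHist([lab], [0, 1, 2], None, LAB_BINS,
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-7)

def fused_distance(query, candidate, codebook):
    """Combine the visual and colour distances into a single ranking score."""
    d_vis = np.linalg.norm(visual_histogram(query, codebook) - visual_histogram(candidate, codebook))
    d_col = np.linalg.norm(colour_histogram(query) - colour_histogram(candidate))
    return ALPHA * d_vis + (1.0 - ALPHA) * d_col

# Ranking: score every database image against the query and keep the top k, e.g.
#   ranked = sorted(db_images, key=lambda im: fused_distance(query_img, im, codebook))[:10]

Note that clustering binary ORB descriptors with plain k-means is a simplification chosen to keep the sketch short; a Hamming-aware vocabulary (or a different descriptor at the FAST keypoints) would be a more careful choice in practice.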
