Visual saliency object detection using sparse learning

In many applications, recognising the relationship between a user and a computer requires detecting the position at which the user is looking. To this end, a salient object should be extracted, that is, the object that attracts the viewer's attention. In this study, a new method based on learning automata and sparse algorithms is proposed to extract the object saliency map. In the proposed method, after decomposing an image into its superpixels, eight features are extracted (namely, three features in red–green–blue colour space, coalition, central bias, rotation, brightness, and colour difference). The extracted features are then normalised to zero mean and unit variance, and K-means singular-value decomposition (K-SVD) is used to integrate them. The performance of the proposed method is compared with that of 20 other methods on four databases: MSRA-100, ECSSD, MSRA-10K, and Pascal-S. The results show that the proposed method predicts the salient object more accurately than the other methods.
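The per-superpixel feature normalisation described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the number of superpixels and the synthetic feature values are assumptions, with only the eight-feature dimensionality and the zero-mean, unit-variance normalisation taken from the abstract.

```python
import numpy as np

# Assume each superpixel yields an 8-dimensional feature vector
# (3 RGB features, coalition, central bias, rotation, brightness,
# colour difference). Values here are synthetic placeholders.
rng = np.random.default_rng(0)
n_superpixels, n_features = 200, 8
X = rng.normal(loc=3.0, scale=2.0, size=(n_superpixels, n_features))

# Normalise each feature column to zero mean and unit variance,
# as a preprocessing step before dictionary-based (K-SVD) integration.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(X_norm.mean(axis=0), 0.0))  # True
print(np.allclose(X_norm.std(axis=0), 1.0))   # True
```

After this step, the normalised feature matrix would be the input to the K-SVD stage, which learns a dictionary whose sparse codes integrate the eight feature channels into a single saliency representation.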
