Optimisation for image salient object detection based on semantic-aware clustering and CRF

State-of-the-art optimisation methods for salient object detection neglect the fact that the saliency maps of different images usually exhibit different imperfections, so the saliency maps of some images cannot be optimised effectively. Based on the observation that the saliency maps of semantically similar images usually exhibit similar imperfections, the authors propose an optimisation method for salient object detection based on semantic-aware clustering and a conditional random field (CRF), named CCRF. They first cluster the training images into several clusters using semantic features extracted with a deep convolutional neural network model for image classification. For each cluster, they then use a CRF to optimise the saliency maps generated by existing salient object detection methods, with a grid search used to compute the optimal weights of the CRF kernels. The saliency maps of the testing images are optimised by the corresponding CRFs with the optimal weights. Experimental results with 13 typical salient object detection methods on four datasets show that the proposed CCRF algorithm effectively improves the results of a variety of image salient object detection methods and outperforms the compared optimisation methods.
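To make the pipeline described in the abstract concrete, the sketch below illustrates the CCRF idea: cluster training images by CNN semantic features, grid-search the CRF kernel weights per cluster against ground-truth masks, then refine each test saliency map with the CRF of its nearest cluster. This is a minimal sketch, not the authors' implementation: the use of scikit-learn KMeans, the pydensecrf dense CRF, mean absolute error as the grid-search criterion, and all hyperparameter values (cluster count, kernel bandwidths, weight grid) are illustrative assumptions, and the semantic feature vectors are assumed to have been precomputed with an image-classification CNN such as VGG.

```python
# Minimal sketch of the CCRF pipeline (illustrative, not the authors' code).
# Assumes: `features` are precomputed CNN semantic feature vectors, images are
# uint8 HxWx3 RGB arrays, coarse saliency maps and ground-truth masks are
# float arrays in [0, 1], and the pydensecrf package is installed.
import numpy as np
from sklearn.cluster import KMeans
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax


def crf_refine(image, saliency, w_gaussian, w_bilateral, n_iters=5):
    """Refine one coarse saliency map with a fully connected 2-label CRF."""
    h, w = saliency.shape
    prob = np.clip(saliency.astype(np.float32), 1e-6, 1.0 - 1e-6)
    softmax = np.stack([1.0 - prob, prob], axis=0)   # background / salient
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(softmax))
    # Smoothness kernel (positions only) and appearance kernel (positions +
    # colour); the per-cluster kernel weights are what the grid search tunes.
    d.addPairwiseGaussian(sxy=3, compat=w_gaussian)
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(image),
                           compat=w_bilateral)
    q = np.array(d.inference(n_iters)).reshape(2, h, w)
    return q[1]                                      # refined saliency map


def fit_ccrf(features, images, coarse_maps, gt_masks,
             n_clusters=8, weight_grid=(1, 3, 5, 10)):
    """Cluster training images semantically and grid-search CRF weights per cluster."""
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(features)
    best_weights = {}
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        best_mae, best_w = np.inf, (weight_grid[0], weight_grid[0])
        for wg in weight_grid:
            for wb in weight_grid:
                # MAE against ground truth is an assumed selection criterion.
                mae = np.mean([np.abs(crf_refine(images[i], coarse_maps[i], wg, wb)
                                      - gt_masks[i]).mean() for i in idx])
                if mae < best_mae:
                    best_mae, best_w = mae, (wg, wb)
        best_weights[c] = best_w
    return km, best_weights


def refine_test_image(km, best_weights, feature, image, coarse_map):
    """Assign a test image to its semantic cluster and apply that cluster's CRF."""
    c = int(km.predict(feature[None, :])[0])
    wg, wb = best_weights[c]
    return crf_refine(image, coarse_map, wg, wb)
```

In this sketch the grid search is exhaustive over a small set of candidate weights for the two pairwise kernels, which mirrors the role the grid search plays in the abstract; the actual search range and evaluation measure used by the authors are not specified here.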
