Saliency detection using suitable variant of local and global consistency

In the existing local and global consistency (LGC) framework, the cost functions associated with the classifying functions use the sum of each row of the weight matrix as an important factor, and some of these classifying functions have been successfully applied to saliency detection. From the perspective of saliency detection, this factor is inversely proportional to the colour contrast between image regions and their surroundings. However, an image region with a large colour contrast against its surroundings is not necessarily a salient region. Therefore, a suitable variant of LGC is introduced by removing this factor from the cost function, and a suitable classifying function (SCF) is determined. A saliency detection method is then presented that utilises the SCF together with a content-based initial label assignment scheme and an appearance-based label assignment scheme. By updating the content-based initial labels and the appearance-based labels with the SCF, a coarse saliency map and several intermediate saliency maps are obtained. Furthermore, to enhance detection accuracy, a novel optimisation function is presented that fuses the intermediate saliency maps with high detection performance to generate the final saliency map. Extensive experimental results demonstrate that the proposed method achieves competitive performance against recent state-of-the-art saliency detection algorithms.
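For readers unfamiliar with the LGC formulation, the sketch below illustrates the kind of modification described above: the standard LGC classifying function normalises its smoothness term by the row sums (degrees) of the weight matrix, whereas a degree-free variant drops that factor and works with the unnormalised graph Laplacian. This is a minimal NumPy illustration under those assumptions; the function names are hypothetical and the exact SCF adopted in the paper may differ in detail.

import numpy as np

def lgc_standard(W, Y, alpha=0.99):
    # Standard LGC classifying function (Zhou et al., NIPS 2003), up to a
    # constant scale: F* = (I - alpha * D^{-1/2} W D^{-1/2})^{-1} Y, where the
    # diagonal of D holds the row sums (degrees) of the weight matrix W.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt        # symmetrically normalised affinity
    return np.linalg.solve(np.eye(len(d)) - alpha * S, Y)

def lgc_degree_free(W, Y, mu=0.01):
    # Hypothetical degree-free variant: the row-sum factor is removed from the
    # smoothness term, giving the unnormalised Laplacian L = D - W and a
    # closed-form minimiser of the form F* = mu * (L + mu * I)^{-1} Y
    # (constants folded into mu).
    d = W.sum(axis=1)
    L = np.diag(d) - W                     # unnormalised graph Laplacian
    return mu * np.linalg.solve(L + mu * np.eye(len(d)), Y)

# Example usage (assumed setup): W is an affinity matrix over image regions
# such as superpixels, e.g. W = exp(-pairwise colour distances / sigma^2),
# and Y is a vector of initial labels (e.g. background or foreground seeds).
# saliency = lgc_degree_free(W, Y)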

Inspec keywords: image classification; image colour analysis

Other keywords: image regions; colour contrast; local and global consistency framework; saliency detection; content-based initial labels; coarse saliency map; LGC framework; content-based initial label assignment scheme; appearance-based label assignment scheme; suitable classifying function; appearance-based labels; SCF

Subjects: Computer vision and image processing techniques; Image recognition
