Multi-scale contrast-based saliency enhancement for salient object detection

To achieve more complete and more uniformly highlighted salient object regions, this study presents a computational saliency enhancement model that incorporates multi-scale and logarithmic-response properties into the local and global contrasts. A distinctive feature of the authors' model is a novel saliency enhancement operator, which effectively enhances the saliency of object interior regions while reducing the blur on object boundaries caused by the use of multiple scales. The model is general and allows flexible trade-offs between precision and recall. Detailed comparisons with 12 state-of-the-art methods show that the proposed method obtains satisfactory salient object regions that are closer to the human-labelled ground truth, and that it provides superior results in terms of precision–recall, F-measure and mean absolute error.
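The abstract does not give the exact form of the enhancement operator, but the idea it describes (multi-scale local and global contrast, a logarithmic response, and an operator that rewards agreement across scales in object interiors while damping scale-specific blur at boundaries) can be illustrated with the minimal Python/OpenCV sketch below. Everything in it, including the Gaussian centre-surround contrast, the geometric-mean combination across scales and the scale set `sigmas`, is an illustrative assumption rather than the authors' implementation.

```python
# A minimal sketch of the idea described in the abstract, NOT the authors'
# implementation: local centre-surround contrast is computed at several
# scales, combined with a crude global-contrast term, passed through a
# logarithmic response, and the per-scale maps are fused with a
# product-style operator so that pixels salient at every scale (object
# interiors) are boosted while pixels salient at only a few scales
# (blurred boundaries) are suppressed. All parameters are assumptions.

import numpy as np
import cv2


def local_contrast(gray, sigma):
    """Centre-surround contrast at one scale: |pixel - Gaussian surround|."""
    surround = cv2.GaussianBlur(gray, (0, 0), sigma)
    return np.abs(gray - surround)


def multiscale_saliency(image_bgr, sigmas=(2, 4, 8, 16)):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0

    # Global contrast: distance of each pixel from the mean image value
    # (a simple stand-in for the global term used in contrast-based models).
    global_map = np.abs(gray - gray.mean())

    per_scale = []
    for sigma in sigmas:
        local_map = local_contrast(gray, sigma)
        # Logarithmic response: compresses large contrasts and lifts weak
        # interior responses.
        response = np.log1p(local_map + global_map)
        per_scale.append(response / (response.max() + 1e-12))

    # Assumed enhancement operator: the geometric mean across scales keeps
    # regions supported at all scales and damps scale-specific boundary blur.
    stack = np.stack(per_scale, axis=0)
    enhanced = np.exp(np.mean(np.log(stack + 1e-12), axis=0))

    # Normalise to [0, 1].
    enhanced -= enhanced.min()
    enhanced /= enhanced.max() + 1e-12
    return enhanced


if __name__ == "__main__":
    img = cv2.imread("example.jpg")  # any test image
    saliency = multiscale_saliency(img)
    cv2.imwrite("saliency.png", (saliency * 255).astype(np.uint8))
```

The reported evaluation metrics are the standard ones for salient object detection: precision–recall curves are obtained by thresholding the saliency map against the binary ground truth, the F-measure is (1 + β²)·precision·recall / (β²·precision + recall), commonly with β² = 0.3, and the mean absolute error is the average per-pixel absolute difference between the normalised saliency map and the ground-truth mask.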
