
Efficient Bayesian approach to saliency detection based on Dirichlet process mixture


Saliency detection plays an important role in many image processing applications. This study introduces a new Bayesian framework for saliency detection in which image saliency is computed as the product of three saliency terms: location-based, feature-based and centre-surround saliency. Each term is estimated with a statistical approach; in particular, the centre-surround saliency is estimated with a Dirichlet process mixture model. The authors evaluate their method on five different databases and show that it outperforms state-of-the-art methods while keeping the computational cost low.
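As a rough illustration of the combination step described in the abstract, the sketch below (Python/NumPy, not the authors' implementation) forms the final saliency map as a pointwise product of the three terms. The centred Gaussian location prior and the placeholder feature-based and centre-surround maps are assumptions made for illustration only; in the paper each term is estimated with its own statistical model, with a Dirichlet process mixture used for the centre-surround term.

```python
# Minimal sketch of the product-of-saliencies combination (assumed form,
# not the authors' code). The location term is illustrated with a simple
# centre-bias Gaussian; the feature-based and centre-surround maps are
# supplied by the caller (here, random placeholders).
import numpy as np

def location_prior(h, w, sigma=0.3):
    """Hypothetical centre-bias prior: isotropic Gaussian over normalised pixel coordinates."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def combine_saliency(loc_map, feat_map, cs_map, eps=1e-12):
    """Pointwise product of the three saliency terms, rescaled to [0, 1]."""
    s = loc_map * feat_map * cs_map
    return (s - s.min()) / (s.max() - s.min() + eps)

# Usage with placeholder feature-based and centre-surround maps:
h, w = 240, 320
feat = np.random.rand(h, w)   # stand-in for the feature-based saliency
cs = np.random.rand(h, w)     # stand-in for the DPM-based centre-surround saliency
saliency = combine_saliency(location_prior(h, w), feat, cs)
```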
