Towards path-based semantic dissimilarity estimation for scene representation using bottleneck analysis


IET Computer Vision

In natural images, estimating dissimilarities between image elements for scene representation remains challenging owing to gradual variations in illumination, texture, and clutter. To tackle this problem, we use a path-based bottleneck analysis method that captures the semantic information between image elements to measure their dissimilarity. By integrating both spatial continuity and feature consistency into the interpretation of this semantic information, we detect bottlenecks on the proposed double-S path to define the bottleneck distance, which groups image elements that follow a similar pattern while separating dissimilar ones. Experiments show the method to be robust to noise and invariant to changing illumination and arbitrary scales in natural images. Tests on several challenging datasets validate the advantage of applying the path-based bottleneck distance to image ranking and salient object detection.
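The general idea behind a bottleneck (minimax) path distance can be sketched as follows. This is a minimal illustration of the underlying concept only, not the paper's double-S path construction: assuming an undirected weighted graph over image elements (e.g. superpixels, with edge weights given by a feature dissimilarity), the distance between two elements is the smallest possible value, over all connecting paths, of the largest edge weight on the path. Two elements linked by a chain of small steps therefore stay close even when they differ directly, which is what lets the distance follow gradual variations of illumination or texture.

```python
import heapq

def bottleneck_distance(n, edges, source):
    """Minimax (bottleneck) path distance from `source` to every node.

    d[v] = min over all paths P from source to v of the max edge weight on P.
    edges: dict mapping node -> list of (neighbour, weight) pairs.
    """
    INF = float("inf")
    d = [INF] * n
    d[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > d[u]:
            continue  # stale entry
        for v, w in edges.get(u, []):
            # Extending a path costs the largest edge seen so far, not the sum.
            new_cost = max(cost, w)
            if new_cost < d[v]:
                d[v] = new_cost
                heapq.heappush(heap, (new_cost, v))
    return d

# Toy graph: nodes 0 and 2 are directly dissimilar (weight 0.9), but a chain
# of small steps through node 1 keeps the bottleneck at 0.2.
graph = {
    0: [(1, 0.1), (2, 0.9)],
    1: [(0, 0.1), (2, 0.2)],
    2: [(0, 0.9), (1, 0.2)],
}
print(bottleneck_distance(3, graph, 0))  # -> [0.0, 0.1, 0.2]
```

The `max` in place of the usual sum is the only change from Dijkstra's algorithm; the same relaxation is what makes the distance group elements along a smooth pattern while a single large jump (the bottleneck) separates different ones.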

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2018.5560