Fusion of visual salience maps for object acquisition

IET Computer Vision

The paradigm of visual attention has been widely investigated and applied in many computer vision applications. In this study, the authors propose a new saliency-based visual attention algorithm for object acquisition. The algorithm automatically extracts points of visual attention (PVA) in the scene from a set of feature saliency maps, each representing a specific feature domain, such as textural, contrast, and statistical features. A feature-selection procedure, based on probability of detection, false-alarm rate, and repeatability criteria, is proposed to choose the most effective feature combination for the fused saliency map. Motivated by the assumption that the extracted PVA represent the most visually salient regions in the image, the authors suggest using the visual attention approach for object acquisition. A comparison with other well-known point-of-interest detectors shows that the proposed algorithm performs better. The algorithm was successfully tested on synthetic, charge-coupled device (CCD), and infrared (IR) images. Evaluation for object acquisition, based on ground truth, was carried out using synthetic images containing multiple examples of objects with various sizes and brightness levels. A high probability of correct detection (greater than 90%) was achieved with a low false-alarm rate (about 20 false alarms per image).
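The pipeline the abstract describes (per-feature saliency maps, normalization and fusion, then peak extraction as PVA) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the contrast cue (local standard deviation), the statistical cue (deviation from the global mean), the equal fusion weights, and the non-maximum-suppression radius are all choices made for the example.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def contrast_map(img, win=5):
    """Local standard deviation as a simple contrast-based saliency cue."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    return windows.std(axis=(-2, -1))

def statistical_map(img):
    """Absolute deviation from the global mean as a crude statistical cue."""
    return np.abs(img - img.mean())

def normalize(m):
    """Rescale a map to [0, 1]; an all-constant map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_maps(maps, weights=None):
    """Weighted sum of normalized feature saliency maps, renormalized."""
    if weights is None:
        weights = [1.0] * len(maps)
    fused = sum(w * normalize(m) for w, m in zip(weights, maps))
    return normalize(fused)

def extract_pva(saliency, n_points=5, suppress=7):
    """Greedily pick the strongest peaks, zeroing a square
    neighbourhood around each one (non-maximum suppression)."""
    sal = saliency.copy()
    points = []
    for _ in range(n_points):
        r, c = np.unravel_index(np.argmax(sal), sal.shape)
        if sal[r, c] <= 0:
            break
        points.append((r, c))
        sal[max(0, r - suppress):r + suppress + 1,
            max(0, c - suppress):c + suppress + 1] = 0
    return points
```

On a synthetic image with a single bright square on a dark background, the fused map peaks on the square and `extract_pva` returns points on or near it, which mirrors the ground-truth evaluation protocol described above at toy scale.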
