NSCT-PCNN image fusion based on image gradient motivation


The pulse-coupled neural network (PCNN) is widely used in image processing because of its distinctive biological characteristics, which make it well suited to image fusion. When combined with the non-subsampled contourlet transform (NSCT), the PCNN helps overcome the difficulty of selecting coefficients for the NSCT subbands. However, the original model takes only the grey values of image pixels as input, neglecting local image features to which human subjective vision is sensitive. In this study, the improved pulse-coupled neural network model replaces the grey-scale value of the image with the weighted product of the image gradient strength and the local phase coherence as the model input. Compared with other multi-scale-decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study outperforms them in terms of both objective criteria and visual appearance.
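The mechanism described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it computes a simple gradient-strength map (one of the two factors the paper combines; the local phase coherence factor is left as an externally supplied map) and runs a simplified unit-linking-style PCNN whose per-pixel firing counts would serve as the activity map for subband coefficient selection. All parameter values and function names are illustrative assumptions.

```python
import numpy as np

def gradient_strength(img):
    # Central-difference gradient magnitude, a simple stand-in for the
    # gradient operator used as one factor of the PCNN stimulus.
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def pcnn_fire_counts(stimulus, iterations=20, alpha_t=0.2, v_t=20.0,
                     beta=0.3, v_l=1.0):
    # Simplified unit-linking PCNN: each neuron's firing count over the
    # iterations acts as its activity measure. `stimulus` would be the
    # weighted product of gradient strength and local phase coherence.
    h, w = stimulus.shape
    Y = np.zeros((h, w))          # pulse output
    T = np.ones((h, w))           # dynamic threshold
    fire = np.zeros((h, w))       # accumulated firing counts

    def neighbour_sum(y):
        # 4-neighbour linking field (toroidal wrap for simplicity).
        return (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
                np.roll(y, 1, 1) + np.roll(y, -1, 1))

    for _ in range(iterations):
        L = v_l * neighbour_sum(Y)            # linking input
        U = stimulus * (1.0 + beta * L)       # internal activity
        Y = (U > T).astype(float)             # pulse when activity exceeds threshold
        T = np.exp(-alpha_t) * T + v_t * Y    # decay threshold, reset where fired
        fire += Y
    return fire
```

In a fusion pipeline of this kind, each source image's NSCT subband would be fed through such a PCNN, and for every coefficient position the subband whose neuron fired more often would contribute the fused coefficient.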

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0285