NSCT-PCNN image fusion based on image gradient motivation
Authors: Shifei Ding 1; Xingyu Zhao 1; Hui Xu 1; Qiangbo Zhu 1; Yu Xue 2
Affiliations:
1: School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, People's Republic of China
2: School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, People's Republic of China
Source: Volume 12, Issue 4, June 2018, pp. 377–383
DOI: 10.1049/iet-cvi.2017.0285; Print ISSN 1751-9632; Online ISSN 1751-9640
The pulse-coupled neural network (PCNN) is widely used in image processing because its unique biological characteristics make it well suited to image fusion. When combined with the non-subsampled contourlet transform (NSCT) model, it helps overcome the difficulty of selecting coefficients for the subbands of the NSCT decomposition. In the original model, however, only the grey values of image pixels are used as input, without accounting for the sensitivity of human subjective vision to local features of the image. In this study, the improved pulse-coupled neural network model replaces the grey-scale value of the image with the weighted product of the image gradient strength and the local phase coherence as the model input. Compared with other multi-scale-decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study outperforms them in terms of both objective criteria and visual appearance.
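As an illustrative sketch only, not the paper's implementation: the abstract's central idea of driving a simplified PCNN with a gradient-motivated stimulus, rather than raw grey values, and selecting per-coefficient between two sources by firing frequency, could look roughly as follows. All function names, parameter values, and the linking model are assumptions; the local-phase-coherence term is a trivial placeholder (the real measure follows Hassen et al.'s local phase coherence), and in the actual method the selection would be applied per NSCT subband.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient strength of a 2-D grey image (illustrative)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return np.hypot(gx, gy)

def pcnn_firing_times(stimulus, iterations=20, beta=0.1,
                      alpha_theta=0.2, v_theta=20.0):
    """Simplified PCNN: count how often each neuron fires.

    The external input F is the (gradient-based) stimulus instead of the
    raw grey value; neurons with a stronger stimulus fire earlier and more
    often, which is what the fusion rule below exploits.
    """
    F = stimulus.astype(float)
    Y = np.zeros_like(F)        # pulse output
    theta = np.ones_like(F)     # dynamic threshold
    fire_count = np.zeros_like(F)
    for _ in range(iterations):
        # 3x3 linking input summed from the previous pulse output
        p = np.pad(Y, 1)
        L = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
             p[1:-1, :-2] + p[1:-1, 2:] +
             p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])
        U = F * (1.0 + beta * L)            # internal activity
        Y = (U > theta).astype(float)       # fire where activity beats threshold
        theta = theta * np.exp(-alpha_theta) + v_theta * Y
        fire_count += Y
    return fire_count

def fuse(coeff_a, coeff_b, lpc_a=None, lpc_b=None, a=1.0, b=1.0):
    """Per-coefficient selection: keep the source whose PCNN, driven by
    the weighted product (gradient strength)^a * (phase coherence)^b,
    fires more often. LPC defaults to 1 as a placeholder."""
    lpc_a = np.ones_like(coeff_a) if lpc_a is None else lpc_a
    lpc_b = np.ones_like(coeff_b) if lpc_b is None else lpc_b
    s_a = gradient_magnitude(coeff_a) ** a * lpc_a ** b
    s_b = gradient_magnitude(coeff_b) ** a * lpc_b ** b
    t_a = pcnn_firing_times(s_a)
    t_b = pcnn_firing_times(s_b)
    return np.where(t_a >= t_b, coeff_a, coeff_b)
```

In the paper's pipeline this selection would run on each NSCT subband pair, with the fused subbands then inverted back through the NSCT; the sketch above only shows the stimulus construction and the firing-frequency decision rule.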
Inspec keywords: neural nets; image resolution; gradient methods; image fusion
Other keywords: improved pulse-coupled neural network model; nonsubsampled contourlet model; image pixels; image gradient motivation; NSCT-PCNN image fusion
Subjects: Computer vision and image processing techniques; Neural computing techniques; Interpolation and function approximation (numerical analysis); Optical, image and video signal processing