NSCT-PCNN image fusion based on image gradient motivation

The pulse coupled neural network (PCNN) is widely used in image processing because of its unique biological characteristics, which make it well suited to image fusion. Combined with the non-subsampled contourlet transform (NSCT), the PCNN helps overcome the difficulty of selecting coefficients for the NSCT subbands. In the original model, however, only the grey values of image pixels are used as input, which ignores the local image features to which human subjective vision is sensitive. In this study, the improved PCNN model replaces the grey-scale value with the weighted product of the image gradient strength and the local phase coherence as the model input. Compared with other multi-scale decomposition-based image fusion methods and other improved NSCT-PCNN algorithms, the algorithm presented in this study performs better in terms of both objective criteria and visual appearance.
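To make the described fusion rule concrete, the following Python sketch illustrates one way a gradient-motivated PCNN input could be formed and used to select subband coefficients. It is a minimal sketch, not the paper's exact implementation: the weighting exponents, the phase_coherence_proxy helper (a simple local-contrast stand-in for a true local phase coherence measure) and the simplified_pcnn_firing routine are illustrative assumptions.

```python
# Minimal sketch: gradient-strength x local-phase-coherence motivation feeding a
# simplified PCNN, whose firing counts decide which subband coefficients to keep.
# The helpers and parameters below are assumptions for illustration only.
import numpy as np
from scipy import ndimage

def gradient_strength(img):
    """Gradient magnitude via Sobel operators."""
    gx = ndimage.sobel(img, axis=1, mode='reflect')
    gy = ndimage.sobel(img, axis=0, mode='reflect')
    return np.hypot(gx, gy)

def phase_coherence_proxy(img, size=7):
    """Hypothetical local-sharpness proxy standing in for local phase coherence
    (the paper uses a genuine phase-coherence measure; this is local std. dev.)."""
    local_mean = ndimage.uniform_filter(img, size)
    local_sq = ndimage.uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))

def pcnn_motivation(subband, alpha=1.0, beta=1.0):
    """Weighted product of gradient strength and local phase coherence,
    used as the PCNN feeding input instead of raw grey values."""
    return gradient_strength(subband) ** alpha * phase_coherence_proxy(subband) ** beta

def simplified_pcnn_firing(stimulus, iterations=200, beta_link=0.2,
                           alpha_theta=0.2, v_theta=20.0):
    """Count how many times each neuron fires in a simplified PCNN."""
    F = stimulus / (stimulus.max() + 1e-12)          # normalised feeding input
    Y = np.zeros_like(F)                             # pulse output
    theta = np.ones_like(F)                          # dynamic threshold
    fire_count = np.zeros_like(F)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(iterations):
        L = ndimage.convolve(Y, kernel, mode='constant')  # linking from neighbours
        U = F * (1.0 + beta_link * L)                     # internal activity
        Y = (U > theta).astype(float)                     # pulse generation
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y
    return fire_count

if __name__ == '__main__':
    # Fuse two toy "subbands" by keeping the coefficient whose neuron fires more.
    a, b = np.random.rand(64, 64), np.random.rand(64, 64)
    fa = simplified_pcnn_firing(pcnn_motivation(a))
    fb = simplified_pcnn_firing(pcnn_motivation(b))
    fused = np.where(fa >= fb, a, b)
    print(fused.shape)
```

In the full method this selection would be applied to each NSCT subband of the source images before the inverse transform; here the subbands are replaced by random arrays purely to keep the example self-contained.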
