NSST domain CT–MR neurological image fusion using optimised biologically inspired neural network

Diagnostic medical imaging plays an imperative role in the clinical assessment and treatment of medical abnormalities. The fusion of multimodal medical images merges complementary information present in the multi-source images and provides a better interpretation with improved diagnostic accuracy. This paper presents a CT–MR neurological image fusion method using an optimised biologically inspired neural network in the nonsubsampled shearlet transform (NSST) domain. The NSST-decomposed coefficients are used to activate the neural model, optimised by the particle swarm optimisation method, and to generate the firing maps. Low- and high-frequency NSST subbands are fused using a max-rule based on the firing maps. In the optimisation process, a fitness function is evaluated based on the spatial frequency and edge index of the resultant fused image. To analyse the fusion performance, extensive experiments are conducted on different CT–MR neurological image datasets. Objective performance is evaluated using different metrics to highlight the clarity, contrast, correlation, visual quality, complementary information, salient information, and edge information present in the fused images. Experimental results show that the proposed method provides better-fused images and outperforms other existing methods in both visual and quantitative assessments.
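As a rough illustration of the fusion scheme summarised above, the following minimal Python sketch shows the two ingredients the abstract names: a spatial-frequency term of the kind used in the fitness function, and a max-rule that selects, per pixel, the subband coefficient whose firing map responded more strongly. This is a hedged sketch under assumed conventions, not the authors' implementation; the function names and details are illustrative.

```python
import numpy as np

def spatial_frequency(img):
    # Spatial frequency: combined row- and column-wise gradient energy,
    # a common sharpness term in fusion fitness functions (illustrative).
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def max_rule_fuse(coeff_a, coeff_b, fire_a, fire_b):
    # Max-rule on firing maps: at each position keep the coefficient
    # whose neuron fired more strongly.
    return np.where(fire_a >= fire_b, coeff_a, coeff_b)
```

In the full method, `max_rule_fuse` would be applied to each pair of low- and high-frequency NSST subbands, and the fused image reconstructed by the inverse NSST before evaluating the fitness function.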

Inspec keywords: computerised tomography; neural nets; neurophysiology; image denoising; particle swarm optimisation; image fusion; medical image processing; image resolution

Other keywords: edge information present; edge index; particle swarm optimisation; nonsubsampled shearlet domain; resultant fused image; spatial frequency; better-fused images; NSST domain CT–MR neurological image fusion; inverse NSST; high-frequency NSST subbands; firing maps; tomography–magnetic resonance neurological image fusion method; optimised neural model; fusion performance; diagnostic medical imaging; clinical diagnosis; multimodal medical images; medical abnormalities; optimisation process; imperative role; CT–MR neurological image datasets; fused subbands; optimised biologically inspired neural network; clinical assessment; complementary information present; multisource images

Subjects: Neural nets; Biology and medical computing; Computer vision and image processing techniques; X-rays and particle beams (medical uses); Optical, image and video signal processing; Patient diagnostic methods and instrumentation; Optimisation techniques; Sensor fusion; X-ray techniques: radiography and computed tomography (biomedical imaging/measurement)

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2020.0219