CT and MR image information fusion scheme using a cascaded framework in ripplet and NSST domain


IET Image Processing

The fusion of multimodal medical information is considered an assistive approach for medical professionals. Computed tomography and magnetic resonance (CT–MR) medical image fusion can help the radiologist diagnose disease precisely and decide the required treatment according to the patient's condition. Therefore, a cascaded framework is proposed in this study that presents a fusion approach for multimodal medical information in the ripplet transform (RT) and non-subsampled shearlet transform (NSST) domains. The RT and NSST, having complementary features, are utilised in a cascade that provides numerous directional decomposition coefficients and increases shift-invariance information in the fused images. At the first decomposition stage, a biologically inspired neural model, driven by the novel sum-modified Laplacian and by spatial frequency, is utilised to fuse the low- and high-frequency coefficients, respectively; at the second stage, a max fusion rule based on regional energy is applied. The model also preserves redundant information. The fusion performance is validated by extensive simulations performed on CT–MR image datasets of different diseases. Experimental results demonstrate that the proposed method provides better fused images, in terms of both visual quality and quantitative indices, compared with several existing fusion approaches.
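The abstract names two activity measures used in the fusion rules: the sum-modified Laplacian (SML) for low-frequency coefficients and regional energy for the stage-2 max rule. The sketch below, a minimal NumPy illustration and not the authors' implementation, shows how these measures can be computed and how a regional-energy max rule selects between two coefficient subbands; the function names, window size, and padding mode are assumptions for illustration.

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Per-pixel sum-modified Laplacian (SML), a standard activity
    measure: |2*I(x,y) - I(x-s,y) - I(x+s,y)| + |2*I(x,y) - I(x,y-s) - I(x,y+s)|."""
    p = np.pad(img, step, mode='edge')
    c = p[step:-step, step:-step]
    return (np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
            + np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]))

def regional_energy(coeff, win=3):
    """Sum of squared coefficients over a win x win neighbourhood
    (edge-padded so the output has the same shape as the input)."""
    pad = win // 2
    p = np.pad(coeff.astype(float) ** 2, pad, mode='edge')
    h, w = coeff.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out

def fuse_max_regional_energy(c1, c2, win=3):
    """Stage-2 max rule: at each position, keep the coefficient from
    whichever source subband has the larger regional energy."""
    e1, e2 = regional_energy(c1, win), regional_energy(c2, win)
    return np.where(e1 >= e2, c1, c2)
```

In a full pipeline these rules would be applied subband-by-subband to RT and then NSST decomposition coefficients before the inverse transforms reconstruct the fused image; the decomposition itself (ripplet/NSST) is omitted here as it requires dedicated libraries.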
