Robust image fusion with block sparse representation and online dictionary learning



For many image fusion problems, the most widely used approach is to select the features that carry the richest information. A robust image fusion method based on the block compressive sensing principle is studied here; compressive sensing is known to provide an effective reconstruction framework with high accuracy. The framework of the proposed method is presented from several perspectives: block sparse representation, restoration algorithms, feature extraction, online dictionary learning, and fusion rules. For restoration of the fused image, split Bregman iteration is adopted. The proposed method can obtain a well-fused image from the source images while simultaneously removing degradations such as noise and blurring. In addition, both 'maximum selection' and 'weighted mean' fusion rules are investigated, which preserve more of the source information. Overall, the proposed method achieves better fusion results from the source images. Experiments on both noisy and noise-free source images illustrate that the proposed method yields competitive fusion results.
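The pipeline the abstract outlines — block decomposition, online dictionary learning, block-wise sparse coding, coefficient fusion by 'maximum selection' or 'weighted mean', then reconstruction — can be sketched as below. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `MiniBatchDictionaryLearning` as a stand-in for online dictionary learning and OMP for sparse coding, measures block activity by the l1 norm of the coefficients (an assumption), and omits the split Bregman restoration step entirely.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Two synthetic "source images" standing in for real inputs (assumption).
rng = np.random.default_rng(0)
img_a = rng.random((64, 64))
img_b = rng.random((64, 64))

def to_blocks(img, b=8):
    # Split an image into non-overlapping b x b blocks, one block per row.
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)

def from_blocks(blocks, shape, b=8):
    # Inverse of to_blocks: reassemble rows of pixels into the full image.
    h, w = shape
    return blocks.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

X_a, X_b = to_blocks(img_a), to_blocks(img_b)

# Online (mini-batch) dictionary learning on the pooled blocks of both sources.
dico = MiniBatchDictionaryLearning(n_components=128, batch_size=32, random_state=0)
dico.fit(np.vstack([X_a, X_b]))
D = dico.components_  # shape (128, 64): an overcomplete dictionary

# Block-wise sparse coding of each source over the shared dictionary.
C_a = sparse_encode(X_a, D, algorithm='omp', n_nonzero_coefs=6)
C_b = sparse_encode(X_b, D, algorithm='omp', n_nonzero_coefs=6)

# 'Maximum selection' rule: per block, keep the coefficient vector whose
# l1 activity is larger (activity measure is an assumption).
act_a = np.abs(C_a).sum(axis=1)
act_b = np.abs(C_b).sum(axis=1)
C_max = np.where((act_a >= act_b)[:, None], C_a, C_b)

# 'Weighted mean' rule: blend coefficients in proportion to relative activity.
w = act_a / (act_a + act_b + 1e-12)
C_mean = w[:, None] * C_a + (1 - w[:, None]) * C_b

# Reconstruct the fused image from the fused sparse coefficients.
fused = from_blocks(C_max @ D, img_a.shape)
```

In the paper's full method the reconstruction step would be replaced by split Bregman restoration, which also suppresses noise and blur; the sketch above only shows the sparse-domain fusion logic.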
