Image fusion using multiscale edge-preserving decomposition based on weighted least squares filter

For pixel-level image fusion, the edges of the source images should be integrated into the fused image as faithfully as possible, because the human visual system is sensitive to them. In this study, the authors use a multiscale edge-preserving decomposition (MSEPD) based on the weighted least squares filter to fuse the source images. In the authors' method, each source image is first decomposed by the MSEPD into a base image and a series of detail images. Then, the detail images at the same scale are combined using fusion rules designed for the different kinds of source images, while the base images are combined using the average-value rule. Finally, the fused image is reconstructed by adding the fused base image and the fused detail images together. The proposed method is verified on several kinds of images and compared with other multiscale-decomposition-based methods. The experimental results indicate that the proposed method produces better fused images while exhibiting good edge-preserving performance.
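The pipeline described in the abstract can be sketched in code. The fragment below is a minimal illustration, not the authors' implementation: it uses a WLS smoother in the style of Farbman et al. (solving the sparse linear system (I + L)u = g, with L a gradient-weighted graph Laplacian of the image), builds the base/detail decomposition by repeated smoothing with an increasing smoothing weight, and fuses detail layers with a simple max-absolute rule as a stand-in for the paper's per-source-type rules. All parameter values (`lam`, `alpha`, the number of levels) are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve


def wls_smooth(img, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving smoothing via a weighted least squares (WLS) filter.

    Solves (I + L) u = g, where L is a graph Laplacian whose link weights
    shrink across strong gradients of the (log-domain) guide image, so
    smoothing halts at edges. img: 2-D float array in (0, 1].
    """
    r, c = img.shape
    n = r * c
    log_l = np.log(img + eps)

    # Smoothness weights from the guide's forward differences.
    gx = np.diff(log_l, axis=1)                  # shape (r, c-1)
    gy = np.diff(log_l, axis=0)                  # shape (r-1, c)
    wx = lam / (np.abs(gx) ** alpha + eps)       # link to right neighbour
    wy = lam / (np.abs(gy) ** alpha + eps)       # link to lower neighbour
    wx = np.pad(wx, ((0, 0), (0, 1))).ravel()    # zero links across the border
    wy = np.pad(wy, ((0, 1), (0, 0))).ravel()

    # Assemble the symmetric Laplacian: off-diagonals at +1 (right) and
    # +c (below) in row-major flat indexing, degrees on the diagonal.
    L = sp.diags([-wx[:-1], -wy[:-c]], [1, c], shape=(n, n))
    L = L + L.T
    L = L - sp.diags(np.asarray(L.sum(axis=1)).ravel())
    A = sp.eye(n) + L
    return spsolve(A.tocsc(), img.ravel()).reshape(r, c)


def msepd_fuse(img_a, img_b, levels=3):
    """Two-source fusion sketch: MSEPD via repeated WLS smoothing.

    Detail layers at each scale are fused with a max-absolute rule
    (a common, hypothetical choice here); base layers are averaged,
    and the fused image is the sum of fused base and details.
    """
    base_a, base_b = img_a, img_b
    details, lam = [], 1.0
    for _ in range(levels):
        sa, sb = wls_smooth(base_a, lam), wls_smooth(base_b, lam)
        da, db = base_a - sa, base_b - sb        # detail layer at this scale
        details.append(np.where(np.abs(da) >= np.abs(db), da, db))
        base_a, base_b, lam = sa, sb, lam * 2.0  # coarser scale next round
    fused = 0.5 * (base_a + base_b)              # average rule for the base
    for d in details:
        fused += d
    return fused
```

A useful sanity check on any such decomposition is that it is perfectly invertible: fusing an image with itself reproduces the original, since the base and detail layers telescope back to the input.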

