Enhancing scene perception using a multispectral fusion of visible–near-infrared image pair

In this study, a method for fusing a visible (standard RGB) and near-infrared (NIR) image pair of the same scene to enhance a hazy image is proposed. By combining components from both spectra, the method improves contrast, sharpness and overall scene perception. While a number of applications use NIR images, very few combine RGB and NIR information of the same scene captured by two separate imaging devices. Although NIR images are greyscale in nature, they possess intrinsic properties that are desirable in colour (visible) imagery of many scenes, such as increased contrast between sky and clouds and between shadowed and non-shadowed regions, and greater optical depth. These features are in most cases missed by a standard RGB camera, so fusing such RGB–NIR image pairs is highly beneficial. The fusion scheme is realised using a novel fuzzy clustering algorithm together with the wavelet transform, so that contrast and sharpness are enhanced while the chromaticity of the original RGB image is retained. The proposed technique is demonstrated on various outdoor scenes and yields better results than several recently proposed state-of-the-art techniques.
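
For illustration, the sketch below shows one way such a luminance-only wavelet fusion can be realised, assuming a co-registered RGB–NIR pair of equal size (NumPy and PyWavelets). The fuzzy-clustering weighting of the proposed scheme is replaced here by a simple maximum-magnitude rule on the wavelet detail coefficients; the function name and parameter choices are illustrative, not the authors' implementation.

```python
# Minimal sketch: wavelet-based RGB-NIR luminance fusion that keeps the
# chromaticity of the RGB input. Assumes a registered pair of equal size.
import numpy as np
import pywt


def fuse_rgb_nir(rgb, nir, wavelet="db2"):
    """rgb: H x W x 3 float array in [0, 1]; nir: H x W float array in [0, 1]."""
    # Work in a luminance-chrominance space so only luminance is modified.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 0.5 + 0.564 * (rgb[..., 2] - y)
    cr = 0.5 + 0.713 * (rgb[..., 0] - y)

    # Single-level 2-D DWT of both luminance sources.
    yA, (yH, yV, yD) = pywt.dwt2(y, wavelet)
    nA, (nH, nV, nD) = pywt.dwt2(nir, wavelet)

    # Approximation: average; details: keep the larger-magnitude coefficient
    # (the NIR channel usually carries the sharper, less hazy detail).
    fA = 0.5 * (yA + nA)
    fH = np.where(np.abs(nH) > np.abs(yH), nH, yH)
    fV = np.where(np.abs(nV) > np.abs(yV), nV, yV)
    fD = np.where(np.abs(nD) > np.abs(yD), nD, yD)

    y_fused = pywt.idwt2((fA, (fH, fV, fD)), wavelet)
    y_fused = y_fused[: y.shape[0], : y.shape[1]]  # crop DWT padding

    # Recombine fused luminance with the original chroma and return to RGB.
    r = y_fused + 1.402 * (cr - 0.5)
    g = y_fused - 0.344 * (cb - 0.5) - 0.714 * (cr - 0.5)
    b = y_fused + 1.772 * (cb - 0.5)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

A call such as `fuse_rgb_nir(rgb, nir)` on two aligned float arrays returns an RGB image whose luminance detail is drawn from whichever spectrum is sharper, while the colour of the original RGB frame is preserved.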

Inspec keywords: image fusion; fuzzy systems; image colour analysis; cameras; pattern clustering; wavelet transforms; infrared imaging; image enhancement

Other keywords: NIR image fusion; image enhancement; visible imagery; RGB–NIR image pairs; near-infrared image fusion; standard RGB camera; wavelet transform; multispectral visible–near-infrared image pair fusion; fuzzy clustering algorithm; hazy image enhancement; scene perception enhancement

Subjects: Integral transforms; Computer vision and image processing techniques; Image sensors; Optical, image and video signal processing
