Comprehensive evaluation of image enhancement for unsupervised image description and matching

The performance of an image enhancer is usually evaluated either perceptually or functionally. The perceptual evaluation is carried out from a human perspective, i.e. by considering features related to human perception of image content and details. The functional evaluation is made instead from a machine perspective, i.e. by judging the effects of the enhancer within a specific machine application. This work proposes a comprehensive, empirical evaluation accounting for both perceptual and functional aspects. Precisely, 13 enhancers that reduce undesired illumination effects are considered within the keypoint-based image description and matching task, which is relevant to many computer vision fields. Each enhancer is first evaluated perceptually; then it is employed as a pre-processing step for the popular algorithms SIFT and ORB and judged by measuring how its use influences the performance of these algorithms. This study, conducted on a freely available data set, shows that enhancement generally improves the perceptual features of the input image as well as the SIFT and ORB performance. More importantly, it reveals the existence of a correlation among some of the perceptual and functional measures. In this way, this work contributes to a more aware use of enhancement techniques within the image description and matching task.
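
The functional side of this evaluation can be illustrated with a short sketch: an enhancer is applied to both images of a pair before SIFT and ORB keypoint extraction, and the number of matches surviving Lowe's ratio test serves as a simple functional score. This is only a minimal, hedged example of the pipeline described above, not the authors' protocol: CLAHE stands in for the 13 enhancers considered in the study, the file names are hypothetical, and the ratio-test threshold of 0.75 is an illustrative choice.

```python
# Minimal sketch (assumptions noted above): enhance, detect, describe, match.
import cv2

def enhance(gray):
    # Illustrative enhancer: contrast-limited adaptive histogram equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def count_good_matches(img1, img2, detector, norm, ratio=0.75):
    # Detect keypoints, compute descriptors, and count matches passing Lowe's ratio test.
    _, des1 = detector.detectAndCompute(img1, None)
    _, des2 = detector.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    pairs = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Hypothetical image pair of the same scene seen under different illumination.
a = cv2.imread("scene_view1.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("scene_view2.png", cv2.IMREAD_GRAYSCALE)

for name, det, norm in [("SIFT", cv2.SIFT_create(), cv2.NORM_L2),
                        ("ORB", cv2.ORB_create(), cv2.NORM_HAMMING)]:
    plain = count_good_matches(a, b, det, norm)
    enhanced = count_good_matches(enhance(a), enhance(b), det, norm)
    print(f"{name}: {plain} matches on raw images, {enhanced} after enhancement")
```

In the same spirit, a perceptual score (for instance a no-reference quality or contrast measure) could be computed on the enhanced images and set against the match counts, probing the perceptual–functional correlation discussed above.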
