Perceptual quality evaluation for motion deblurring

Motion deblurring has been widely studied. However, the quality evaluation of motion deblurred images remains an open problem. Motion deblurred images are usually contaminated by noise, ringing and residual blur (NRRB) simultaneously. Unfortunately, most existing quality metrics are not designed for multiply distorted images, so they are limited in predicting the quality of motion deblurred images. In this study, the authors propose a new quality metric for motion deblurred images that measures NRRB. For a motion deblurred image, the noise level is first estimated. The ringing effect is then measured by incorporating a visual saliency model to adapt to the characteristics of the human visual system. A reblurring-based method is proposed to extract similarity features between a motion deblurred image and its reblurred version for evaluating the residual blur. Finally, the overall quality score of a motion deblurred image is obtained by pooling the scores of noise, ringing and blur. Experimental results on a motion deblurring database demonstrate that the proposed metric significantly outperforms existing quality metrics. In addition, the proposed NRRB metric is used to improve existing general-purpose no-reference metrics, and very encouraging results are achieved.
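The abstract describes a three-part pipeline: estimate noise, measure saliency-weighted ringing, assess residual blur by comparing the image against a reblurred copy, then pool the three scores. The sketch below illustrates that structure only; the MAD-based noise estimator, spectral-residual-style saliency proxy, Gaussian reblurring kernel, gradient-similarity cue and equal pooling weights are all generic stand-ins, not the authors' actual formulation.

```python
# Illustrative NRRB-style pipeline. Every estimator and weight here is a
# placeholder assumption; the paper's exact components differ.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace


def estimate_noise(img):
    """Noise level via median absolute deviation of the Laplacian response,
    a common robust surrogate for a dedicated noise-level estimator."""
    residual = laplace(img)
    return np.median(np.abs(residual - np.median(residual))) / 0.6745


def saliency_proxy(img):
    """Crude spectral-residual-style saliency map (stand-in for the
    saliency model used in the paper)."""
    spectrum = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spectrum))
    resid = log_amp - gaussian_filter(log_amp, sigma=3)
    sal = np.abs(np.fft.ifft2(np.exp(resid) * np.exp(1j * np.angle(spectrum)))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    return sal / (sal.max() + 1e-12)


def ringing_score(img):
    """Saliency-weighted high-frequency energy as a stand-in ringing cue:
    oscillations near salient structure are weighted more heavily."""
    high_freq = np.abs(laplace(img))
    w = saliency_proxy(img)
    return float((w * high_freq).sum() / (w.sum() + 1e-12))


def residual_blur_score(img, sigma=1.5):
    """Reblurring cue: a sharp image changes a lot when reblurred, while an
    already-blurry one barely changes, so high gradient similarity between
    the image and its reblurred version indicates residual blur."""
    reblurred = gaussian_filter(img, sigma=sigma)
    gx, gy = np.gradient(img)
    rgx, rgy = np.gradient(reblurred)
    g, rg = np.hypot(gx, gy), np.hypot(rgx, rgy)
    sim = (2 * g * rg + 1e-6) / (g ** 2 + rg ** 2 + 1e-6)
    return float(sim.mean())  # near 1 => little change => more residual blur


def nrrb_quality(img, weights=(1.0, 1.0, 1.0)):
    """Pool the three distortion scores (lower is better with this toy
    weighted sum; the paper uses its own pooling)."""
    n = estimate_noise(img)
    r = ringing_score(img)
    b = residual_blur_score(img)
    return weights[0] * n + weights[1] * r + weights[2] * b


# Example usage on a grayscale float image in [0, 1]:
# img = skimage.io.imread('deblurred.png', as_gray=True).astype(float)
# print(nrrb_quality(img))
```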
