Perceptual quality evaluation for motion deblurring
- Authors: Bo Hu, Leida Li, Jiansheng Qian
- Affiliation: School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, People's Republic of China
- Source: Volume 12, Issue 6, September 2018, pp. 796–805
- DOI: 10.1049/iet-cvi.2017.0478, Print ISSN 1751-9632, Online ISSN 1751-9640
Motion deblurring has been widely studied, but quality evaluation of the resulting images remains an open problem. Motion deblurred images are usually contaminated by noise, ringing and residual blur (NRRB) simultaneously. Unfortunately, most existing quality metrics are not designed for multiply distorted images, so their ability to predict the quality of motion deblurred images is limited. In this study, the authors propose a new quality metric for motion deblurred images that measures NRRB. For a motion deblurred image, the noise level is first estimated. The ringing effect is then measured by incorporating a visual saliency model to adapt to the characteristics of the human visual system. A reblurring-based method is proposed to extract similarity features between a motion deblurred image and its re-blurred version for evaluating the residual blur. Finally, the overall quality score of a motion deblurred image is obtained by pooling the noise, ringing and blur scores. Experiments conducted on a motion deblurring database demonstrate that the proposed metric significantly outperforms existing quality metrics. In addition, the proposed NRRB metric is used to improve existing general-purpose no-reference metrics, with very encouraging results.
Inspec keywords: feature extraction; image restoration; image motion analysis
Other keywords: quality metrics; distorted images; motion deblurring database; re-blurred version; motion deblurred image; human visual system; similarity features; perceptual quality evaluation
Subjects: Computer vision and image processing techniques; Optical, image and video signal processing
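The NRRB pipeline described in the abstract (noise estimation, saliency-weighted ringing measurement, reblurring-based blur similarity, score pooling) can be sketched as below. This is an illustrative outline only, not the paper's actual algorithm: the function names, the simple high-pass noise estimator (the paper uses a PCA-based estimator), the contrast-based saliency proxy (the paper uses established saliency models), and the linear pooling weights are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def estimate_noise(img):
    # Rough noise-level estimate: median absolute deviation of a
    # high-pass residual (a stand-in for the paper's PCA-based estimator).
    h = img - gaussian_filter(img, sigma=1.0)
    return float(np.median(np.abs(h)) / 0.6745)


def saliency_map(img):
    # Crude saliency proxy: deviation from a heavily smoothed version,
    # standing in for the saliency models cited by the paper.
    s = np.abs(img - gaussian_filter(img, sigma=8.0))
    return s / (s.max() + 1e-8)


def ringing_measure(img):
    # Saliency-weighted high-frequency activity as a ringing proxy,
    # reflecting that ringing in salient regions is more visible.
    hf = np.abs(img - gaussian_filter(img, sigma=1.5))
    w = saliency_map(img)
    return float(np.sum(w * hf) / (np.sum(w) + 1e-8))


def blur_similarity(img, sigma=2.0):
    # Reblurring idea: compare gradient magnitudes of the image and its
    # re-blurred version. A sharp image changes a lot under re-blurring
    # (low similarity); an already-blurry one changes little (high similarity).
    reblurred = gaussian_filter(img, sigma=sigma)
    gx, gy = np.gradient(img)
    rgx, rgy = np.gradient(reblurred)
    g1 = np.hypot(gx, gy)
    g2 = np.hypot(rgx, rgy)
    eps = 1e-8
    return float(np.mean((2.0 * g1 * g2 + eps) / (g1**2 + g2**2 + eps)))


def nrrb_score(img, weights=(0.4, 0.3, 0.3)):
    # Hypothetical linear pooling of the three distortion indices;
    # the paper's actual pooling strategy may differ.
    wn, wr, wb = weights
    return wn * estimate_noise(img) + wr * ringing_measure(img) + wb * blur_similarity(img)
```

The similarity term is bounded in (0, 1], so a higher `blur_similarity` indicates more residual blur; the other two terms grow with the amount of noise and ringing, so under this sketch a larger pooled score means a more degraded image.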