FQI: feature-based reduced-reference image quality assessment method for screen content images

In this study, a reduced-reference image-quality-assessment (IQA) method for screen content images, named the feature quality index (FQI), is proposed. The proposed method is based on the fact that the human visual system is more sensitive to changes in features than to changes in intensity or structure. Reduced features are first extracted from the reference and distorted images. To find the features preserved in the distorted image, a feature-matching process with a reduced number of distance calculations, termed the reduced-distance method, is proposed. To reflect the importance of the matched features and their distances, the inner product between the normalised scale and distance vectors is computed. Extensive comparisons are performed on two available benchmark databases, namely SIQAD and QACS, against eight reduced-reference and nine full-reference state-of-the-art IQA techniques to demonstrate the consistency, accuracy, and robustness of the proposed FQI. Evaluation against subjective mean opinion scores shows that FQI outperforms the current state-of-the-art IQA techniques.
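
The abstract describes the pipeline only at a high level. Below is a minimal sketch of the general idea: extract keypoint features from both images, match them to identify the features preserved after distortion, and combine normalised scale and match-distance vectors through an inner product into a single score. The use of OpenCV SIFT, the ratio test, and the particular normalisation are assumptions made for illustration, not the authors' implementation.

import cv2
import numpy as np

def fqi_sketch(ref_gray, dist_gray, ratio=0.75):
    """Toy feature-based quality score: higher means more reference
    features survive in the distorted image with close descriptor matches."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_gray, None)
    kp_d, des_d = sift.detectAndCompute(dist_gray, None)
    if des_r is None or des_d is None or len(kp_r) == 0:
        return 0.0

    # Match reference descriptors against the distorted image to find
    # which reference features are preserved after distortion.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_r, des_d, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if not good:
        return 0.0

    # Importance of each preserved feature: its keypoint scale (size);
    # quality of the match: its descriptor distance (smaller is better).
    scales = np.array([kp_r[m.queryIdx].size for m in good], float)
    dists = np.array([m.distance for m in good], float)

    # Normalise both vectors and combine them with an inner product,
    # weighted by the fraction of reference features that were preserved.
    s = scales / (np.linalg.norm(scales) + 1e-12)
    q = 1.0 / (1.0 + dists)                 # map distance to a similarity in (0, 1]
    q = q / (np.linalg.norm(q) + 1e-12)
    return len(good) / len(kp_r) * float(np.dot(s, q))

For example, score = fqi_sketch(cv2.imread('ref.png', cv2.IMREAD_GRAYSCALE), cv2.imread('dist.png', cv2.IMREAD_GRAYSCALE)) produces a value that decreases as reference features are lost or their descriptors drift; the paper's actual index, matching strategy, and weighting differ in detail.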

Inspec keywords: feature extraction; image matching

Other keywords: feature-quality-index; reduced number; distorted image; screen content images; full-reference IQA techniques; FQI; SIQAD benchmark database; feature matching process; distance calculations; feature-based reduced-reference image quality assessment method; matched features; QACS benchmark database; preserved features; human visual system; reduced-distance method; reduced feature extraction

Subjects: Computer vision and image processing techniques; Image recognition
