Deep-network based method for joint image deblocking and super-resolution

IET Image Processing

Extensive research has been conducted on image-restoration techniques that recover high-quality images from their low-quality versions, but most existing methods address only a single degradation factor. In practice, captured images suffer from several degradation factors simultaneously, such as low resolution and compression distortion introduced during image acquisition, compression, and transmission. Ignoring the correlation between these degradation factors limits the effectiveness of existing image-restoration methods on such images. A joint deep-network-based image-restoration algorithm is proposed to establish a unified framework for image deblocking and super-resolution. The proposed convolutional neural network consists of two stages: a deblocking network is first constructed from two cascaded deblocking subnets, and super-resolution is then performed by a very deep network with skip links. Cascading these two stages forms a novel deep network, and an end-to-end training scheme is developed so that the two stages are trained jointly to achieve better performance. Intensive evaluations have been conducted to measure the performance of the authors' method on both general images and face images. Experimental results on several datasets demonstrate that the proposed method outperforms other state-of-the-art methods in terms of both subjective and objective performance.
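The two-stage cascade described in the abstract can be sketched, at toy scale, in plain NumPy. This is a minimal illustration of the data flow only — residual ("skip-link") deblocking subnets followed by upsampling and a residual super-resolution refinement. The kernels, the single-channel 3×3 convolution, the two-subnet count, and the nearest-neighbour upsampling are illustrative stand-ins; the authors' actual layer configurations and training details are not reproduced here.

```python
import numpy as np

def conv3x3(img, kernel):
    """Toy single-channel 3x3 convolution with zero padding."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def deblock_subnet(img, kernel):
    """One toy deblocking subnet: a conv layer with a residual (skip) connection."""
    return img + conv3x3(img, kernel)

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, standing in for the SR stage's interpolation."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def joint_restore(lr_compressed, k1, k2, k_sr):
    """Cascade the two stages: deblocking first, then super-resolution."""
    # Stage 1: two cascaded deblocking subnets.
    x = deblock_subnet(lr_compressed, k1)
    x = deblock_subnet(x, k2)
    # Stage 2: upsample, then a residual refinement over the upsampled image.
    up = upsample2x(x)
    return up + conv3x3(up, k_sr)
```

In the paper the whole cascade is trained end-to-end, i.e. the loss on the final super-resolved output back-propagates through both stages; here the learned filters are simply passed in as fixed kernels.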

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2018.6113