Image super-resolution using conditional generative adversarial network

IET Image Processing

Recently, extensive studies of generative adversarial networks (GANs) have made great progress in single-image super-resolution (SISR). However, a significant gap remains between the reconstructed high-frequency details and the real ones. To address this issue, this study presents an SISR approach based on a conditional GAN (SRCGAN). SRCGAN comprises a generator network that produces super-resolution (SR) images and a discriminator network trained to distinguish the SR images from ground-truth high-resolution (HR) ones. Specifically, the discriminator uses the ground-truth HR image as a conditional variable, which guides the network in separating real images from SR images and facilitates training a more stable generator than a GAN without this guidance. Furthermore, a residual-learning module is introduced into the generator network to mitigate the loss of detail information in SR images. Finally, the network is trained in an end-to-end manner by optimizing a perceptual loss function. Extensive evaluations on four benchmark datasets, Set5, Set14, BSD100, and Urban100, demonstrate the superiority of the proposed SRCGAN over state-of-the-art methods in terms of PSNR, SSIM, and visual quality.
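The two ideas the abstract describes, conditioning the discriminator on the ground-truth HR image and predicting only a residual in the generator, can be sketched at the tensor level as below. This is a minimal illustrative sketch, not the paper's actual implementation: all array names, shapes, and the identity stand-in for the upsampled base are assumptions.

```python
import numpy as np

# Hypothetical shapes: a batch of 2 RGB images, 32x32 (illustrative only).
B, C, H, W = 2, 3, 32, 32
sr = np.random.rand(B, C, H, W)  # generator output (super-resolved image)
hr = np.random.rand(B, C, H, W)  # ground-truth high-resolution image

# Conditional discriminator input: the HR image acts as the conditional
# variable, concatenated with the candidate along the channel axis, so the
# discriminator judges "real vs. generated" *given* the HR reference.
d_input_fake = np.concatenate([sr, hr], axis=1)  # shape (B, 2C, H, W)
d_input_real = np.concatenate([hr, hr], axis=1)  # shape (B, 2C, H, W)

# Residual learning: the generator predicts only the high-frequency residual
# on top of an upsampled base image, so fine detail is not lost in the
# reconstruction. Here the base and residual are placeholders.
base = sr                                   # stand-in for an upsampled LR image
residual = np.random.rand(B, C, H, W) * 0.01  # stand-in for learned detail
reconstruction = base + residual            # final SR estimate
```

The channel-wise concatenation mirrors how conditional GANs such as pix2pix feed the condition to the discriminator; doubling the channel count (2C) is the only structural change needed relative to an unconditioned discriminator.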

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2018.6570