Fast generative adversarial networks model for masked image restoration

IET Image Processing

Conventional masked image restoration algorithms all exploit the correlation between the masked region and its neighbouring pixels, which does not work well when the masked region is large. Recent research uses a generative adversarial networks (GANs) model to generate better results for large masked regions, but it does not handle complex masked regions well. To obtain better results for complex masked regions, the authors propose a novel fast GANs model for masked image restoration. The method is based on the GANs model and the fast marching method (FMM). The authors train an FMMGAN model consisting of a neighbouring network, a generator network, a discriminator network, and two parsing networks. Extensive experiments on two open datasets show that the proposed model performs well for masked image restoration.
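The "conventional" baseline the abstract contrasts with — filling a hole from its neighbouring known pixels, marching inward from the mask boundary as in Telea's FMM — can be sketched as below. This is an illustrative simplification (plain averaging of known 4-neighbours in breadth-first boundary order, not the authors' FMMGAN and not the full FMM distance-weighted update); the function name and list-of-lists image format are assumptions for the sketch.

```python
from collections import deque

def inpaint_neighbour_fill(img, mask):
    """Fill masked pixels in boundary-inward order, setting each to the
    mean of its already-known 4-neighbours (a crude FMM-like baseline).

    img:  2D list of floats (greyscale intensities)
    mask: 2D list of bools, True where the pixel is missing
    """
    h, w = len(img), len(img[0])
    known = [[not mask[y][x] for x in range(w)] for y in range(h)]
    out = [row[:] for row in img]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    # Seed the queue with masked pixels that touch at least one known pixel,
    # i.e. the boundary of the masked region.
    q = deque()
    for y in range(h):
        for x in range(w):
            if not known[y][x] and any(
                0 <= y + dy < h and 0 <= x + dx < w and known[y + dy][x + dx]
                for dy, dx in nbrs
            ):
                q.append((y, x))

    # March inward: each filled pixel exposes its still-masked neighbours.
    while q:
        y, x = q.popleft()
        if known[y][x]:
            continue  # already filled via another path
        vals = [out[y + dy][x + dx] for dy, dx in nbrs
                if 0 <= y + dy < h and 0 <= x + dx < w and known[y + dy][x + dx]]
        if not vals:
            continue  # re-queued later once a neighbour is filled
        out[y][x] = sum(vals) / len(vals)
        known[y][x] = True
        for dy, dx in nbrs:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not known[ny][nx]:
                q.append((ny, nx))
    return out
```

Because each pixel sees only its immediate neighbours, large holes get filled with smeared averages and complex textures are lost — exactly the failure mode that motivates learning-based approaches such as the GANs models discussed in the abstract.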


