Fast generative adversarial networks model for masked image restoration

Conventional masked image restoration algorithms exploit the correlation between the masked region and its neighbouring pixels, which does not work well for larger masked regions. Recent research uses generative adversarial networks (GANs) to produce better results for larger masked regions, but still performs poorly on complex masked regions. To obtain better results for complex masked regions, the authors propose a novel fast GANs model for masked image restoration, based on the GANs model and the fast marching method (FMM). They train an FMMGAN model consisting of a neighbouring network, a generator network, a discriminator network, and two parsing networks. Extensive experiments on two open datasets show that the proposed model performs well for masked image restoration.
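The coarse stage of such a pipeline can be illustrated with a minimal sketch. The neighbour-averaging fill below is only a simplified stand-in for Telea's FMM (which propagates known values inward from the mask boundary in order of distance), and the GAN refinement stage described in the abstract is omitted entirely; the function and variable names are the author's illustrative choices, not the paper's implementation.

```python
import numpy as np

def coarse_fill(image, mask, iterations=50):
    """Coarsely restore masked pixels by iterative neighbour averaging.

    A simplified stand-in for FMM-style inpainting: values from the
    known region diffuse inward until the masked region is filled.
    `mask` is True where pixels are missing.
    """
    img = image.astype(float).copy()
    for _ in range(iterations):
        # Average the four axis-aligned neighbours of every pixel.
        padded = np.pad(img, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # Update only the masked pixels; known pixels stay fixed.
        img[mask] = avg[mask]
    return img

# Toy example: a constant image with a masked square in the middle.
image = np.full((8, 8), 100.0)
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True
image[mask] = 0.0            # "destroy" the masked region
restored = coarse_fill(image, mask)
```

In a full FMMGAN-style system, this coarse fill would serve as the initial estimate that the generator network then refines, with the discriminator and parsing networks supervising realism and structure.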
