Generative adversarial networks model for visible watermark removal

IET Image Processing

Previous visible watermark removal algorithms required the watermark's location to be known in advance; a removal algorithm was then designed around that location and the watermark's features. If the watermark appears at a random location or at different angles, such algorithms run into trouble. The authors propose a visible watermark removal algorithm based on generative adversarial networks (GANs) and a self-attention mechanism. During training, they introduce a GAN model to build mappings between watermarked images and clean images. The authors observe that the features of the watermarked region are invariant in nature across different watermarked images, while the other regions vary; the self-attention layer automatically focuses on this invariant feature. Experiments on two public datasets show that the model achieves excellent performance. Compared with the four most competitive watermark removal models, the authors improve the watermark removal rate indicator from 17 to 92%, and improve the other four evaluation indicators by up to 20%.
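The self-attention layer the abstract describes can be sketched in NumPy. This is a minimal illustration of the common SAGAN-style formulation (query/key/value projections over spatial positions, blended into a residual connection by a learned scalar), not the authors' exact implementation; the weight matrices `wq`, `wk`, `wv` and the blend factor `gamma` are illustrative placeholders.

```python
import numpy as np

def self_attention(x, wq, wk, wv, gamma):
    """Apply spatial self-attention to a feature map x of shape (c, h, w).

    wq, wk: (c8, c) query/key projections into a reduced channel space.
    wv:     (c, c)  value projection.
    gamma:  scalar blending the attended output with the input (residual).
    """
    c, h, w = x.shape
    feat = x.reshape(c, h * w)                    # flatten spatial dims: (c, n)
    q = wq @ feat                                 # queries:  (c8, n)
    k = wk @ feat                                 # keys:     (c8, n)
    v = wv @ feat                                 # values:   (c, n)
    scores = q.T @ k                              # (n, n) position-to-position similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # each row sums to 1
    out = v @ attn.T                              # every position aggregates all others
    return (gamma * out + feat).reshape(c, h, w)  # residual blend back to (c, h, w)

# Usage sketch with random feature maps and projections:
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8, 8))
wq, wk = rng.normal(size=(2, 16)), rng.normal(size=(2, 16))
wv = rng.normal(size=(16, 16))
y = self_attention(x, wq, wk, wv, gamma=0.5)      # same shape as x
```

With `gamma` initialised to zero (as in SAGAN), the layer starts as an identity mapping and learns how much global, position-independent context to mix in, which is what lets it latch onto the invariant watermark features regardless of where they appear.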

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2019.0266