Two-stage visible watermark removal architecture based on deep learning

With the rapid development of the Internet, watermarks are widely used in images to protect copyright, which makes the robustness of watermarks very important. In recent years, several studies have evaluated watermark performance by attempting to remove the watermark. Among them, some methods need the watermark position to be marked in advance, and others require multiple images bearing the same watermark. Moreover, when the colour of the watermark is similar to that of the background, existing methods can hardly remove the watermark from the watermarked image. In this work, the authors present a watermark removal architecture consisting of watermark extraction and image inpainting to address these issues. Specifically, the extraction network extracts the watermark from the watermarked image, and the inpainting network then inpaints the affected region to produce a better watermark-removed image. Finally, the authors train and test the proposed architecture on two constructed data sets: a white watermarked image data set (WW-data set) and a colour watermarked image data set (CW-data set). The proposed method not only outperforms current state-of-the-art methods on the WW-data set but also effectively removes watermarks on the CW-data set, where the other methods almost completely fail.
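
The abstract only outlines the two-stage idea (an extraction network followed by an inpainting network); the following PyTorch sketch illustrates how such a pipeline could be wired together. The class names, layer sizes, and conditioning scheme are assumptions for illustration, not the authors' actual networks or losses.

```python
# Minimal sketch of a two-stage watermark removal pipeline (hypothetical
# architecture): stage 1 estimates the watermark layer, stage 2 inpaints
# the image conditioned on both the watermarked input and that estimate.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Generic 3x3 convolution + ReLU building block (assumed, not from the paper).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class ExtractionNet(nn.Module):
    """Stage 1 (hypothetical): predict the watermark layer from the watermarked image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(3, 64),
            conv_block(64, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # predicted watermark (RGB)
        )

    def forward(self, watermarked):
        return torch.sigmoid(self.body(watermarked))


class InpaintingNet(nn.Module):
    """Stage 2 (hypothetical): restore the background given the image and the extracted watermark."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(6, 64),
            conv_block(64, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # restored, watermark-free image
        )

    def forward(self, watermarked, watermark):
        x = torch.cat([watermarked, watermark], dim=1)  # condition on both inputs
        return torch.sigmoid(self.body(x))


# Two-stage inference on a dummy input: extract first, then inpaint.
extractor, inpainter = ExtractionNet(), InpaintingNet()
watermarked = torch.rand(1, 3, 256, 256)      # placeholder watermarked image
watermark = extractor(watermarked)            # stage 1: watermark estimate
restored = inpainter(watermarked, watermark)  # stage 2: watermark-removed output
```

In practice each stage would be a deeper encoder-decoder trained with reconstruction and adversarial objectives; the point of the sketch is only the data flow between the two stages.
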

Inspec keywords: image watermarking; copyright; image coding; image colour analysis

Other keywords: image inpainting; watermarked image; watermark removal image; two-stage visible watermark removal architecture; watermark removal structure; watermark position; watermark performance; watermark extraction

Subjects: Image and video coding; Data security; Computer vision and image processing techniques
