
Face, hairstyle and clothing colour de-identification in video sequences

The authors introduce a system for person de-identification in video data that de-identifies both biometric and non-biometric features, namely faces, hairstyles and clothing colours. The system detects human faces and silhouettes in the input video and replaces the detected faces with randomly synthesised faces generated by a deep convolutional generative adversarial network. Alternative hairstyles are rendered over the synthesised faces, and the human silhouette is recoloured so that skin hues are preserved while clothing hues are altered. By using artificially synthesised faces that look realistic, the authors ensure that the de-identified image looks natural, while avoiding the ethical and legal concerns that arise when real face images are used as replacements. Because it also addresses non-biometric features, the system offers considerably stronger privacy protection than commonly employed solutions based on simple image processing techniques such as blurring. Qualitative and quantitative evaluation suggests that the system produces de-identified images that look natural while remaining resistant to re-identification attacks.
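The silhouette-recolouring step described above — altering clothing hues while leaving skin hues intact — can be illustrated with a minimal sketch. The function name, interface, and `hue_shift` parameter here are hypothetical; the actual system operates on segmented silhouettes with a dedicated skin-pixel classifier, which this sketch stands in for with a precomputed boolean mask.

```python
import colorsys

def recolour_clothing(pixels, skin_mask, hue_shift=0.3):
    """Rotate the hue of non-skin pixels while preserving skin hues.

    pixels: list of rows of (r, g, b) tuples with channels in [0, 1].
    skin_mask: same shape, True where a pixel was classified as skin.
    hue_shift: fraction of the hue circle to rotate (hypothetical parameter).
    """
    out = []
    for row, mask_row in zip(pixels, skin_mask):
        new_row = []
        for (r, g, b), is_skin in zip(row, mask_row):
            if is_skin:
                # skin pixel: hue preserved, appended unchanged
                new_row.append((r, g, b))
            else:
                h, s, v = colorsys.rgb_to_hsv(r, g, b)
                # rotate hue only; saturation and value keep the shading,
                # so recoloured clothing still looks naturally shaded
                new_row.append(colorsys.hsv_to_rgb((h + hue_shift) % 1.0, s, v))
        out.append(new_row)
    return out
```

Keeping saturation and value fixed while rotating only the hue is what lets the recoloured silhouette retain its original shading and texture, so the result looks natural rather than flat.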


