Adversarial auto-encoder for unsupervised deep domain adaptation

Unsupervised visual domain adaptation aims to train a classifier that performs well on a target domain, given labelled source samples and unlabelled target samples. The key issue is how to align features between the source and target domains. Inspired by the adversarial learning used in generative adversarial networks, this study proposes a novel adversarial auto-encoder for unsupervised deep domain adaptation. The method couples an auto-encoder with adversarial learning, so that the domain-similarity and reconstruction signals from the decoder can be exploited to guide the adversarial domain adaptation in the encoder. Extensive experiments on various visual recognition tasks show that the proposed method performs on par with or better than competitive state-of-the-art methods.
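The abstract describes a training objective that combines three signals: a supervised classification loss on labelled source samples, a reconstruction loss from the decoder, and an adversarial domain-confusion loss on the encoder. The exact formulation is not given here, so the following NumPy sketch is only illustrative of how such a composite objective is typically assembled; all function names and the trade-off weights `lam_rec` and `lam_adv` are assumptions, not the paper's notation.

```python
import numpy as np

def mse_reconstruction_loss(x, x_hat):
    """Decoder term: mean squared error between input and its reconstruction."""
    return np.mean((x - x_hat) ** 2)

def cross_entropy_loss(probs, labels):
    """Classifier term on labelled source samples (probs: N x C, labels: N)."""
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def adversarial_domain_loss(d_src, d_tgt):
    """Domain-discriminator term: source features labelled 1, target 0.
    The encoder is trained against this loss, pushing the two feature
    distributions to become indistinguishable."""
    eps = 1e-12
    return -np.mean(np.log(d_src + eps)) - np.mean(np.log(1.0 - d_tgt + eps))

def total_loss(x_src, x_src_hat, probs_src, y_src, d_src, d_tgt,
               lam_rec=1.0, lam_adv=0.1):
    # Weighted sum of the three terms; the weights are illustrative
    # hyperparameters, not values reported by the paper.
    return (cross_entropy_loss(probs_src, y_src)
            + lam_rec * mse_reconstruction_loss(x_src, x_src_hat)
            + lam_adv * adversarial_domain_loss(d_src, d_tgt))
```

In a full implementation the encoder, decoder, classifier, and discriminator would be neural networks optimised in the usual adversarial min-max fashion; the sketch only shows how the decoder's reconstruction signal enters the same objective as the adversarial alignment term.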

Inspec keywords: pattern classification; unsupervised learning; learning (artificial intelligence); image classification

Other keywords: adversarial domain adaptation; novel adversarial auto-encoder; reconstruction information; unsupervised deep domain adaptation; unlabelled target samples; unsupervised visual domain adaptation; target domain; domain similarity; target domains; generative adversarial networks; labelled source samples

Subjects: Computer vision and image processing techniques; Image recognition; Other topics in statistics; Knowledge engineering techniques

References

    1. Torralba, A., Efros, A.A.: 'Unbiased look at dataset bias'. IEEE Conf. Computer Vision and Pattern Recognition, Colorado Springs, USA, 2011
    2. Zhou, J.T., Tsang, I.W., Pan, S.J., et al: 'Multi-class heterogeneous domain adaptation', J. Mach. Learn. Res., 2019, 20, (57), pp. 1–31
    3. Tzeng, E., Hoffman, J., Zhang, N., et al: 'Deep domain confusion: maximizing for domain invariance', arXiv preprint arXiv:1412.3474, 2014
    4. Long, M., Cao, Y., Wang, J., et al: 'Learning transferable features with deep adaptation networks'. Int. Conf. Machine Learning, Lille, France, 2015
    5. Gholami, B., Rudovic, O., Pavlovic, V.: 'PUnDA: probabilistic unsupervised domain adaptation for knowledge transfer across visual categories'. IEEE Int. Conf. Computer Vision, Venice, Italy, 2017
    6. Sun, B., Feng, J., Saenko, K.: 'Return of frustratingly easy domain adaptation'. AAAI Conf. Artificial Intelligence, Phoenix, USA, 2016
    7. Gong, B., Shi, Y., Sha, F., et al: 'Geodesic flow kernel for unsupervised domain adaptation'. IEEE Conf. Computer Vision and Pattern Recognition, Rhode Island, USA, 2012
    8. Herath, S., Harandi, M., Porikli, F.: 'Learning an invariant Hilbert space for domain adaptation'. IEEE Conf. Computer Vision and Pattern Recognition, Hawaii, USA, 2017
    9. Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al: 'Generative adversarial nets'. Advances in Neural Information Processing Systems, Montreal, Canada, 2014
    10. Tzeng, E., Hoffman, J., Saenko, K., et al: 'Adversarial discriminative domain adaptation'. IEEE Conf. Computer Vision and Pattern Recognition, Hawaii, USA, 2017
    11. Bousmalis, K., Silberman, N., Dohan, D., et al: 'Unsupervised pixel-level domain adaptation with generative adversarial networks'. IEEE Conf. Computer Vision and Pattern Recognition, Hawaii, USA, 2017
    12. Liu, M.-Y., Tuzel, O.: 'Coupled generative adversarial networks'. Advances in Neural Information Processing Systems, Barcelona, Spain, 2016
    13. Choi, E., Lee, C.: 'Feature extraction based on the Bhattacharyya distance', Pattern Recognit., 2003, 36, pp. 1703–1709
    14. Ganin, Y., Ustinova, E., Ajakan, H., et al: 'Domain-adversarial training of neural networks', J. Mach. Learn. Res., 2016, 17, (1), pp. 2030–2096
    15. Ghifary, M., Kleijn, W.B., Zhang, M., et al: 'Deep reconstruction-classification networks for unsupervised domain adaptation'. European Conf. Computer Vision, Amsterdam, The Netherlands, 2016
    16. Bousmalis, K., Trigeorgis, G., Silberman, N., et al: 'Domain separation networks'. Advances in Neural Information Processing Systems, Barcelona, Spain, 2016
    17. LeCun, Y., Bottou, L., Bengio, Y., et al: 'Gradient-based learning applied to document recognition', Proc. IEEE, 1998, 86, (11), pp. 2278–2324
    18. Denker, J.S., Gardner, W.R., Graf, H.P., et al: 'Neural network recognizer for hand-written zip code digits'. Advances in Neural Information Processing Systems, Denver, USA, 1989
    19. Netzer, Y., Wang, T., Coates, A., et al: 'Reading digits in natural images with unsupervised feature learning'. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, 2011
    20. Radford, A., Metz, L., Chintala, S.: 'Unsupervised representation learning with deep convolutional generative adversarial networks', arXiv preprint arXiv:1511.06434, 2015
    21. Kingma, D., Ba, J.: 'Adam: a method for stochastic optimization', arXiv preprint arXiv:1412.6980, 2014
    22. Chen, Q., Liu, Y., Wang, Z., et al: 'Re-weighted adversarial adaptation network for unsupervised domain adaptation'. Proc. IEEE Conf. Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018
    23. Fernando, B., Habrard, A., Sebban, M., et al: 'Unsupervised visual domain adaptation using subspace alignment'. Proc. IEEE Int. Conf. Computer Vision, Sydney, Australia, 2013
    24. Girshick, R., Donahue, J., Darrell, T., et al: 'Region-based convolutional networks for accurate object detection and segmentation', IEEE Trans. Pattern Anal. Mach. Intell., 2015, 38, (1), pp. 142–158
http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2018.6687