Common-specific feature learning for multi-source domain adaptation

Multi-source domain adaptation (MDA) aims to leverage knowledge from multiple source domains to improve classification performance on target domains. Differing degrees of distribution discrepancy between each pair of domains pose a major challenge for MDA tasks. Most existing works focus on extracting features shared by all domains, which is necessary but not sufficient to reduce these discrepancies. In this paper, we propose a method named common-specific feature learning (CSFL). As a feature-learning framework, CSFL explores a subspace in which the combination of common and domain-specific features makes the learned representations comprehensive. Within this framework, we apply a metric learning method to obtain a discriminative feature representation. Because redundant information introduced by multiple source domains is likely to hurt performance, we impose an effective low-rank constraint to remove it. Further, we adopt a structure-consistency constraint to preserve the local structure within each domain. CSFL obtains roughly a 1–5% improvement in mean accuracy over state-of-the-art shallow methods. Compared with the best baseline deep method's 90.2% and 89.4%, CSFL achieves mean accuracies of 90.8% and 89.7% on the Office-31 and ImageCLEF-DA datasets, respectively. These encouraging results validate the effectiveness of our method.
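The objective described above can be sketched in simplified form. This is a minimal illustration, not the paper's actual formulation: it assumes a single shared linear projection `P`, uses a linear-kernel MMD between domain means as the alignment term, the nuclear norm as the low-rank penalty, and a graph-Laplacian trace as the structure-consistency term. All function names and weights (`alpha`, `beta`) are hypothetical.

```python
import numpy as np

def mmd(Xs, Xt):
    # Maximum Mean Discrepancy with a linear kernel: squared distance
    # between the mean embeddings of two domains
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def nuclear_norm(P):
    # Sum of singular values: a convex surrogate for rank, standing in
    # for the low-rank constraint that suppresses redundant information
    return float(np.linalg.svd(P, compute_uv=False).sum())

def laplacian_penalty(Z, W):
    # Structure-consistency term tr(Z^T L Z) with graph Laplacian L = D - W;
    # small when samples that are neighbours in the original domain
    # (large W_ij) remain close after projection
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(Z.T @ L @ Z))

def objective(P, sources, Xt, W_list, alpha=1.0, beta=0.1):
    # Hypothetical combined loss: align each projected source domain with
    # the projected target, keep the projection low-rank, and preserve
    # each source domain's local geometric structure
    align = sum(mmd(Xs @ P, Xt @ P) for Xs in sources)
    structure = sum(laplacian_penalty(Xs @ P, W)
                    for Xs, W in zip(sources, W_list))
    return align + alpha * nuclear_norm(P) + beta * structure
```

In a real optimisation, the nuclear-norm term would typically be handled via singular-value soft-thresholding rather than evaluated directly, and the affinity matrices `W` would come from a k-nearest-neighbour graph per domain; both details are omitted here for brevity.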

Inspec keywords: learning (artificial intelligence); feature extraction; image classification

Other keywords: CSFL; multi-source domain adaptation method; multi-source domain adaptation tasks; target domains; multiple source domains; common-specific feature learning; distribution discrepancies; common features

Subjects: Computer vision and image processing techniques; Knowledge engineering techniques; Other topics in statistics

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2019.1712