© The Institution of Engineering and Technology
Multi-source domain adaptation (MDA) aims to leverage knowledge from multiple source domains to improve classification performance on target domains. Differing degrees of distribution discrepancy between each pair of domains pose a major challenge for MDA tasks. Most existing works focus on extracting features shared by all domains, which is critical but not sufficient to reduce distribution discrepancies. In this paper, we propose a method named common-specific feature learning (CSFL). Within a feature-learning framework, CSFL explores a subspace in which the combination of common and specific features makes the learned representations comprehensive. On top of this framework, we apply a metric-learning method to learn a discriminative feature representation. Since redundant information introduced by the source domains is likely to hurt performance, we impose an effective low-rank constraint to remove it. Further, we adopt a structure-consistency constraint to preserve the local structure within each domain. CSFL obtains roughly a 1–5% improvement in mean accuracy over state-of-the-art shallow methods. Moreover, compared with the 90.2% and 89.4% of the best deep baseline, CSFL achieves mean accuracies of 90.8% and 89.7% on the Office-31 and ImageCLEF-DA datasets, respectively. These encouraging results validate the effectiveness of our method.
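The distribution discrepancies the abstract refers to are typically quantified with a kernel two-sample statistic such as maximum mean discrepancy (MMD). The following is a minimal numpy sketch of the squared MMD with an RBF kernel, purely illustrative of the discrepancy measure; it does not reproduce the CSFL objective, and all names (`rbf_mmd2`, `gamma`) are our own.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between sample sets X and Y
    under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel matrix.
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    # Biased MMD^2 estimate: mean within-set similarity minus cross-set similarity.
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 5))   # toy "source" samples
tgt = rng.normal(2.0, 1.0, (200, 5))   # mean-shifted "target" samples
same = rng.normal(0.0, 1.0, (200, 5))  # samples from the source distribution

print(rbf_mmd2(src, tgt))   # large: the distributions differ
print(rbf_mmd2(src, same))  # near zero: same underlying distribution
```

A domain-adaptation objective would minimise such a discrepancy between projected source and target features, alongside the common-specific, low-rank, and structure-consistency terms the abstract describes.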