Minimising disparity in distribution for unsupervised domain adaptation by preserving the local spatial arrangement of data

Domain adaptation is used for machine learning tasks when the distribution of the training set (obtained from the source domain) differs from that of the testing set (referred to as the target domain). In this study, the problem of unsupervised domain adaptation is solved using a novel optimisation function that minimises the global and local discrepancies between the transformed source domain and the target domain. The dissimilarity in data distributions is the major contributor to the global discrepancy between the two domains. The authors propose two techniques to preserve the local structural information of the source domain: (i) identify the closest pairs of instances in the source domain and minimise the distances between these pairs after transformation; (ii) preserve the naturally occurring clusters of the source domain during transformation. The cost function and constraints together yield a non-linear optimisation problem, which is used to estimate the transformation (weight) matrix. An iterative framework solves the optimisation problem, providing a sub-optimal solution. Next, using an orthogonality constraint, an optimisation task is formulated on the Stiefel manifold. Performance analysis using real-world datasets shows that the proposed methods perform better than several recently published state-of-the-art methods.
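The abstract only outlines the cost function; the full formulation appears in the paper itself. The Python sketch below is therefore illustrative rather than the authors' algorithm: it substitutes a simple mean-matching term for the global distribution discrepancy, uses a nearest-pair term for the local structure constraint (the cluster-preservation term is omitted for brevity), and handles the Stiefel-manifold orthogonality constraint with a QR retraction inside a projected-gradient loop. All function names, the loss weight lam, the learning rate, and the update scheme are assumptions made for this sketch.

import numpy as np

def global_discrepancy(Xs, Xt, W):
    # Squared distance between the mean of the transformed source data and
    # the mean of the target data -- a simple stand-in for a
    # distribution-mismatch (global) term.
    diff = Xs.mean(axis=0) @ W - Xt.mean(axis=0)
    return float(diff @ diff)

def local_discrepancy(Xs, W, pairs):
    # Sum of squared distances between the closest source-domain pairs after
    # transformation, encouraging the local spatial arrangement to survive.
    # 'pairs' is an integer array of shape (P, 2) holding the index of each
    # source instance and its nearest neighbour.
    D = Xs[pairs[:, 0]] - Xs[pairs[:, 1]]
    return float(np.sum((D @ W) ** 2))

def retract_to_stiefel(W):
    # Map an arbitrary square matrix back onto the set of orthogonal matrices
    # (a Stiefel manifold) via a QR decomposition.
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))

def fit_transform(Xs, Xt, pairs, lam=0.1, lr=1e-2, iters=200):
    # Toy projected-gradient loop: take a gradient step on the combined cost,
    # then retract the weight matrix onto the Stiefel manifold.
    d = Xs.shape[1]
    W = np.eye(d)
    D = Xs[pairs[:, 0]] - Xs[pairs[:, 1]]
    m_s, m_t = Xs.mean(axis=0), Xt.mean(axis=0)
    for _ in range(iters):
        g_global = 2.0 * np.outer(m_s, m_s @ W - m_t)   # gradient of the mean-matching term
        g_local = 2.0 * lam * D.T @ (D @ W)             # gradient of the pair-preservation term
        W = retract_to_stiefel(W - lr * (g_global + g_local))
    return Xs @ W                                        # transformed source data

The sketch only shows the overall shape of the computation: a composite cost (global plus weighted local term) minimised over an orthogonal transformation, which mirrors the iterative, orthogonality-constrained scheme described above without reproducing the authors' exact objective.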

References

    1. 1)
      • 1. Pan, S.J., Tsang, I.W., Kwok, J.T., et al: ‘Domain adaptation via transfer component analysis’, IEEE Trans. Neural Netw., 2011, 22, (2), pp. 199210.
    2. 2)
      • 2. Lichman, M.: UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]., Irvine, CA: University of California, School of Information and Computer Science2013.
    3. 3)
      • 3. Saenko, K., Kulis, B., Fritz, M., et al: ‘Adapting visual category models to new domains’. European Conf. on Computer vision, 2010.
    4. 4)
      • 4. Shi, X., Fan, W., Ren, J.: ‘Actively transfer domain knowledge’. European Conf. on Machine Learning, 2008.
    5. 5)
      • 5. Pan, S.J., Yang, Q.: ‘A survey on transfer learning’, IEEE Trans. Knowl. Data Eng., 2010, 22, pp. 13451359.
    6. 6)
      • 6. Beijbom, O.: ‘Domain adaptation for computer vision applications’, Technical Report, University of California, San Diego, 2012.
    7. 7)
      • 7. Jiang, W., Zavesky, E., Chang, S., et al: ‘Cross-domain learning methods for high-level visual concept classification’. Int. Conf. on Image Processing, 2008.
    8. 8)
      • 8. Yang, J., Yan, R., Hauptmann, A.G.: ‘Cross-domain video concept detection using adaptive SVMs’. Int. Conf. on Multimedia, 2007.
    9. 9)
      • 9. Aytar, Y., Zisserman, A.: ‘Tabula rasa: model transfer for object category detection’. Int. Conf. on Computer Vision, 2011.
    10. 10)
      • 10. Yeh, Y., Huang, C., Wang, Y.-C.F.: ‘Heterogeneous domain adaptation and classification by exploiting the correlation subspace’, IEEE Trans. Image Process., 2014, 23, (5), pp. 20092018.
    11. 11)
      • 11. Zen, G., Sangineto, E., Ricci, E., et al: ‘Unsupervised domain adaptation for personalized facial emotion recognition’. Int. Conf. on Multimodal Interaction, 2014, pp. 128135.
    12. 12)
      • 12. Torresani, L., Bergamo, A.: ‘Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach’. Neural Information Processing Systems (NIPS), 2010, pp. 181189.
    13. 13)
      • 13. Bruzzone, L., Marconcini, M.: ‘Domain adaptation problems: a dasvm classification technique and a circular validation strategy’, IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32, (5), pp. 770787.
    14. 14)
      • 14. Ma, A.J., Yuen, P.C., Li, J.: ‘Domain transfer support vector ranking for person re-identification without target camera label information’. IEEE Int. Conf. on Computer Vision, 2013, pp. 35673574.
    15. 15)
      • 15. Tao, J., Chung, F.-L., Wang, S.: ‘On minimum distribution discrepancy support vector machine for domain adaptation’, Pattern Recognit., 2012, 45, (11), pp. 39623984.
    16. 16)
      • 16. Sugiyama, M., Nakajima, S., Kashima, H., et al: ‘Direct importance estimation with model selection and its application to covariate shift adaptation’. Proc. of Neural Information Processing Systems, 2007.
    17. 17)
      • 17. Gopalan, R., Li, R., Chellappa, R.: ‘Domain adaptation for object recognition: an unsupervised approach’. Int. Conf. in Computer Vision, 2011.
    18. 18)
      • 18. Kulis, B., Saenko, K., Darrell, T.: ‘What you saw is not what you get: domain adaptation using asymmetric kernel transforms’. IEEE Conf. on Computer Vision and Pattern Recognition, 2011.
    19. 19)
      • 19. Gopalan, R., Li, R., Chellappa, R.: ‘Unsupervised adaptation across domain shifts by generating intermediate data representations’, IEEE Trans. Pattern Anal. Mach. Intell., 2014, 36, (11), pp. 22882302.
    20. 20)
      • 20. Chattopadhyay, R., Chatapuram Krishnan, N., Panchanathan, S.: ‘Topology preserving domain adaptation for addressing subject based variability in semg signal’. AAAI Spring Symp.: Computational Physiology, 2011.
    21. 21)
      • 21. Jhuo, I.H., Liu, D., Lee, D.T., et al: ‘Robust visual domain adaptation with low-rank reconstruction’. IEEE Conf. on Computer Vision and Pattern Recognition, 2012.
    22. 22)
      • 22. Gong, B., Shi, Y., Sha, F., et al: ‘Geodesic flow kernel for unsupervised domain adaptation’. IEEE Conf. on Computer Vision and Pattern Recognition, 2012.
    23. 23)
      • 23. Samanta, S., Das, S.: ‘Domain adaptation based on eigen-analysis and clustering, for object categorization’. Int. Conf. on Computer Analysis of Images and Patterns., 2013, LNCS.
    24. 24)
      • 24. Duan, L., Xu, D., Tsang, I.W., et al: ‘Visual event recognition in videos by learning from web data’, IEEE Trans. Pattern Anal. Mach. Intell., 2012, 34, (9), pp. 16671680.
    25. 25)
      • 25. Fernando, B., Habrard, A., Sebban, M., et al: ‘Unsupervised visual domain adaptation using subspace alignment’. Int. Conf. in Computer Vision, 2013.
    26. 26)
      • 26. Baktashmotlagh, M., Harandi, M.T., Lovell, B.C., et al: ‘Unsupervised domain adaptation by domain invariant projection’. Int. Conf. on Computer Vision, 2013.
    27. 27)
      • 27. Pezeshki, A., Scharf, L.L., Chong, E.K.P.: ‘The geometry of linearly and quadratically constrained optimization problems for signal processing and communications’, J. Franklin Inst., 2010, 347, (5), pp. 818835.
    28. 28)
      • 28. Wan, C., Pan, R., Li, J.: ‘Bi-weighting domain adaptation for cross-language text classification’. Int. Joint Conf. on Artificial Intelligence, 2011.
    29. 29)
      • 29. Supplementary Material: “http://www.cse.iitm.ac.in/~vplab/SUPPLE_PAGES-DA-IET_CV.pdf.
    30. 30)
      • 30. Absil, P.-A., Mahony, R., Sepulchre, R.: ‘Optimization algorithms on matrix manifolds’ (Princeton University Press, 2008).
    31. 31)
      • 31. Wen, Z., Yin, W.: ‘A feasible method for optimization with orthogonality constraints.’, Math. Program., 2013, 142, (1–2), pp. 397434.
    32. 32)
      • 32. Tagare, H.D.: ‘Notes on optimization on Stiefel manifolds’, Technical Report, Department of Diagnostic Radiology, Department of Biomedical Engineering, Yale University, 2011.
    33. 33)
      • 33. Löfberg, J.: ‘YALMIP: a toolbox for modeling and optimization in MATLAB’. Proc. of the CACSD Conf., Taiwan Taipei, 2004.
    34. 34)
      • 34. Bay, H., Ess, A., Tuytelaars, T., et al: ‘Speeded-up robust features (SURF)’, Comput. Vis. Image Underst., 2008, 110, (3), pp. 346359.
    35. 35)
      • 35. Torresani, L., Szummer, M., Fitzgibbon, A.: ‘Efficient object category recognition using classemes’. European Conf. on Computer Vision, 2010, pp. 776789.