Concepedia

Publication | Open Access

Unsupervised Domain Adaptation with Residual Transfer Networks

Citations: 1K · References: 26 · Year: 2016

TLDR

Deep neural networks rely on large labeled datasets, and domain adaptation allows transfer learning when target labels are unavailable. The study proposes a method that jointly learns adaptive classifiers and transferable features using labeled source data and unlabeled target data. The approach introduces residual layers to model the difference between source and target classifiers, fuses multi‑layer features with tensor products into an RKHS for distribution matching, and trains the extended network end‑to‑end via back‑propagation. Experiments show that the proposed residual transfer network outperforms existing state‑of‑the‑art domain adaptation methods on benchmark datasets.
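The core idea above, that the source and target classifiers differ only by a learnable residual function, can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' implementation: the linear classifier and the two-layer residual block (weights `W`, `W1`, `W2`) are hypothetical stand-ins for the paper's network layers.

```python
import numpy as np

def target_classifier(x, W):
    # hypothetical linear target classifier f_T(x) = x W
    return x @ W

def residual_function(fT, W1, W2):
    # residual correction learned by two small layers on top of f_T;
    # the block sizes here are illustrative, not the paper's architecture
    h = np.maximum(0.0, fT @ W1)  # ReLU activation
    return h @ W2

def source_classifier(x, W, W1, W2):
    # f_S(x) = f_T(x) + delta_f(f_T(x)): the source classifier deviates
    # from the target classifier only by the learned residual function
    fT = target_classifier(x, W)
    return fT + residual_function(fT, W1, W2)
```

Note that when the residual weights are zero the two classifiers coincide, which recovers the shared-classifier assumption of earlier methods as a special case.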

Abstract

The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax the shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state-of-the-art methods on standard domain adaptation benchmarks.
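The feature-adaptation step described in the abstract, fusing multi-layer features with a tensor product and matching source and target distributions in a reproducing kernel Hilbert space, can be sketched as follows. This is a minimal illustration assuming a Gaussian kernel and a biased MMD estimate; the function names (`fuse`, `mmd2`) and the kernel bandwidth are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def fuse(f1, f2):
    # tensor-product fusion: z_i = vec(f1_i outer f2_i) for each example i,
    # combining the features of two network layers into one representation
    return np.einsum('ni,nj->nij', f1, f2).reshape(len(f1), -1)

def gaussian_gram(a, b, gamma=0.5):
    # Gaussian RBF kernel matrix between the rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(src, tgt, gamma=0.5):
    # biased estimate of squared maximum mean discrepancy (MMD) in the
    # RKHS induced by the kernel; used as a distribution-matching loss
    return (gaussian_gram(src, src, gamma).mean()
            + gaussian_gram(tgt, tgt, gamma).mean()
            - 2.0 * gaussian_gram(src, tgt, gamma).mean())
```

In training, a penalty of this form on the fused source and target features would be added to the classification loss, so that minimizing the total objective by back-propagation aligns the two feature distributions.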
