Unsupervised domain adaptation by subspace alignment, and more
Rémi Emonet, Associate Professor (maître de conférences) at Laboratoire Hubert Curien, Saint-Etienne, France
Abstract: The first part of this talk focuses on unsupervised domain adaptation in machine learning. More often than not, the distribution of the training data differs from the distribution of the test data (on which the learned model will be used). Unfortunately, most machine learning methods assume that these distributions are identical (or very close). Unsupervised domain adaptation is a way of tackling this issue: it aims to transfer a model, learned on a source domain, to a target domain where no labels are available. In this talk, I will present two unsupervised domain adaptation methods that are simple and effective. The first one extracts a subspace from each domain (source and target) and learns an alignment between these subspaces; the overall procedure is simple and robust. The second method additionally handles non-linearities by projecting all points according to some "landmarks" that are selected to make the source and target distributions as close as possible.
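As a rough illustration (not the speaker's actual code), the first method's pipeline can be sketched with numpy: take a PCA basis of each domain and align the source basis onto the target one. The closed-form alignment `M = Bs.T @ Bt` and the subspace dimension `k` are assumptions based on the description above; all data here is synthetic.

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions of X as an orthonormal (d, k) matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def subspace_alignment(Xs, Xt, k):
    """Sketch of subspace alignment between a source and a target domain.

    Each domain gets its own k-dimensional PCA subspace; the source basis
    is then mapped toward the target basis with the alignment M = Bs^T Bt
    (an assumption here, meant to minimize the Frobenius distance between
    the aligned source basis and the target basis).
    """
    Bs = pca_basis(Xs, k)          # source subspace, shape (d, k)
    Bt = pca_basis(Xt, k)          # target subspace, shape (d, k)
    M = Bs.T @ Bt                  # alignment between the two subspaces
    Zs = Xs @ Bs @ M               # source points in aligned coordinates
    Zt = Xt @ Bt                   # target points in their own subspace
    return Zs, Zt

# Toy example: two domains with a distribution shift
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 10))
Xt = rng.normal(size=(80, 10)) + 0.5
Zs, Zt = subspace_alignment(Xs, Xt, k=3)
```

A classifier trained on `Zs` (with the source labels) can then be applied directly to `Zt`, since both now live in comparable coordinates.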
Depending on time and audience interests, the second part will present our work on full-scene labeling using deep convolutional neural networks. Our work focuses on exploiting heterogeneous labelings: we use labeled data produced by different research groups and take into account the fact that their labelings vary, using different sets of labels, different label semantics, varying precision in the training segmentations, etc. Our method enables a joint learning that combines all of this labeled data and improves pixel classification accuracy.