
As human beings we are able to adapt and to efficiently apply our past experience to new scenarios, but how can we reproduce this skill in an artificial learning system?

A large part of the computer vision literature focuses on obtaining impressive results on large datasets, under the main assumption that training and test samples are drawn from the same distribution. However, in several applications this assumption is grossly violated. Think of using algorithms trained on clean Amazon images to annotate objects acquired with a low-resolution cellphone camera, or applying an organ detection and segmentation tool trained on CT images to MRI scans. Other challenging tasks appear across object classes: given the models of a giraffe and a zebra, or some of their image patches, can we use them to detect and recognize an okapi?

Despite the wide availability of principled learning methods, it has been shown that they often fail to generalize across domains, preventing any reliable automatic labeling and forcing a return to error-prone and time-consuming human annotation for new images. Domain adaptation and transfer learning tackle these problems by proposing methods that bridge the gap between the source training domain and different but related target test domains.

 

How Will the Tutorial Help Me?

This tutorial will give you the basic knowledge to understand when domain adaptation and transfer learning methods are suitable and how to use them. The introduction will cover the theoretical basis of adaptive learning, so no prior knowledge of the topic is assumed. We will review different algorithms recently proposed for a wide range of computer vision applications and provide the audience with pointers to existing resources (code, datasets, surveys, etc.).

 

Organizers

Francesco Orabona is a Research Assistant Professor at the Toyota Technological Institute at Chicago. He is (co)author of more than 40 peer-reviewed papers on online learning, transfer learning, and computer vision. Francesco previously co-organized a workshop at NIPS 2013 on learning across domains and tasks, and he is a co-organizer of the ECCV 2014 workshop "TASK-CV: Transferring and Adapting Source Knowledge in Computer Vision".

Tatiana Tommasi is a Postdoctoral Research Fellow at KU Leuven. She completed her PhD at the École Polytechnique Fédérale de Lausanne in 2013, after working as a Research Assistant at Idiap (Martigny, Switzerland) for the previous four years. Tatiana has been publishing on transfer learning, multi-task learning, and domain adaptation in computer vision since 2009, and she also co-organized the NIPS 2013 workshop on learning across domains and tasks.

Content

  • Introduction and Theory (1.5 hours):

    • what, how and when to transfer;

    • different scenarios: cross-domain and cross-task transfer, semi-supervised and unsupervised settings;

    • the data distribution mismatch and the generalization bounds (an illustrative bound is sketched right after this outline).

  • Algorithms (1.5 hours; brief illustrative sketches of these families appear after the outline):

    • feature learning methods that reduce or enlarge the dimensionality of the feature space (subspace projections, representations built on classification output scores, and the use of local features), as well as self-taught learning, dictionary learning approaches, and deep convolutional neural networks;

    • sample selection methods;

    • self-labeling;

    • model adaptation.

  • Applications and New directions (1 hour):

    • the available datasets and their bias;

    • speed-ups for large-scale problems.
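
To give a flavour of the theory part, one representative result is the domain adaptation bound in the style of Ben-David et al., which links the target error of a hypothesis to its source error, a divergence between the two marginal distributions, and the error of the best joint hypothesis. Exact constants vary across papers, so the version below is only an illustrative sketch:

```latex
% Illustrative domain adaptation bound (Ben-David et al. style):
% for every hypothesis h in the class \mathcal{H},
\epsilon_T(h) \;\le\; \epsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda,
\qquad \text{with}\quad
\lambda = \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big].
```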
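
To make the feature learning family more concrete, here is a minimal Python sketch in the spirit of subspace-based projection methods: the source PCA subspace is aligned to the target one and the two domains are compared there. The use of numpy/scikit-learn, the nearest-neighbour classifier, and the number of components are illustrative assumptions, not a prescription from the tutorial.

```python
# Minimal subspace-alignment sketch (assumes numpy and scikit-learn).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_alignment_predict(Xs, ys, Xt, n_components=20):
    """Align the source PCA basis to the target one, then classify target data."""
    Xs_c = Xs - Xs.mean(axis=0)                  # center each domain separately
    Xt_c = Xt - Xt.mean(axis=0)
    Ps = PCA(n_components=n_components).fit(Xs_c).components_.T   # d x k source basis
    Pt = PCA(n_components=n_components).fit(Xt_c).components_.T   # d x k target basis
    M = Ps.T @ Pt                                # k x k alignment matrix
    Xs_aligned = Xs_c @ Ps @ M                   # source samples in the aligned subspace
    Xt_proj = Xt_c @ Pt                          # target samples in their own subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(Xs_aligned, ys)
    return clf.predict(Xt_proj)
```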
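
Sample selection and reweighting methods instead emphasize the source examples that look most like target data. A simple baseline estimates importance weights with a logistic domain classifier and then trains a weighted classifier on the source; again, the concrete choices below (scikit-learn, a linear SVM) are only assumptions for the sketch.

```python
# Importance weighting of source samples via a domain classifier (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def importance_weighted_svm(Xs, ys, Xt):
    """Reweight source samples by an estimate of p_target(x) / p_source(x)."""
    X_dom = np.vstack([Xs, Xt])
    y_dom = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])   # 0 = source, 1 = target
    dom_clf = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)
    p_target = dom_clf.predict_proba(Xs)[:, 1]
    weights = p_target / np.clip(1.0 - p_target, 1e-6, None)        # density-ratio estimate
    return SVC(kernel="linear").fit(Xs, ys, sample_weight=weights)  # weighted source training
```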
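
Self-labeling moves confident target predictions into the training set and retrains. The sketch below shows a single round with an arbitrary confidence threshold; real methods iterate and control the threshold more carefully.

```python
# One round of self-labeling (pseudo-labeling) on the target domain (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_label_once(Xs, ys, Xt, confidence=0.9):
    """Train on source, add confident target predictions as labels, retrain."""
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    proba = clf.predict_proba(Xt)
    keep = proba.max(axis=1) >= confidence               # confident target samples only
    pseudo = clf.classes_[proba.argmax(axis=1)]
    X_aug = np.vstack([Xs, Xt[keep]])
    y_aug = np.concatenate([ys, pseudo[keep]])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```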
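
Finally, model adaptation starts from a model already trained on the source and biases the target solution toward it rather than learning from scratch. A least-squares version of this idea admits the closed-form solution sketched below, where the regularization weight lam is a hypothetical free parameter.

```python
# Model adaptation by regularizing toward a source model (least-squares sketch).
import numpy as np

def adapt_linear_model(w_src, Xt, yt, lam=1.0):
    """Solve min_w ||Xt w - yt||^2 + lam * ||w - w_src||^2 in closed form."""
    d = Xt.shape[1]
    A = Xt.T @ Xt + lam * np.eye(d)        # normal equations plus adaptive regularizer
    b = Xt.T @ yt + lam * w_src
    return np.linalg.solve(A, b)           # target weights biased toward the source ones
```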


Acknowledgments

T.T. acknowledges the support of the EC FP7 project AXES.
