Introduction

During the first decade of the 21st century, progress in machine learning has had an enormous impact on computer vision. The ability to learn models from data has boosted tasks such as classification, detection, segmentation, recognition, and tracking.

A key ingredient of this success has been the use of annotated visual data, both for training and testing, together with well-established protocols for evaluating the results.

However, annotating visual information is usually a tiresome human activity that is prone to errors. Thus, when addressing new tasks and/or operating in new domains, it is worth aspiring to reuse the available annotations or the models learned from them.

Therefore, transferring and adapting source knowledge (in the form of annotated data or learned models) has recently emerged as a key challenge in developing computer vision methods that are reliable across domains and tasks.

Accordingly, the TASK-CV workshop aims to bring together research in transfer learning (TL) and domain adaptation (DA) for computer vision. The workshop will take place at ECCV 2014. We invite the submission of original research contributions such as:

  • TL/DA learning methods for challenging paradigms such as unsupervised, incremental, or online learning.
  • TL/DA focusing on specific visual features (HOG, LBP, etc.), models (holistic, DPM, BoW, etc.), or learning algorithms (SVM, AdaBoost, CNN, Random Forest, etc.).
  • TL/DA focusing on specific computer vision tasks such as classification, detection, segmentation, recognition, tracking, etc.
  • Comparative studies of different TL/DA methods.
  • Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA methods.
  • Transferring part representations between categories.
  • Transferring tasks to new domains.
  • Facing domain shift due to sensor differences (e.g., low-vs-high resolution, power spectrum sensitivity) and compression schemes.
  • Datasets and protocols for evaluating TL/DA methods.

This is not a closed list; we welcome other interesting and relevant research on transferring and adapting source knowledge for computer vision problems.

Accepted Papers

Invited Posters

  • Tomas Pfister, University of Oxford; James Charles, University of Leeds; Andrew Zisserman, University of Oxford
    "Domain-adaptive Discriminative One-shot Learning of Gestures".
  • Enver Sangineto, DISI, University of Trento
    "Statistical and Spatial Consensus Collection for Detector Adaptation".
  • Baochen Sun, UMass Lowell; Kate Saenko, UMass Lowell
    "From Virtual to Reality: Fast Adaptation of Virtual Object Detectors to Real Domains".
  • Jiaolong Xu, Computer Vision Center and U. Autònoma de Barcelona; Sebastian Ramos, Computer Vision Center; David Vázquez, Computer Vision Center; Antonio M. López, Computer Vision Center and U. Autònoma de Barcelona
    "Structure-aware Domain Adaptation of Deformable Part-based Models".
  • Jiaolong Xu, Computer Vision Center and U. Autònoma de Barcelona; Sebastian Ramos, Computer Vision Center; David Vázquez, Computer Vision Center; Antonio M. López, Computer Vision Center and U. Autònoma de Barcelona
    "Incremental Domain Adaptation of Deformable Part-based Models".

Invited Speakers

  • Trevor Darrell, University of California, Berkeley.
    "Domain Adaptation and Deep Learning for Large Scale Object Recognition and Detection".
  • Boqing Gong, University of Southern California.
    "Kernel Methods for Domain Adaptation".
  • Christoph Lampert, Institute of Science and Technology Austria.
    "Learning with a Time-evolving Data Distribution".
  • Mehryar Mohri, Courant Institute of Mathematical Sciences, New York.
    "Recent Theoretical and Algorithmic Advances in Domain Adaptation".
  • Tinne Tuytelaars, Katholieke Universiteit Leuven.
    "Overcoming Dataset Bias: How Far Are We from the Solution?"

Organization

More information

http://www.cvc.uab.es/adas/task-cv2014