Zero-Shot Learning via Visual Abstraction

Stanislaw Antol¹, C. Lawrence Zitnick², and Devi Parikh¹

¹ Virginia Tech, Blacksburg, VA, USA

² Microsoft Research, Redmond, WA, USA

Abstract. One of the main challenges in learning fine-grained visual categories is gathering training images. Recent work in Zero-Shot Learning (ZSL) circumvents this challenge by describing categories via attributes or text. However, not all visual concepts, e.g., two people dancing, are easily amenable to such descriptions. In this paper, we propose a new modality for ZSL using visual abstraction to learn difficult-to-describe concepts. Specifically, we explore concepts related to people and their interactions with others. Our proposed modality allows one to provide training data by manipulating abstract visualizations, e.g., one can illustrate interactions between two clipart people by manipulating each person’s pose, expression, gaze, and gender. The feasibility of our approach is shown on a human pose dataset and a new dataset containing complex interactions between two people, where we outperform several baselines. To better match across the two domains, we learn an explicit mapping between the abstract and real worlds.

Keywords: zero-shot learning, visual abstraction, synthetic data, pose
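The abstract mentions learning an explicit mapping between the abstract (clipart) and real domains but does not specify its form. The following is a minimal sketch only, assuming a linear ridge-regression mapping and nearest-prototype zero-shot classification; all variable names and the synthetic data are hypothetical, not taken from the paper.

import numpy as np

# Synthetic stand-ins (assumptions, not the paper's data):
# X_abs:  N x D features of clipart illustrations (pose, expression, gaze, gender)
# X_real: N x D features of corresponding real images
rng = np.random.default_rng(0)
N, D = 200, 16
X_abs = rng.normal(size=(N, D))
X_real = X_abs @ rng.normal(size=(D, D)) * 0.5 + rng.normal(scale=0.1, size=(N, D))

# Fit a linear map W from the abstract to the real domain by ridge regression:
# minimize ||X_abs W - X_real||^2 + lam ||W||^2, solved in closed form.
lam = 1.0
W = np.linalg.solve(X_abs.T @ X_abs + lam * np.eye(D), X_abs.T @ X_real)

def zero_shot_predict(test_feats, class_exemplars_abs):
    # Map clipart exemplars of unseen categories into the real domain and
    # assign each test image to the nearest mapped class prototype.
    mapped = class_exemplars_abs @ W                         # K x D prototypes
    dists = np.linalg.norm(test_feats[:, None, :] - mapped[None, :, :], axis=2)
    return dists.argmin(axis=1)                              # predicted class index

# Usage: three unseen categories illustrated only in clipart, five test images.
protos = rng.normal(size=(3, D))
tests = protos[[0, 2, 1, 0, 1]] @ W + rng.normal(scale=0.05, size=(5, D))
print(zero_shot_predict(tests, protos))                      # expected: [0 2 1 0 1]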

LNCS 8692, p. 401 ff.

© Springer International Publishing Switzerland 2014