
Discovering Object Classes from Activities

Abhilash Srikantha1,2 and Juergen Gall1

1University of Bonn, Germany
abhilash.srikantha@tue.mpg.de
gall@informatik.uni-bonn.de

2MPI for Intelligent Systems, Tuebingen, Germany

Abstract. In order to avoid an expensive manual labelling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium-sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in such videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.
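The abstract outlines two components: sampling from a space of proposal algorithms and their parameters to collect sequences of object proposals, and a similarity measure that combines appearance with functionality cues derived from human and object motion. The Python sketch below illustrates these two ideas only in rough form; the placeholder proposal generators, the cosine measure, and the mixing weight are assumptions for illustration and do not come from the paper.

import random

import numpy as np

# Placeholder proposal generators standing in for the space of algorithms
# and parameters sampled by the framework (names and signatures are assumed).
def color_based_proposals(video, param):
    return []  # would return sequences of bounding-box proposals

def motion_based_proposals(video, param):
    return []

PROPOSAL_METHODS = [color_based_proposals, motion_based_proposals]

def sample_proposal_sequences(video, n_samples=50, seed=0):
    """Sample (algorithm, parameter) pairs and pool the resulting proposal
    sequences; a rough sketch of the idea, not the authors' pipeline."""
    rng = random.Random(seed)
    sequences = []
    for _ in range(n_samples):
        method = rng.choice(PROPOSAL_METHODS)
        param = rng.uniform(0.1, 0.9)  # hypothetical scalar parameter
        sequences.extend(method(video, param))
    return sequences

def combined_similarity(app_a, app_b, fun_a, fun_b, weight=0.5):
    """Score two candidates by a weighted mix of appearance similarity and
    functionality similarity (descriptors derived from human/object motion).
    Cosine similarity and the weight are illustrative assumptions."""
    def cosine(x, y):
        return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8))
    return weight * cosine(app_a, app_b) + (1.0 - weight) * cosine(fun_a, fun_b)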

Keywords: Object Discovery, Human-Object Interaction, RGB-D Videos

LNCS 8694, p. 415 ff.



© Springer International Publishing Switzerland 2014