
Appearances Can Be Deceiving: Learning Visual Tracking from Few Trajectory Annotations

Santiago Manen (1), Junseok Kwon (1), Matthieu Guillaumin (1), and Luc Van Gool (1,2)

(1) Computer Vision Laboratory, ETH Zurich, Switzerland

(2) ESAT - PSI / IBBT, K.U. Leuven, Belgium

Abstract. Visual tracking is the task of estimating the trajectory of an object in a video given its initial location. This is usually done by combining, at each step, an appearance model and a motion model. In this work, we learn from a small set of training trajectory annotations how the objects in the scene typically move. We learn the relative weight between the appearance and the motion model, which we call the visual deceptiveness. At test time, we transfer the deceptiveness and the displacement from the closest trajectory annotation to infer the next location of the object, and we further condition this transfer on an event model. On a set of 161 manually annotated test trajectories, our experiments show that learning from just 10 trajectory annotations halves the center location error and improves the success rate by about 10%.

Keywords: Visual tracking, Motion learning, Event modelling
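
The prediction step summarized in the abstract can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; it only mirrors the idea of blending an appearance-model prediction with a motion prediction obtained by transferring the displacement and deceptiveness weight from the nearest annotated trajectory point. All names (predict_next_location, the array arguments) are hypothetical, and the event-model conditioning from the paper is omitted.

    import numpy as np

    def predict_next_location(appearance_pred, current_loc, train_points,
                              train_displacements, train_deceptiveness):
        """Blend appearance and motion cues with a transferred weight.

        appearance_pred     : (2,) location proposed by the appearance model
        current_loc         : (2,) current object location
        train_points        : (N, 2) locations along annotated trajectories
        train_displacements : (N, 2) displacement observed at each point
        train_deceptiveness : (N,) learned appearance/motion weight per point
        """
        # Nearest annotated trajectory point (illustrative nearest-neighbour
        # transfer; the paper additionally conditions on an event model).
        dists = np.linalg.norm(train_points - current_loc, axis=1)
        nn = np.argmin(dists)

        # Transfer displacement and deceptiveness from that neighbour.
        motion_pred = current_loc + train_displacements[nn]
        w = train_deceptiveness[nn]  # 1.0 = fully trust the motion model

        # Higher deceptiveness -> rely more on motion, less on appearance.
        return (1.0 - w) * appearance_pred + w * motion_pred

    # Hypothetical toy data: a single annotated trajectory point.
    pts = np.array([[10.0, 10.0]])
    disp = np.array([[2.0, 0.0]])
    dec = np.array([0.8])
    print(predict_next_location(np.array([11.0, 12.0]),
                                np.array([10.5, 10.0]), pts, disp, dec))

In this sketch, a deceptiveness of 0.8 means the appearance model is deemed unreliable at that point of the scene, so the transferred displacement dominates the predicted next location.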

LNCS 8693, p. 157 ff.


