
Tracking Using Multilevel Quantizations

Zhibin Hong1, Chaohui Wang2, Xue Mei3, Danil Prokhorov3, and Dacheng Tao1

1Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW, Australia

2Max Planck Institute for Intelligent Systems, Tübingen, Germany

3Toyota Research Institute, North America, Ann Arbor, MI, USA

Abstract. Most object tracking methods exploit only a single quantization of the image space: pixels, superpixels, or bounding boxes, each of which has advantages and disadvantages. It is highly unlikely that a single quantization level is optimal for tracking all objects in all environments. We therefore propose a hierarchical appearance representation model for tracking, based on a graphical model that exploits shared information across multiple quantization levels. The tracker finds the most probable position of the target by jointly classifying the pixels and superpixels and obtaining the best configuration across all levels. The motion of the bounding box is also taken into account, while Online Random Forests provide the pixel- and superpixel-level quantizations and are progressively updated on-the-fly. By appropriately combining the multilevel quantizations, our tracker exhibits not only excellent performance in handling non-rigid object deformation but also robustness to occlusions. A quantitative evaluation is conducted on two benchmark datasets: a non-rigid object tracking dataset (11 sequences) and the CVPR2013 tracking benchmark (50 sequences). Experimental results show that our tracker overcomes various tracking challenges and is superior to a number of other popular tracking methods.
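The core idea of fusing quantization levels can be illustrated with a minimal sketch. The snippet below is a simplified stand-in for the paper's method: it linearly fuses a per-pixel confidence map with per-superpixel confidences (broadcast back onto pixels via the superpixel label map), then scans the bounding box over the fused map. The function names, the linear fusion weights, and the exhaustive box scan are all hypothetical simplifications; the paper instead couples the levels in a graphical model and infers the joint configuration.

```python
import numpy as np

def multilevel_score(pixel_scores, superpixel_labels, superpixel_scores,
                     w_pixel=0.5, w_super=0.5):
    """Fuse pixel- and superpixel-level foreground confidences.

    pixel_scores      : (H, W) per-pixel foreground probabilities.
    superpixel_labels : (H, W) integer map assigning each pixel a superpixel id.
    superpixel_scores : (S,)  per-superpixel foreground probabilities.

    Returns an (H, W) fused confidence map. This weighted sum is a
    hypothetical simplification of the cross-level coupling in the paper.
    """
    # Integer-array indexing broadcasts each superpixel's score to its pixels.
    return w_pixel * pixel_scores + w_super * superpixel_scores[superpixel_labels]

def best_box(conf, box_h, box_w):
    """Exhaustively slide a box_h x box_w window over the confidence map
    and return the top-left corner (y, x) with the highest total score.
    A brute-force stand-in for the paper's joint inference over levels."""
    H, W = conf.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(H - box_h + 1):
        for x in range(W - box_w + 1):
            s = conf[y:y + box_h, x:x + box_w].sum()
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos
```

In a real tracker, `pixel_scores` and `superpixel_scores` would come from the online classifiers and be refreshed each frame as the forests are updated.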

Keywords: Tracking, Multilevel Quantizations, Online Random Forests, Non-rigid Object Tracking, Conditional Random Fields

LNCS 8694, p. 155 ff.



© Springer International Publishing Switzerland 2014