
Spatiotemporal Background Subtraction Using Minimum Spanning Tree and Optical Flow*

Mingliang Chen1,2, Qingxiong Yang1,2, Qing Li1,2, Gang Wang3, and Ming-Hsuan Yang4

1Department of Computer Science, Multimedia Software Engineering Research Centre (MERC), City University of Hong Kong, Hong Kong, China

2MERC-Shenzhen, Guangdong, China

3Nanyang Technological University, Singapore

4University of California, Merced, USA

Abstract. Background modeling and subtraction is a fundamental research topic in computer vision. Pixel-level background models use a Gaussian mixture model (GMM) or kernel density estimation to represent the distribution of each pixel value. Each pixel is processed independently, making these models very efficient; however, they are not robust to noise caused by sudden illumination changes. Region-based background models use local texture information around each pixel to suppress such noise, but they are vulnerable to periodic changes of pixel values and are relatively slow. A straightforward combination of the two cannot maintain the advantages of both. This paper proposes a real-time integration based on robust estimation. A recent efficient minimum spanning tree based aggregation technique is used to enable robust estimators such as the M-smoother to run in real time and effectively suppress the noisy background estimates obtained from Gaussian mixture models. The refined background estimates are then used to update the Gaussian mixture models at each pixel location. Additionally, optical flow estimation can be used to track the foreground pixels and is integrated with a temporal M-smoother to ensure temporally consistent background subtraction. The experimental results are evaluated on both synthetic and real-world benchmarks, showing that our algorithm is the top performer.
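To make the pixel-level stage concrete, the following is a minimal sketch of a per-pixel statistical background model. It is a deliberate simplification: a single running Gaussian per pixel rather than the full GMM, and it omits the MST-based M-smoother refinement and optical-flow tracking described in the paper. The class name, learning rate `alpha`, deviation threshold `k`, and variance floor are all illustrative assumptions, not part of the authors' method.

```python
class PixelBackgroundModel:
    """Simplified pixel-level background model: one running Gaussian per
    pixel. A pixel is foreground if its value deviates from the running
    mean by more than k standard deviations."""

    def __init__(self, width, height, alpha=0.05, k=2.5, var_floor=4.0):
        self.alpha = alpha              # learning rate for the running stats
        self.k = k                      # deviation threshold (in std devs)
        self.var_floor = var_floor      # keeps the variance from collapsing
        self.initialized = False
        self.mean = [[0.0] * width for _ in range(height)]
        self.var = [[15.0 ** 2] * width for _ in range(height)]

    def apply(self, frame):
        """frame: 2D list of grayscale values; returns a 0/1 foreground mask."""
        if not self.initialized:
            # Bootstrap the model from the first frame (all background).
            self.mean = [[float(v) for v in row] for row in frame]
            self.initialized = True
            return [[0] * len(row) for row in frame]
        mask = [[0] * len(row) for row in frame]
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                m, s2 = self.mean[y][x], self.var[y][x]
                d = v - m
                if d * d > self.k * self.k * s2:
                    mask[y][x] = 1      # foreground: model is not updated
                else:
                    # Background: blend the sample into the running stats.
                    self.mean[y][x] = m + self.alpha * d
                    self.var[y][x] = max(self.var_floor,
                                         s2 + self.alpha * (d * d - s2))
        return mask
```

Because each pixel is updated independently, a single sudden illumination change flips many pixels to foreground at once; this is exactly the noise that the paper's MST-based M-smoother is designed to suppress before the per-pixel models are updated.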

Keywords: Background Modeling, Video Segmentation, Tracking, Optical Flow

*This work was supported in part by a GRF grant from the Research Grants Council of Hong Kong (RGC Reference: CityU 122212), the NSF CAREER Grant #1149783 and NSF IIS Grant #1152576.

LNCS 8695, p. 521 ff.



© Springer International Publishing Switzerland 2014