
Creating Summaries from User Videos*

Michael Gygli1, 2, Helmut Grabner1, 2, Hayko Riemenschneider1, and Luc Van Gool1, 3

1Computer Vision Laboratory, ETH Zurich, Switzerland
gygli@vision.ee.ethz.ch

2upicto GmbH, Zurich, Switzerland

3K.U. Leuven, Belgium

Abstract. This paper proposes a novel approach and a new benchmark for video summarization. We focus on user videos, i.e., raw videos containing a set of interesting events. Our method first segments the video using a novel “superframe” segmentation tailored to raw videos. It then estimates visual interestingness per superframe using a set of low-, mid-, and high-level features. Based on these scores, it selects an optimal subset of superframes to create an informative and interesting summary. The introduced benchmark comes with multiple human-created summaries, acquired in a controlled psychological experiment. This data paves the way to evaluating summarization methods objectively and to gaining new insights into video summarization. When evaluating our method, we find that it produces high-quality results, comparable to manual, human-created summaries.
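The final step the abstract describes, selecting an optimal subset of scored superframes under a summary-length constraint, is commonly cast as a 0/1 knapsack problem. The sketch below is an illustration of that formulation, not the paper's actual implementation; the function name, the per-superframe `scores` and `lengths`, and the integer `budget` are all hypothetical placeholders.

```python
def select_superframes(scores, lengths, budget):
    """Pick a subset of superframes maximizing total interestingness
    subject to a total-length budget (0/1 knapsack via dynamic programming).

    scores  -- hypothetical interestingness score per superframe
    lengths -- hypothetical length of each superframe (integer units)
    budget  -- maximum total summary length (same units)
    Returns (sorted list of chosen indices, achieved total score).
    """
    # dp[b] = (best total score, chosen index set) within length budget b
    dp = [(0.0, frozenset()) for _ in range(budget + 1)]
    for i, (score, length) in enumerate(zip(scores, lengths)):
        # iterate budgets downward so each superframe is used at most once
        for b in range(budget, length - 1, -1):
            cand_score = dp[b - length][0] + score
            if cand_score > dp[b][0]:
                dp[b] = (cand_score, dp[b - length][1] | {i})
    best_score, best_set = max(dp, key=lambda t: t[0])
    return sorted(best_set), best_score
```

For example, with scores `[3.0, 1.0, 4.0, 2.0]`, lengths `[4, 2, 5, 3]`, and a budget of 9, the selection is superframes 0 and 2 with a total score of 7.0.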

Keywords: Video analysis, video summarization, temporal segmentation

LNCS 8695, p. 505 ff.



© Springer International Publishing Switzerland 2014