
Learning Where to Classify in Multi-view Semantic Segmentation

Hayko Riemenschneider1, András Bódis-Szomorú1, Julien Weissenberg1, and Luc Van Gool1, 2

1Computer Vision Laboratory, ETH Zurich, Switzerland
hayko@vision.ee.ethz.ch
bodis@vision.ee.ethz.ch
julienw@vision.ee.ethz.ch
vangool@vision.ee.ethz.ch

2K.U. Leuven, Belgium

Abstract. There is an increasing interest in semantically annotated 3D models, e.g. of cities. The typical approaches start with the semantic labelling of all the images used for the 3D model. Such labelling tends to be very time-consuming, though. The inherent redundancy among the overlapping images calls for more efficient solutions. This paper proposes an alternative approach that exploits the geometry of a 3D mesh model obtained from multi-view reconstruction. Instead of clustering similar views, we predict the best view before the actual labelling. For this, we find the single image part that best supports the correct semantic labelling of each face of the underlying 3D mesh. Moreover, our single-image approach may come as a surprise, as it tends to increase the accuracy of the model labelling when compared to approaches that fuse the labels from multiple images. As a matter of fact, we even go a step further, and only explicitly label a subset of faces (e.g. 10%), to subsequently fill in the labels of the remaining faces. This leads to a further reduction of computation time, again combined with a gain in accuracy. Compared to a process that starts from the semantic labelling of the images, our method to semantically label 3D models yields accelerations of about two orders of magnitude. We tested our multi-view semantic labelling on a variety of street scenes.
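The high-level idea from the abstract can be sketched in a few lines: for every mesh face, pick a single "best" observing view, label only a subset of faces, and fill in the remaining faces from their labelled neighbours on the mesh. The code below is a minimal illustration only; the scoring heuristic (frontality of the view) and the majority-vote fill-in are simplifying assumptions, not the paper's actual ranking or inference method, and all function names are hypothetical.

```python
import numpy as np

def best_view_per_face(face_normals, view_dirs):
    """For each face, return the index of the view whose direction is most
    opposed to the face normal, i.e. the most frontal view.

    face_normals: (F, 3) unit normals; view_dirs: (V, 3) unit view directions.
    This frontality score is an illustrative stand-in for the paper's
    learned view ranking.
    """
    # scores[i, j] = how frontally view j sees face i
    scores = -face_normals @ view_dirs.T
    return scores.argmax(axis=1)

def fill_labels(labels, adjacency, max_iters=100):
    """Propagate labels to unlabelled faces (marked -1) by taking a
    majority vote over already-labelled mesh neighbours, iterating until
    no face changes. A toy substitute for the paper's fill-in step.
    """
    labels = np.asarray(labels).copy()
    for _ in range(max_iters):
        changed = False
        for f, neighbours in enumerate(adjacency):
            if labels[f] == -1:
                known = [labels[n] for n in neighbours if labels[n] != -1]
                if known:
                    labels[f] = max(set(known), key=known.count)
                    changed = True
        if not changed:
            break
    return labels
```

For example, a face with normal (0, 0, 1) seen by views looking along (0, 0, -1) and (1, 0, 0) would be assigned to the first (frontal) view, and labelling just one face of a connected strip lets the fill-in step label the rest.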

Keywords: semantic segmentation, multi-view, efficiency, view selection, redundancy, ranking, importance, labeling

LNCS 8693, p. 516 ff.



© Springer International Publishing Switzerland 2014