
Multi-modal Unsupervised Feature Learning for RGB-D Scene Labeling

Anran Wang1, Jiwen Lu2, Gang Wang1, 2, Jianfei Cai1, and Tat-Jen Cham1

1Nanyang Technological University, Singapore

2Advanced Digital Sciences Center, Singapore

Abstract. Most existing approaches to RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them in a heuristic manner. There have been attempts at learning features directly from raw RGB-D data, but their performance has not been satisfactory. In this paper, we adapt unsupervised feature learning to RGB-D labeling, treating it as a multi-modality learning problem. Our framework performs feature learning and feature encoding simultaneously, which significantly boosts performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state of the art.
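To make the general idea of unsupervised feature learning over two modalities concrete, the sketch below shows a generic single-layer pipeline: a dictionary is learned per modality by k-means and patches are encoded against it with a triangle (soft-threshold) activation, then the per-modality codes are concatenated. This is an illustrative, textbook-style baseline (in the spirit of Coates and Ng's single-layer networks), not the paper's joint learning-and-encoding algorithm; all function names, patch sizes, and the choice of k-means are assumptions made for the example.

```python
import numpy as np

def learn_dictionary(patches, k=8, iters=10, seed=0):
    """Toy k-means dictionary learning on vectorized local patches.

    patches: (n, d) array (e.g. flattened RGB or depth patches).
    Returns a (k, d) array of centroids used as a feature dictionary.
    NOTE: stand-in for the paper's learning step, chosen for simplicity.
    """
    rng = np.random.default_rng(seed)
    centroids = patches[rng.choice(len(patches), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest centroid, then re-estimate means.
        d2 = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids

def encode(patches, centroids):
    """Triangle soft encoding: activation max(0, mean_dist - dist) per atom."""
    dist = np.sqrt(((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    mu = dist.mean(axis=1, keepdims=True)
    return np.maximum(0.0, mu - dist)

# One dictionary per modality; the resulting codes are concatenated
# (a simple heuristic fusion, unlike the paper's joint multi-modal scheme).
rng = np.random.default_rng(1)
rgb_patches = rng.random((200, 27))    # hypothetical 3x3 RGB patches (3 channels)
depth_patches = rng.random((200, 9))   # hypothetical 3x3 depth patches
rgb_codes = encode(rgb_patches, learn_dictionary(rgb_patches))
depth_codes = encode(depth_patches, learn_dictionary(depth_patches))
features = np.concatenate([rgb_codes, depth_codes], axis=1)  # (200, 16)
```

Stacking, as the abstract describes, would repeat this learn-and-encode step on the encoded outputs to obtain higher-level features.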

Keywords: RGB-D scene labeling, unsupervised feature learning, joint feature learning and encoding, multi-modality

LNCS 8693, p. 453 ff.



© Springer International Publishing Switzerland 2014