
Expanding the Family of Grassmannian Kernels: An Embedding Perspective*

Mehrtash T. Harandi1,2, Mathieu Salzmann1,2, Sadeep Jayasumana1,2, Richard Hartley1,2, and Hongdong Li1,2

1Australian National University, Canberra, ACT 0200, Australia

2NICTA, Locked Bag 8001, Canberra, ACT 2601, Australia

Abstract. Modeling videos and image-sets as linear subspaces has proven beneficial for many visual recognition tasks. However, it also incurs challenges arising from the fact that linear subspaces do not obey Euclidean geometry, but lie on a special type of Riemannian manifold known as the Grassmannian. To leverage the techniques developed for Euclidean spaces (e.g., support vector machines) with subspaces, several recent studies have proposed to embed the Grassmannian into a Hilbert space by making use of a positive definite kernel. Unfortunately, only two Grassmannian kernels are known, neither of which, as we will show, is universal, which limits their ability to approximate a target function arbitrarily well. Here, we introduce several positive definite Grassmannian kernels, including universal ones, and demonstrate their superiority over previously known kernels in various tasks, such as classification, clustering, sparse coding and hashing.
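To make the embedding idea concrete, the following is a minimal sketch (not from the paper itself) of one of the two previously known positive definite Grassmannian kernels, the projection kernel k(X, Y) = ||XᵀY||²_F, where X and Y are orthonormal basis matrices representing subspaces; all function names here are illustrative, not the authors' code.

```python
import numpy as np

def orthonormal_basis(A):
    """Orthonormalize the columns of A, yielding a point on the Grassmannian."""
    Q, _ = np.linalg.qr(A)
    return Q

def projection_kernel(X, Y):
    """Projection kernel k(X, Y) = ||X^T Y||_F^2 for orthonormal bases X, Y.

    This kernel is positive definite, so it induces a valid embedding of the
    Grassmannian into a Hilbert space, enabling SVMs and other kernel methods.
    """
    return np.linalg.norm(X.T @ Y, ord="fro") ** 2

# Two random 3-dimensional subspaces of R^10 (e.g., from image-set features).
rng = np.random.default_rng(0)
X = orthonormal_basis(rng.standard_normal((10, 3)))
Y = orthonormal_basis(rng.standard_normal((10, 3)))

# For an orthonormal basis X, k(X, X) = ||I||_F^2 equals the subspace dimension.
print(projection_kernel(X, X))  # 3.0 (up to floating point)
print(projection_kernel(X, Y))  # between 0 and 3
```

The value k(X, X) always equals the subspace dimension, and k(X, Y) measures the overlap between subspaces via the squared cosines of their principal angles, which is why this kernel, despite not being universal, is a natural similarity measure on the Grassmannian.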

Keywords: Grassmann manifolds, kernel methods, Plücker embedding

LNCS 8695, p. 408 ff.



© Springer International Publishing Switzerland 2014