
Read My Lips: Continuous Signer Independent Weakly Supervised Viseme Recognition

Oscar Koller1, 2, Hermann Ney1, and Richard Bowden2

1Human Language Technology and Pattern Recognition, RWTH, Aachen, Germany
koller@cs.rwth-aachen.de
ney@cs.rwth-aachen.de

2Centre for Vision Speech and Signal Processing, University of Surrey, UK
r.bowden@surrey.ac.uk

Abstract. This work presents a framework to recognise signer-independent mouthings in continuous sign language, with no manual annotations needed. Mouthings are lip movements that correspond to the pronunciation of words, or parts of them, during signing. Research on sign language recognition has focused extensively on the hands as features. But sign language is multi-modal, and a full understanding, particularly with respect to its lexical variety, language idioms and grammatical structures, is not possible without further exploring the remaining information channels. To our knowledge, no previous work has explored dedicated viseme recognition in the context of sign language recognition. The approach is trained on over 180,000 unlabelled frames and reaches 47.1% precision on the frame level. Generalisation across individuals and the influence of context-dependent visemes are analysed.

Keywords: Sign Language Recognition, Viseme Recognition, Mouthing, Lip Reading

LNCS 8689, p. 281 ff.



© Springer International Publishing Switzerland 2014