2009 IEEE International Conference on Systems, Man, and Cybernetics
Abstract
We have developed a head-gesture-controlled electric wheelchair system to aid persons with severe disabilities. Real-time range information obtained from a stereo camera is used to locate and segment the user's face from the sensed video. An Isomap-based nonlinear manifold learning map of facial textures is used for head pose estimation. Because the system is a non-contact vision system, it is much more convenient to use: the user only needs to gesture with his/her head to command the wheelchair. To overcome problems with a non-responding system, it is necessary to notify the user of the exact system state while the system is in use. In this paper, we explore the use of vibrotactile rendering of head gestures as feedback. Three different feedback systems are developed and tested: audio stimuli, vibrotactile stimuli, and combined audio plus vibrotactile stimuli. We have performed user tests to study the usability of these three display methods. The usability studies show that the combined audio plus vibrotactile feedback outperforms the other two methods (i.e., audio-only and vibrotactile-only stimuli).
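To illustrate the Isomap-based pose-estimation idea mentioned above, the sketch below embeds flattened face-texture vectors on a low-dimensional nonlinear manifold and regresses head pose from the manifold coordinates. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `Isomap` and `KNeighborsRegressor`, and the face crops, pose labels, and all parameter values are placeholders.

```python
# Minimal sketch (not the authors' implementation): Isomap embedding of face
# texture vectors followed by nearest-neighbour regression of head pose.
# The training data and pose labels below are synthetic placeholders.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_train, img_dim = 200, 32 * 32            # hypothetical training-set size and face-crop size
X_train = rng.random((n_train, img_dim))   # flattened face textures (placeholder data)
pose_train = rng.uniform(-45, 45, (n_train, 2))  # (yaw, pitch) in degrees (placeholder labels)

# Learn a low-dimensional nonlinear manifold of the facial textures.
isomap = Isomap(n_neighbors=10, n_components=3)
Z_train = isomap.fit_transform(X_train)

# Regress head pose from the manifold coordinates.
pose_model = KNeighborsRegressor(n_neighbors=5)
pose_model.fit(Z_train, pose_train)

# At run time: embed a newly segmented face crop and estimate its pose.
x_new = rng.random((1, img_dim))           # face image segmented from the stereo video
z_new = isomap.transform(x_new)
yaw, pitch = pose_model.predict(z_new)[0]
print(f"estimated yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")
```

In a real wheelchair controller, the estimated yaw/pitch would then be thresholded into discrete head-gesture commands (e.g., forward, left, right, stop); the exact mapping used in the paper is not specified here.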