[SHAR96] R. Sharma, T. S. Huang, V. I. Pavlovic, K. Schulten, A. Dalke, J. Phillips, M. Zeller, W. Humphrey, Y. Zhao, Z. Lo, and S. Chu. Speech/gesture interface to a visual computing environment for molecular biologists. In Proceedings of the 13th International Conference on Pattern Recognition (ICPR '96), volume 3, pp. 964-968, 1996.
Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example, interactive visualization of complex scientific data. To fully exploit this potential, there is a need for "natural" interfaces that allow such displays to be manipulated without cumbersome attachments. In this paper we describe the use of visual hand-gesture analysis and speech recognition to develop a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, a VR visual computing environment for molecular biologists. Free hand gestures are used together with a set of speech commands to manipulate the 3-D graphical display. We describe the visual gesture analysis and the speech analysis techniques used in developing this interface. The dual speech/gesture modality is found to greatly aid interaction.
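As an illustration of the dual-modality idea summarized above, the following is a minimal sketch in which a recognized speech command selects the operation and parameters recovered from hand-gesture analysis supply its continuous arguments. It is hypothetical: GestureState, Display3D, and apply_command are stand-ins invented for this sketch, not part of the paper's system or of VMD's actual interface.

```python
from dataclasses import dataclass

@dataclass
class GestureState:
    """Parameters assumed to come from visual hand-gesture analysis."""
    dx: float    # lateral hand displacement
    dy: float    # vertical hand displacement
    roll: float  # hand rotation angle, in degrees

class Display3D:
    """Placeholder for a 3-D molecular display (not VMD's real API)."""
    def rotate(self, degrees: float) -> None:
        print(f"rotate view by {degrees:.1f} deg")

    def translate(self, dx: float, dy: float) -> None:
        print(f"translate view by ({dx:.2f}, {dy:.2f})")

def apply_command(display: Display3D, command: str, gesture: GestureState) -> None:
    """Fuse the two modalities: speech picks the operation, the gesture
    supplies its continuous parameters. Unknown commands are ignored."""
    if command == "rotate":
        display.rotate(gesture.roll)
    elif command == "move":
        display.translate(gesture.dx, gesture.dy)

if __name__ == "__main__":
    view = Display3D()
    apply_command(view, "rotate", GestureState(dx=0.0, dy=0.0, roll=30.0))
    apply_command(view, "move", GestureState(dx=0.1, dy=-0.05, roll=0.0))
```

The point of the sketch is only the division of labor between modalities: discrete intent from speech, continuous control from the hand, which is the complementarity the abstract credits for aiding interaction.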