
3.4.4 Speech

  To provide a more human interface to the capabilities of VMD, we are developing a (quasi-)natural-language interface. It will be coupled to a robust automatic speech recognition system being developed by Dr. Yunxin Zhao of the University of Illinois, and will also accept input from the mouse and 3D pointing devices (such as the experimental gesture recognition system). This will allow the user to control the most frequently used features of VMD without being tied to the keyboard. Even in the absence of a speech recognition system, the natural language interface will provide an alternative to VMD's numerous forms without requiring the user to learn VMD's sometimes cryptic text command structure. Those interested in the natural language interface may contact Jim Phillips <jim@ks.uiuc.edu> for additional information as it becomes available.



Sergei Izrailev
Fri Jul 25 17:07:27 CDT 1997