Explicit Task Representation based on Gesture Interaction
Mueller-Tomfelde, C. and Paris, C.
This paper describes the role and use of an explicit task representation in applications where humans interact in non-traditional computer environments using gestures. The focus lies on training and assistance applications, where the objective of the training includes implicit knowledge, e.g., motor skills. On the one hand, these applications require a clear and transparent description of what has to be done during the interaction; on the other hand, they are highly interactive and multimodal. The human-computer interaction is therefore modelled top-down as a collaboration in which each participant pursues an individual goal stipulated by a task. In a bottom-up process, gesture recognition determines the user's actions by processing the continuous data streams from the environment. The resulting gesture or action is interpreted as the user's intention and is evaluated within the collaboration, allowing the system to reason about how best to provide guidance at that point. A vertical prototype combining a haptic virtual environment with a knowledge-based reasoning system is discussed, and the evolution of the task-based collaboration is demonstrated.
Cite as: Mueller-Tomfelde, C. and Paris, C. (2005). Explicit Task Representation based on Gesture Interaction. In Proc. NICTA-HCSNet Multimodal User Interaction Workshop, MMUI 2005, Sydney, Australia. CRPIT, 57. Chen, F. and Epps, J., Eds. ACS. 39-45.
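To illustrate the idea sketched in the abstract, namely evaluating a recognized gesture against an explicit task representation and deciding whether to advance the task or offer guidance, the following is a minimal Python sketch. It is not the authors' implementation: the task structure (an ordered list of named steps), the action labels, and the guidance strings are all assumptions made purely for illustration.

```python
# Minimal illustrative sketch (not the paper's system): a task is assumed to be
# an ordered list of steps, and gesture recognition is assumed to yield a
# discrete action label that is interpreted as the user's intention.

from dataclasses import dataclass


@dataclass
class TaskStep:
    """One step of the explicit task representation (all fields hypothetical)."""
    name: str             # e.g. "grasp_tool"
    expected_action: str  # action label the gesture recognizer should produce
    hint: str             # guidance offered when the user deviates


@dataclass
class TaskCollaboration:
    """Tracks progress through the task and evaluates recognized gestures."""
    steps: list[TaskStep]
    current: int = 0

    def evaluate(self, recognized_action: str) -> str:
        """Interpret a recognized gesture as the user's intention and decide
        how to respond: advance the task or provide guidance for this step."""
        if self.current >= len(self.steps):
            return "Task already completed."
        step = self.steps[self.current]
        if recognized_action == step.expected_action:
            self.current += 1
            return f"Step '{step.name}' completed."
        return f"Expected '{step.expected_action}'. Hint: {step.hint}"


# Hypothetical usage: a two-step motor-skill training task.
task = TaskCollaboration(steps=[
    TaskStep("grasp_tool", "grasp", "Close your hand around the tool handle."),
    TaskStep("apply_pressure", "press", "Press down gently and steadily."),
])
print(task.evaluate("wave"))   # deviation -> guidance for the current step
print(task.evaluate("grasp"))  # matches  -> the collaboration advances
```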