For computers to interact intelligently with human users, they should be able to recognize emotions by analyzing the user's affective state, physiology, and behavior. In this paper, we present a survey of research conducted on face and body gesture recognition. To make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior, and facial expression, and interprets these movements affectively. Accordingly, in this paper we present a framework for a vision-based multimodal analyzer that combines face and body gesture, and we further discuss relevant issues.
Cite as: Gunes, H., Piccardi, M. and Jan, T. (2004). Face and Body Gesture Recognition for a Vision-Based Multimodal Analyzer. In Proc. 2003 Pan-Sydney Area Workshop on Visual Information Processing (VIP2003), Sydney, Australia. CRPIT, 36. Piccardi, M., Hintz, T., He, S., Huang, M. L. and Feng, D. D., Eds. ACS. 19-28.