Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-Body
Gunes, H. and Piccardi, M.
A first step in developing and testing a robust affective multimodal system is to obtain or access data representing human multimodal expressive behaviour. Collected affect data then has to be annotated to become usable by automated systems. Most existing studies of emotion or affect annotation are monomodal. In this paper, instead, we explore how independent human observers annotate affect displays from monomodal face data compared with bimodal face-and-body data. To this aim, we collected visual affect data by recording the face and the face-and-body simultaneously. We then conducted a survey in which human observers viewed and labelled the face and face-and-body recordings separately. The results obtained show that, in general, viewing the face and body simultaneously helps resolve ambiguity in annotating emotional behaviour.
Cite as: Gunes, H. and Piccardi, M. (2006). Observer Annotation of Affective Display and Evaluation of Expressivity: Face vs. Face-and-Body. In Proc. HCSNet Workshop on the Use of Vision in Human-Computer Interaction (VisHCI 2006), Canberra, Australia. CRPIT, 56. Goecke, R., Robles-Kelly, A. and Caelli, T., Eds. ACS. 35-42.