Multimodal complex emotions: Gesture expressivity and blended facial expressions
Description
One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex emotions (superposition and masking of emotions) in the facial expressions of the agent. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors. © 2006 World Scientific Publishing Company.
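To make the idea of superposition and masking concrete, the following is a minimal sketch, not the model described in the paper: it assumes facial expressions are represented as sets of named facial animation parameters (the parameter names and values here are hypothetical placeholders), and blends two emotions by assigning facial regions to each, e.g. a felt emotion keeping the upper face while a masking expression takes over the lower face.

```python
# Minimal, hypothetical sketch of blending two emotions by facial region.
# Parameter names and intensities are illustrative placeholders, not the
# paper's actual representation.

# Per-emotion target expressions, as {parameter_name: intensity in [0, 1]}.
EXPRESSIONS = {
    "sadness": {"inner_brow_raise": 0.8, "brow_lower": 0.3, "lip_corner_depress": 0.6},
    "joy":     {"cheek_raise": 0.7, "lip_corner_pull": 0.9},
}

# Hypothetical split of parameters into facial regions.
UPPER_FACE = {"inner_brow_raise", "brow_lower", "cheek_raise"}
LOWER_FACE = {"lip_corner_depress", "lip_corner_pull"}


def mask(felt: str, displayed: str) -> dict:
    """Return a blended expression: the felt emotion keeps its upper-face
    parameters, while the displayed (masking) emotion drives the lower face."""
    blend = {p: v for p, v in EXPRESSIONS[felt].items() if p in UPPER_FACE}
    blend.update({p: v for p, v in EXPRESSIONS[displayed].items() if p in LOWER_FACE})
    return blend


if __name__ == "__main__":
    # Example: sadness masked by a smile -> sad brows combined with a smiling mouth.
    print(mask("sadness", "joy"))
```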
Additional details
- URL: https://hdl.handle.net/11567/1124195
- URN: urn:oai:iris.unige.it:11567/1124195
- Origin repository: UNIGE