Published 2013
| Version v1
Publication
MMLI: Multimodal Multiperson Corpus of Laughter in Interaction
Description
The aim of the Multimodal Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total we collected 500 laugh episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data.
In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and synchronization between different independent sources of data. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of an analysis of nonverbal behavior patterns in laughter.
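The synchronization problem mentioned above typically reduces, once the independent recordings share a common clock reference, to aligning samples across streams by timestamp. The following is a minimal illustrative sketch (hypothetical code, not from the paper; the sampling rates are assumptions) of nearest-timestamp alignment between two streams recorded at different rates:

```python
from bisect import bisect_left

def align_nearest(ref_times, other_times):
    """For each reference timestamp, return the index of the nearest
    timestamp in the other stream (both lists sorted ascending)."""
    idxs = []
    for t in ref_times:
        i = bisect_left(other_times, t)
        if i == 0:
            idxs.append(0)
        elif i == len(other_times):
            idxs.append(len(other_times) - 1)
        else:
            # pick whichever neighbour is closer in time
            before, after = other_times[i - 1], other_times[i]
            idxs.append(i - 1 if t - before <= after - t else i)
    return idxs

# e.g. motion capture at 120 Hz aligned to video frames at 25 Hz (seconds)
video_times = [k / 25 for k in range(5)]
mocap_times = [k / 120 for k in range(25)]
print(align_nearest(video_times, mocap_times))  # → [0, 5, 10, 14, 19]
```

In practice each stream would also carry an offset estimated from a shared synchronization event (e.g., a clap or trigger signal) before this per-sample matching is applied.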
Additional details
Identifiers
- URL
- http://hdl.handle.net/11567/783599
- URN
- urn:oai:iris.unige.it:11567/783599
Origin repository
- Origin repository
- UNIGE