Published 2016 | Version v1
Publication
Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities
Description
In this paper we propose a multimodal approach to distinguishing between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which we apply to compute the amount of synchronization between two low-level features extracted from multimodal data. In more detail, we use the energy of the audio respiration signal, captured by a standard microphone placed near the mouth, and the whole-body kinetic energy estimated from motion capture data. The method was evaluated on 90 movement segments performed by 5 dancers. Results show that fragmented movements display higher average synchronization than fluid and impulsive movements.
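To make the approach concrete, below is a minimal Python sketch of Event Synchronization (Quian Quiroga et al., 2002) applied to two energy time series. The event definition (upward threshold crossings), the threshold values, the window `tau`, and the synthetic signals are illustrative assumptions, not the authors' actual pipeline; this is the simplified pairwise variant of the algorithm, not a definitive implementation of the paper's method.

```python
import numpy as np

def detect_events(signal, threshold):
    """Return sample indices of upward threshold crossings.

    Hypothetical event definition: the abstract does not specify how
    events are extracted from the energy features, so simple upward
    threshold crossings stand in for energy onsets here.
    """
    above = signal >= threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def event_synchronization(events_x, events_y, tau):
    """Pairwise Event Synchronization, normalized to roughly [0, 1].

    Counts quasi-coincident event pairs within a window of tau samples
    and divides by the geometric mean of the two event counts.
    """
    if len(events_x) == 0 or len(events_y) == 0:
        return 0.0
    c_xy = 0.0  # events in x closely preceded by an event in y
    c_yx = 0.0  # events in y closely preceded by an event in x
    for tx in events_x:
        for ty in events_y:
            dt = tx - ty
            if 0 < dt <= tau:
                c_xy += 1.0
            elif -tau <= dt < 0:
                c_yx += 1.0
            elif dt == 0:
                c_xy += 0.5
                c_yx += 0.5
    return (c_xy + c_yx) / np.sqrt(len(events_x) * len(events_y))

# Toy usage with synthetic "energy" series sampled at the same rate;
# real inputs would be the audio respiration energy and the motion
# capture kinetic energy described in the abstract.
rng = np.random.default_rng(0)
audio_energy = rng.random(1000)
kinetic_energy = rng.random(1000)

ev_audio = detect_events(audio_energy, threshold=0.99)
ev_motion = detect_events(kinetic_energy, threshold=0.99)
q = event_synchronization(ev_audio, ev_motion, tau=5)
print(f"synchronization Q = {q:.3f}")
```

In this sketch, higher Q means more events in the two signals fall within `tau` samples of each other, which is the quantity the paper compares across the fluid, fragmented, and impulsive movement classes.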
Additional details
Identifiers
- URL: http://hdl.handle.net/11567/847999
- URN: urn:oai:iris.unige.it:11567/847999
Origin repository
- UNIGE