Characterizing the State of Apathy with Facial Expression and Motion Analysis
- Others:
- Spatio-Temporal Activity Recognition Systems (STARS); Inria Sophia Antipolis - Méditerranée (CRISAM); Institut National de Recherche en Informatique et en Automatique (Inria)
- Cognition Behaviour Technology (CobTek); Université Nice Sophia Antipolis (1965 - 2019) (UNS); COMUE Université Côte d'Azur (2015-2019) (COMUE UCA); Centre Hospitalier Universitaire de Nice (CHU Nice); Institut Claude Pompidou [Nice] (ICP - Nice); Université Côte d'Azur (UCA)
- Centre Mémoire de Ressources et de Recherche [Nice] (CMRR Nice); Université Nice Sophia Antipolis (1965 - 2019) (UNS); COMUE Université Côte d'Azur (2015-2019) (COMUE UCA); Centre Hospitalier Universitaire de Nice (CHU Nice); Université Côte d'Azur (UCA)
- ANR-17-CE39-0002, ENVISION, Automatic holistic analysis of individuals using computer vision techniques (2017)
- ANR-19-P3IA-0002, 3IA@cote d'azur, 3IA Côte d'Azur (2019)
Description
Reduced emotional response, lack of motivation, and limited social interaction are the major symptoms of apathy. Current methods for apathy diagnosis require the patient's presence in a clinic as well as time-consuming clinical interviews and questionnaires involving medical personnel; these are costly and logistically inconvenient for patients and clinical staff, hindering, among other things, large-scale diagnostics. In this paper, we introduce a novel machine learning framework to classify apathetic and non-apathetic patients based on an analysis of facial dynamics, encompassing both emotion and facial movement. Our approach caters to the challenging setting of current apathy assessment interviews, which involve short video clips with wide face pose variations, very low-intensity expressions, and small inter-class variations. We test our algorithm on a dataset of 90 video sequences acquired from 45 subjects and obtain an accuracy of 84% in apathy classification. Through extensive experiments, we show that the fusion of emotion and facial local-motion features produces the best feature set for apathy classification. In addition, we train regression models to predict the clinical scores related to the mini-mental state examination (MMSE) and the neuropsychiatric apathy inventory (NPI) using the motion and emotion features. Our results suggest that performance can be further improved by appending the predicted clinical scores to the video-based feature representation.
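To make the fusion-and-augmentation scheme concrete, below is a minimal, hypothetical Python sketch using scikit-learn on synthetic stand-in features. The feature names, model choices (SVR/SVC), and fold setup are illustrative assumptions, not details taken from the paper; the real system would substitute the extracted per-video emotion and facial local-motion descriptors.

```python
# Hypothetical sketch of feature fusion plus clinical-score augmentation,
# as outlined in the abstract. All arrays below are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
n_videos = 90  # 90 sequences from 45 subjects, per the abstract

# Stand-ins for per-video descriptors (e.g., pooled emotion probabilities
# and local-motion statistics); dimensions are arbitrary assumptions.
emotion_feats = rng.normal(size=(n_videos, 16))
motion_feats = rng.normal(size=(n_videos, 32))
labels = rng.integers(0, 2, size=n_videos)  # apathetic vs. non-apathetic
mmse = rng.uniform(10, 30, size=n_videos)   # synthetic clinical scores
npi = rng.uniform(0, 12, size=n_videos)

# Feature-level fusion: concatenate emotion and motion descriptors.
fused = np.hstack([emotion_feats, motion_feats])

# Regression models for the clinical scores, predicted out-of-fold so
# appending them to the features does not leak label information.
mmse_pred = cross_val_predict(make_pipeline(StandardScaler(), SVR()),
                              fused, mmse, cv=5)
npi_pred = cross_val_predict(make_pipeline(StandardScaler(), SVR()),
                             fused, npi, cv=5)

# Append predicted clinical scores to the video-based representation.
augmented = np.hstack([fused, mmse_pred[:, None], npi_pred[:, None]])

# Classify apathy from the augmented feature set.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, augmented, labels, cv=5).mean()
print(f"cross-validated apathy classification accuracy: {acc:.2f}")
```

Since each of the 45 subjects contributes two videos, a faithful evaluation would split folds by subject (e.g., scikit-learn's GroupKFold) so that the same person never appears in both training and test sets; the plain 5-fold split above is only for brevity.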
Additional details
- URL: https://hal.inria.fr/hal-02379341
- URN: urn:oai:HAL:hal-02379341v1
- Origin repository: UNICA