Use of Speech Analyses within a Mobile Application for the Assessment of Cognitive Impairment in Elderly People
- Contributing institutions:
- Institut National de Recherche en Informatique et en Automatique (Inria)
- IBM Haifa Research Lab (IBM HRL) ; IBM R&D Labs in Israel
- Cognition Behaviour Technology (CobTek) ; Université Nice Sophia Antipolis (UNS) ; COMUE Université Côte d'Azur (COMUE UCA) ; Centre Hospitalier Universitaire de Nice (CHU Nice) ; Institut Claude Pompidou [Nice] (ICP - Nice) ; Université Côte d'Azur (UCA)
- Laboratoire d'Intégration des Systèmes et des Technologies (LIST (CEA)) ; Direction de Recherche Technologique (DRT (CEA)) ; Commissariat à l'énergie atomique et aux énergies alternatives (CEA)
Description
Background: Various types of dementia and Mild Cognitive Impairment (MCI) manifest as irregularities in human speech and language, which have proven to be strong predictors of disease presence and progression. Automatic speech analytics delivered through a mobile application may therefore provide additional indicators for the assessment and detection of early-stage dementia and MCI.

Method: 165 participants (subjects with subjective cognitive impairment (SCI), MCI patients, and Alzheimer's disease (AD) and mixed dementia (MD) patients) were recorded with a mobile application while performing several short vocal cognitive tasks during a regular consultation. These tasks included verbal fluency, picture description, counting down and free speech. The voice recordings were processed in two steps: in the first step, vocal markers were extracted using speech signal processing techniques; in the second, the vocal markers were tested for their power to distinguish between SCI, MCI, AD and MD. The second step included training automatic classifiers for detecting MCI and AD, based on machine learning methods, and testing their detection accuracy.

Results: The fluency and free speech tasks obtained the highest accuracy rates for classifying AD vs. MD vs. MCI vs. SCI. Using the data, we demonstrated the following classification accuracies: SCI vs. AD = 92%; SCI vs. MD = 92%; SCI vs. MCI = 86%; and MCI vs. AD = 86%.

Conclusions: Our results indicate the potential value of vocal analytics and of a mobile application for accurate automatic differentiation between SCI, MCI and AD. This tool can provide the clinician with meaningful information for the assessment and monitoring of people with MCI and AD based on a non-invasive, simple and low-cost method.
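The following is a minimal sketch of the two-step pipeline the abstract describes (marker extraction followed by classifier training), assuming librosa and scikit-learn; the file names, the specific markers and the SVM classifier are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of the two-step pipeline: (1) extract vocal markers from a
# recording, (2) train a classifier to separate two diagnostic groups.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_vocal_markers(wav_path: str) -> np.ndarray:
    """Step 1: summarise one recording as a fixed-length vector of simple vocal markers."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # spectral shape
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)           # pitch contour
    voiced = librosa.effects.split(y, top_db=30)            # voiced segments
    speech_dur = sum(end - start for start, end in voiced) / sr
    pause_ratio = 1.0 - speech_dur / (len(y) / sr)          # crude pause measure
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [f0.mean(), f0.std(), pause_ratio],
    ])

# Step 2: train a binary classifier (e.g. SCI vs. AD).
# 'recordings' and 'labels' (0 = SCI, 1 = AD) are hypothetical stand-ins for the study data.
recordings = ["sci_verbal_fluency.wav", "ad_verbal_fluency.wav"]
labels = [0, 1]
X = np.vstack([extract_vocal_markers(path) for path in recordings])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)  # in practice: cross-validate over the full 165-participant cohort
```

Accuracy figures like those reported above would come from cross-validated evaluation on the full cohort, with one binary classifier per diagnostic pair (SCI vs. AD, SCI vs. MD, SCI vs. MCI, MCI vs. AD).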
Audience
- International
Additional details
- URL
- https://hal.inria.fr/hal-01672580
- URN
- urn:oai:HAL:hal-01672580v1
- Origin repository
- UNICA