Affective State Assistant for Helping Users with Cognition Disabilities Using Neural Networks
Description
Non-verbal communication is essential in the communication process, so its absence can lead to misinterpretations of the message the sender tries to transmit to the receiver. With the rise of video calls, this problem may seem partially solved. However, people with cognitive disorders, such as those with some form of Autism Spectrum Disorder (ASD), are unable to interpret non-verbal communication either in person or over a video call. This work analyzes the relationship between several physiological measures (EEG, ECG, and GSR) and the affective state of the user. To that end, several public datasets are evaluated and used to train a multi-classifier Deep Learning (DL) system. Each physiological signal is pre-processed with a feature extraction step based on a frequency study using the Discrete Wavelet Transform (DWT), and the resulting coefficients are used as inputs to a DL classifier dedicated to that signal. These classifiers (one per signal) are evaluated independently, and their outputs are combined to optimize the results and to reveal which signals are most reliable for classifying affective states into three levels: low, middle, and high. The full system is described in detail and tested, obtaining promising results (over 95% accuracy) that demonstrate its viability.
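The abstract does not specify the wavelet family, decomposition level, extracted statistics, network architecture, or fusion rule, so the following is only a minimal sketch of the described pipeline: per-signal DWT feature extraction followed by independent classifiers whose probability outputs are combined by late (soft-voting) fusion. The choices below ('db4' wavelet, level-4 decomposition, mean/std/energy features, an MLP stand-in for the DL classifiers, and averaged probabilities) are illustrative assumptions, and the data is synthetic.

```python
# Hedged sketch of the pipeline in the abstract: DWT features per signal,
# one classifier per signal (EEG, ECG, GSR), and soft-voting fusion.
# Wavelet, level, features, and model are assumptions, not the paper's spec.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier


def dwt_features(signal, wavelet="db4", level=4):
    """Decompose one 1-D physiological window and summarise each sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, ..., cD1]
    feats = []
    for band in coeffs:
        # Simple per-band statistics; the paper's exact feature set is unknown.
        feats.extend([band.mean(), band.std(), np.sum(band ** 2)])  # energy
    return np.array(feats)


def fuse_probabilities(prob_list, weights=None):
    """Combine per-signal class probabilities (soft voting, optional weights)."""
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs.argmax(axis=1)  # 0 = low, 1 = middle, 2 = high


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_windows, n_samples = 200, 512          # toy windows, not real recordings
    y = rng.integers(0, 3, size=n_windows)   # low / middle / high labels

    all_probs = []
    for name in ("EEG", "ECG", "GSR"):       # one independent classifier per signal
        raw = rng.standard_normal((n_windows, n_samples))
        X = np.array([dwt_features(w) for w in raw])
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                            random_state=0).fit(X, y)
        all_probs.append(clf.predict_proba(X))

    fused = fuse_probabilities(all_probs)
    print("fused predictions (first 10):", fused[:10])
```

Averaging per-signal probabilities is one plausible reading of "their outputs are combined"; per-signal weights in `fuse_probabilities` would be one way to express which signals prove most reliable.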
Additional details
- URL
- https://idus.us.es/handle/11441/104933
- URN
- urn:oai:idus.us.es:11441/104933
- Origin repository
- USE