Published April 10, 2023 | Version v1
Publication

Explainable machine learning for sleep apnea prediction

Description

Machine and deep learning have become some of the most useful diagnosis-decision-support tools in the health area in recent years. However, artificial intelligence models are widely considered black boxes, and most experts have difficulty explaining and interpreting the models and their results. In this context, explainable artificial intelligence is emerging with the aim of giving black-box models enough interpretability that they can be easily understood and further applied. Obstructive sleep apnea is a common chronic respiratory disease related to sleep. It is currently diagnosed by processing different data signals, such as the electrocardiogram or the respiratory rate; the waveform of the respiratory signal is also important, and machine learning models can be applied to its analysis. Data from a polysomnography study for automatic sleep apnea detection have been used to evaluate the Local Interpretable Model-agnostic Explanations (LIME) library for explaining health data models. The results obtained help to understand how several features are used in the model and how they influence sleep quality.
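The core idea behind LIME described above can be illustrated with a minimal sketch: perturb one patient record, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients act as local feature importances. The toy model and the two feature names (respiratory-rate variability and heart-rate variability) are illustrative assumptions, not the model or features from the study.

```python
import numpy as np

# Hypothetical black-box apnea model over two features:
# x0 = respiratory-rate variability, x1 = heart-rate variability.
# This toy decision surface is an assumption for illustration only.
def black_box_predict(X):
    # Nonlinear surface the local surrogate will approximate.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2)))

def lime_style_explanation(instance, predict, n_samples=5000,
                           kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around `instance`,
    mirroring the local-surrogate idea behind LIME for tabular data."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict(X)
    # 2. Weight samples by proximity to the instance (RBF kernel).
    d2 = np.sum((X - instance) ** 2, axis=1)
    w = np.exp(-d2 / (kernel_width ** 2))
    # 3. Weighted least squares for the local linear coefficients.
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

instance = np.array([0.8, 0.3])
weights = lime_style_explanation(instance, black_box_predict)
print(dict(zip(["resp_rate_var", "hrv"], weights.round(3))))
```

In the actual study the `lime` Python package plays this role; the sketch only makes explicit the perturb-weight-fit loop that produces the per-feature explanations discussed in the description.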

Notes

Part of a special issue dedicated to the 26th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES 2022)

Funding

Ministerio de Ciencia e Innovación PID2020-117954RB-C21
Junta de Andalucía PY20-00870
Junta de Andalucía UPO-138516

Additional details

Created:
April 14, 2023
Modified:
December 1, 2023