Deep learning for eye fundus diagnosis based on multispectral imaging
- Others:
- E-Patient : Images, données & mOdèles pour la médeciNe numériquE (EPIONE) ; Inria Sophia Antipolis - Méditerranée (CRISAM) ; Institut National de Recherche en Informatique et en Automatique (Inria)
- Universitat Politècnica de Catalunya [Barcelona]
- IHU-LIRYC ; Université Bordeaux Segalen - Bordeaux 2-CHU Bordeaux [Bordeaux]
Description
Purpose: A new deep-learning based method for automatic eye fundus diagnosis using multispectral images is proposed. The method discriminates between healthy and diseased eyes exploiting the potential of multispectral data. Among other pathologies, those mainly considered were age-related macular degeneration (ARMD), glaucoma and diabetic retinopathy as the leading causes of vision loss affecting the retina. Methods: We analyzed 68 healthy and 68 diseased eyes from 89 subjects, 63% females and 37% males (19-95 years); only patients with retinal and/or choroidal pathologies were included. For each eye, 15 images from 400 nm to 1300 nm were acquired with a novel multispectral fundus camera. The deep learning network was adapted from that developed by Ly et al. (Ly B. et al. Lect. Notes Comput. Sc., vol. 12738, 2021) for sustained ventricular arrhythmia prediction, which involves a conditional variational autoencoder (CVAE) and a classifier model. The low dimensional features generated by the encoder are the inputs for the classifier and the decoder, which reconstructs the original sequence of 15 spectral images. These features contain information of healthy and diseased structures such as drusen, scars, edemas and neovascularization. The error between the encoder-decoder outputs is used to improve the performance of the network. The dataset was divided in training/validation (80% data) and test (20% data) datasets. Results: The multispectral images offered very relevant information of healthy (Fig. 1 left) and diseased (Fig. 1 right) eye fundus structures to be used as input data for the proposed algorithm. The CVAE ran for 85 epochs leading to a classification accuracy of 96.43%, a loss of 0.20, a sensitivity of 92.86% and a specificity of 100.00% when discriminating between healthy and diseased fundus of the test dataset. 
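The training signal described above (encoder features shared by a decoder and a classifier, with the reconstruction error regularizing the representation) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensions, the linear stand-in networks, and the loss weights (`lambda_kl`) are all hypothetical, chosen only to show how the three loss terms combine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 15 spectral bands (400-1300 nm) flattened into one
# vector per eye, a low-dimensional latent space, a binary label.
n_bands, h, w, latent_dim = 15, 8, 8, 4     # toy sizes for illustration
x = rng.normal(size=(n_bands * h * w,))     # one multispectral "eye" sample
y = 1                                       # 1 = diseased, 0 = healthy

# Toy linear encoder/decoder/classifier weights (stand-ins for trained networks).
W_enc = rng.normal(scale=0.01, size=(latent_dim, x.size))
W_dec = rng.normal(scale=0.01, size=(x.size, latent_dim))
w_clf = rng.normal(scale=0.01, size=(latent_dim,))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Encoder: low-dimensional features feed both the decoder and the classifier.
mu = W_enc @ x                      # latent mean (variance omitted in this sketch)
x_hat = W_dec @ mu                  # decoder reconstructs the 15-band sequence
p_diseased = sigmoid(w_clf @ mu)    # classifier acts on the same latent features

# Composite objective: the reconstruction error shapes the latent space the
# classifier relies on; lambda_kl is a hypothetical weighting.
lambda_kl = 0.1
recon_loss = np.mean((x - x_hat) ** 2)                  # encoder-decoder error
kl_loss = 0.5 * np.sum(mu ** 2)                         # KL vs. unit Gaussian (variance fixed)
clf_loss = -(y * np.log(p_diseased) + (1 - y) * np.log(1.0 - p_diseased))
total_loss = recon_loss + lambda_kl * kl_loss + clf_loss
```

In an actual CVAE the encoder would also output a log-variance and the latent code would be sampled via the reparameterization trick; the sketch keeps only the mean to highlight the shared-feature design, where one latent vector must simultaneously support reconstruction and classification.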
Conclusions: The proposed CVAE for the automatic classification of healthy and diseased eyes from multispectral eye fundus images produced an excellent outcome, highlighting the power of an encoder-decoder network and the significant information retrieved from multispectral images in the visible and near-infrared ranges, including wavelengths beyond 900 nm, a relatively unexplored range. Future work will focus on differentiating among pathologies by means of approaches such as attention maps, which help identify abnormal structures.
Abstract
International audience
Additional details
- URL
- https://hal.inria.fr/hal-03695867
- URN
- urn:oai:HAL:hal-03695867v1
- Origin repository
- UNICA