Published July 19, 2024 | Version v1
Publication

Applicability of Deep Learning to Dynamically Identify the Different Organs of the Pelvic Floor in the Midsagittal Plane

Description

Introduction and Hypothesis

The objective was to create a convolutional neural network (CNN) for identifying the different organs of the pelvic floor in the midsagittal plane on dynamic ultrasound and to validate its usefulness.

Methods

This prospective observational study included 110 patients. Transperineal ultrasound scans were performed by an expert pelvic floor sonographer. For each patient, a video was recorded capturing the midsagittal plane of the pelvic floor at rest and the change in the pelvic structures during the Valsalva maneuver. After the captured videos were saved, the different organs in each video were manually labeled. Three architectures were tested (UNet, FPN, and LinkNet) to determine which CNN model best recognized the anatomical structures. The best model was trained on 86 cases for the number of epochs determined by a stopping criterion via cross-validation. The Dice Similarity Index (DSI) was used for CNN validation.

Results

Eighty-six patients were included to train the CNN and 24 to test it. After applying the trained CNN to the 24 test videos, we observed no failed segmentations and obtained a median DSI of 0.79 (95% CI: 0.73–0.82) across the 24 test videos. When the organs were studied independently, the DSI differed by organ: the lowest DSIs were obtained for the bladder (0.71 [95% CI: 0.70–0.73]) and uterus (0.70 [95% CI: 0.68–0.74]), whereas the highest DSIs were obtained for the anus (0.81 [95% CI: 0.80–0.86]) and levator ani muscle (0.83 [95% CI: 0.82–0.83]).

Conclusions

Our results show that it is possible to apply deep learning, using a trained CNN, to identify the different pelvic floor organs in the midsagittal plane on dynamic ultrasound.
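The validation metric reported above, the Dice Similarity Index, measures the overlap between a predicted segmentation mask and the manually labeled ground truth. As a minimal sketch (not the authors' implementation), it can be computed over flattened binary masks like this; the function name and toy masks are illustrative:

```python
def dice_similarity(mask_a, mask_b):
    """Dice Similarity Index between two binary masks (flat sequences of 0/1).

    DSI = 2 * |A intersect B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (perfect overlap). Two empty masks are treated as a perfect match.
    """
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of pixels")
    # count pixels labeled foreground in both masks
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# toy example: 3x3 prediction vs. ground-truth masks, flattened row-major
pred  = [1, 1, 0, 1, 0, 0, 0, 0, 0]
truth = [1, 0, 0, 1, 1, 0, 0, 0, 0]
print(round(dice_similarity(pred, truth), 2))  # -> 0.67
```

In practice a per-organ DSI, as reported in the Results, would be computed by building one binary mask per labeled organ and averaging the score over all frames of a test video.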

Additional details

Created:
July 20, 2024
Modified:
July 20, 2024