Published August 26, 2024 | Version v1
Conference paper

Towards autonomous robotic structure inspection with dense-direct visual-SLAM

Description

We present a comprehensive framework based on direct Visual Simultaneous Localization and Mapping (V-SLAM) for observing a vertical coastal cliff. Precise positioning of data measurements (such as ground-penetrating radar) is crucial for environmental observation; however, near large structures the GPS signal can be severely disrupted or even unavailable. To address this challenge, we focus on accurate drone localization using vision sensors and SLAM systems. Traditional SLAM approaches may lack robustness and precision, particularly when cameras lose perspective cues close to structures.

We propose a new framework that combines feature-based and direct methods to improve localization precision and robustness. The system operates in two phases: first, a SLAM phase uses a stereo camera to reconstruct the environment from a distance sufficient to benefit from a wide field of view; second, a localization phase employs a monocular camera. Experiments in realistic simulated environments demonstrate that the system localizes the drone to within 15 cm, surpassing existing state-of-the-art approaches.
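At the core of the direct methods mentioned above is image alignment by minimizing a photometric error rather than matching sparse features. The following is a minimal illustrative sketch of that residual, not the paper's implementation; the function name, nearest-neighbour lookup, and toy data are assumptions for demonstration (real systems interpolate sub-pixel intensities and optimize over camera pose).

```python
import numpy as np

def photometric_residual(ref, tgt, coords, warped_coords):
    """Sum of squared intensity differences between reference pixels
    and their warped locations in the target image.

    coords / warped_coords: (N, 2) integer arrays of (x, y) pixels.
    Nearest-neighbour lookup is used here for simplicity; direct SLAM
    systems typically use bilinear interpolation at sub-pixel warps.
    """
    r = ref[coords[:, 1], coords[:, 0]].astype(float)
    t = tgt[warped_coords[:, 1], warped_coords[:, 0]].astype(float)
    return float(np.sum((r - t) ** 2))

# Toy example: identical images under an identity warp give zero error,
# while a misaligned warp yields a positive residual to be minimized.
img = np.arange(25, dtype=float).reshape(5, 5)
pts = np.array([[1, 1], [2, 3], [3, 2]])
shifted = pts + np.array([1, 0])  # warp offset by one pixel in x
print(photometric_residual(img, img, pts, pts))      # 0.0
print(photometric_residual(img, img, pts, shifted))  # positive
```

In a full direct V-SLAM pipeline, this residual is evaluated over many pixels and minimized with respect to the camera pose, which is what allows tracking to continue even when few distinctive features are visible near a structure.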

Audience

International

Additional details

Created:
September 10, 2024
Modified:
September 10, 2024