Published October 17, 2022 | Version v1
Conference paper

OMNI-DRL: Learning to Fly in Forests with Omnidirectional Images

Description

Perception is crucial for drone obstacle avoidance in complex, static, and unstructured outdoor environments. However, most navigation solutions based on Deep Reinforcement Learning (DRL) use limited Field-Of-View (FOV) images as input. In this paper, we demonstrate that omnidirectional images improve these methods. To this end, we provide a comparative benchmark of several visual modalities for navigation: ground-truth depth, ground-truth semantic segmentation, and RGB images. These exhaustive comparisons reveal that using an omnidirectional camera is superior for navigation with classical DRL methods. Finally, we show, in two different virtual forest environments, that adapting the convolution to account for spherical distortions improves the results even further.
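The paper's exact distortion-aware layer is not reproduced here. As a rough illustration of the idea, the sketch below implements a common SphereNet-style adaptation of a 3x3 convolution for equirectangular (omnidirectional) images: the horizontal sampling offsets of each kernel tap are widened by 1/cos(latitude), so the receptive field stays roughly uniform on the sphere. The class name `EquirectConv2d` and all design choices are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquirectConv2d(nn.Module):
    """Distortion-aware 3x3 convolution for equirectangular images.

    Generic SphereNet-style sketch, NOT the paper's actual layer:
    each kernel tap is resampled with a latitude-dependent horizontal
    offset, then combined with a 1x1 convolution per tap.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        n, c, h, w = x.shape
        device = x.device
        # Latitude of each image row, in (-pi/2, pi/2).
        lat = (torch.arange(h, device=device) + 0.5) / h * torch.pi - torch.pi / 2
        # Horizontal stretch factor, clamped to keep offsets bounded near the poles.
        stretch = 1.0 / lat.cos().clamp(min=0.05)
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, h, device=device)
        xs = torch.linspace(-1, 1, w, device=device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        out = x.new_zeros(n, self.weight.shape[0], h, w)
        for i, dy in enumerate((-1, 0, 1)):            # kernel rows
            for j, dx in enumerate((-1, 0, 1)):        # kernel columns
                # Horizontal step widened by the latitude-dependent stretch.
                off_x = dx * (2.0 / w) * stretch.view(h, 1)
                off_y = dy * (2.0 / h)
                grid = torch.stack((gx + off_x, gy + off_y), dim=-1)
                grid = grid.unsqueeze(0).expand(n, -1, -1, -1)
                # Bilinear resampling; border padding is a simplification,
                # a true equirectangular layer would wrap horizontally.
                tap = F.grid_sample(x, grid, mode="bilinear",
                                    padding_mode="border", align_corners=False)
                # Apply this tap's weights as a 1x1 convolution.
                out = out + F.conv2d(tap, self.weight[:, :, i, j, None, None])
        return out + self.bias.view(1, -1, 1, 1)

# Usage on a dummy equirectangular feature map (batch 1, 3 channels, 64x128):
y = EquirectConv2d(3, 16)(torch.randn(1, 3, 64, 128))   # -> (1, 16, 64, 128)
```

Such a layer can replace the standard convolutions in a DRL policy's visual encoder; the rest of the training pipeline is unchanged.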

Additional details

Identifiers

URL
https://hal.archives-ouvertes.fr/hal-03777700
URN
urn:oai:HAL:hal-03777700v1

Origin repository

UNICA