HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness
- Others:
- Imagerie et Vision Artificielle [Dijon] (ImViA) ; Université de Bourgogne (UB)
- Signal, Images et Systèmes (Laboratoire I3S - SIS) ; Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS) ; COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)
- Institut de Chimie Moléculaire de l'Université de Bourgogne [Dijon] (ICMUB) ; Université de Bourgogne (UB)-Institut de Chimie du CNRS (INC)-Centre National de la Recherche Scientifique (CNRS)
- Shanghai Jiao Tong University [Shanghai]
- French Conseil Régional de Bourgogne-Franche-Comté
- ANR-18-CE33-0004, CLARA, Couplage Apprentissage et Vision pour Contrôle de Robots Aériens (2018)
- ANR-15-IDEX-0003, BFC, ISITE-BFC (2015)
Abstract
RGB-D saliency detection aims to fuse multi-modal cues to accurately localize salient regions. Existing works often adopt attention modules for feature modeling, but few methods explicitly leverage fine-grained details to merge with semantic cues. Thus, despite the auxiliary depth information, it remains challenging for existing models to distinguish objects with similar appearances but at distinct camera distances. In this paper, we propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D saliency detection from a new perspective. Our motivation comes from the observation that the multi-granularity properties of geometric priors correlate well with the hierarchies of a neural network. To realize multi-modal and multi-level fusion, we first use a granularity-based attention scheme to strengthen the discriminatory power of RGB and depth features separately. Then we introduce a unified cross dual-attention module for multi-modal and multi-level fusion in a coarse-to-fine manner. The encoded multi-modal features are gradually aggregated into a shared decoder. Further, we exploit a multi-scale loss to take full advantage of the hierarchical information. Extensive experiments on challenging benchmark datasets demonstrate that HiDAnet outperforms state-of-the-art methods by large margins. The source code can be found at https://github.com/Zongwei97/HIDANet/.
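The multi-scale loss is not detailed in this record; a common instantiation in salient object detection supervises each decoder stage's side-output prediction against a ground truth resized to that stage's resolution. Below is a minimal PyTorch sketch under that assumption; the BCE + soft-IoU combination and the `side_outputs` interface are illustrative choices, not confirmed details of HiDAnet.

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(side_outputs, gt):
    """Sum a BCE + soft-IoU loss over side-output logits at every scale.

    side_outputs: list of logit maps [B, 1, Hi, Wi], coarse to fine
                  (hypothetical interface, one map per decoder stage).
    gt: ground-truth saliency mask [B, 1, H, W] with values in {0, 1}.
    """
    total = torch.zeros((), device=gt.device)
    for logits in side_outputs:
        # Resize the ground truth to the resolution of this side output.
        gt_s = F.interpolate(gt, size=logits.shape[-2:], mode="bilinear",
                             align_corners=False)
        bce = F.binary_cross_entropy_with_logits(logits, gt_s)
        # Smoothed soft IoU term, computed per image then averaged.
        pred = torch.sigmoid(logits)
        inter = (pred * gt_s).sum(dim=(2, 3))
        union = (pred + gt_s - pred * gt_s).sum(dim=(2, 3))
        iou = 1.0 - (inter + 1.0) / (union + 1.0)
        total = total + bce + iou.mean()
    return total
```

For example, with side outputs at 32, 64, 128, and 256 pixels, `multi_scale_loss([torch.randn(2, 1, s, s) for s in (32, 64, 128, 256)], torch.randint(0, 2, (2, 1, 256, 256)).float())` returns a single scalar that back-propagates through every stage, which is what lets the hierarchical features receive direct supervision.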
Audience
- International
Additional details
- URL
- https://cnrs.hal.science/hal-04045138
- URN
- urn:oai:HAL:hal-04045138v1
- Origin repository
- UNICA