Published 2024 | Version v1
Publication

Grounding Conversational Robots on Vision Through Dense Captioning and Large Language Models

Description

This work explores a novel approach to empowering robots with visual perception capabilities through textual descriptions. Our approach integrates GPT-4 with dense captioning, enabling robots to perceive and interpret the visual world through detailed text-based descriptions. To assess both user experience and the technical feasibility of this approach, experiments were conducted with human participants interacting with a Pepper robot equipped with visual capabilities. The results affirm the viability of the proposed approach, which enables effective vision-based conversations despite processing-time limitations.
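
The described pipeline can be summarized in a short illustrative sketch: a dense-captioning model turns the robot's camera frame into region-level text, which is then supplied to GPT-4 as conversational context. This is a minimal sketch, not the authors' implementation; dense_caption is a hypothetical placeholder for any dense-captioning model, and the GPT-4 call uses the public OpenAI chat-completions API.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dense_caption(image_path: str) -> list[str]:
    """Hypothetical stand-in for a dense-captioning model that returns
    one short description per detected region of the image."""
    raise NotImplementedError("plug in a dense-captioning model here")

def vision_grounded_reply(image_path: str, user_utterance: str) -> str:
    # Turn the robot's camera frame into region-level text descriptions.
    captions = dense_caption(image_path)
    scene = "; ".join(captions)
    # Ground the LLM's reply on the scene description rather than raw pixels.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a conversational robot. The scene in front "
                        "of you, described region by region: " + scene},
            {"role": "user", "content": user_utterance},
        ],
    )
    return response.choices[0].message.content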

Additional details

Identifiers

URL
https://hdl.handle.net/11567/1214037
URN
urn:oai:iris.unige.it:11567/1214037

Origin repository
UNIGE