Hardware-Aware Affordance Detection for Application in Portable Embedded Systems
Description
Affordance detection in computer vision segments an object into parts according to the functions those parts afford. Most affordance detection solutions are developed for robotics using deep learning architectures that require substantial computing power, which makes them unsuitable for embedded systems with limited resources. For instance, computer vision is used in smart prosthetic limbs, where affordance detection could be employed to determine the graspable segments of an object, critical information for selecting a grasping strategy. This work proposes an affordance detection strategy based on hardware-aware deep learning solutions. Experimental results confirmed that the proposed solution achieves accuracy comparable to state-of-the-art approaches. In addition, the model was implemented on real-time embedded devices, obtaining a high frame rate with limited power consumption. Finally, the experimental assessment in realistic conditions demonstrated that the developed method is robust and reliable. As a major outcome, the paper proposes and characterizes the first complete solution for affordance detection on embedded devices. Such a solution could substantially improve computer vision-based prosthesis control, and it is also highly relevant for other applications (e.g., resource-constrained robotic systems).
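As an illustration only (not the authors' implementation), the sketch below shows how a lightweight, exported affordance segmentation model might be run on an embedded CPU with ONNX Runtime and how a graspable-region mask could be extracted from the per-pixel labels. The model file name, input layout, and affordance label set are assumptions made for the example.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical affordance label set; the paper's actual classes may differ.
AFFORDANCE_CLASSES = {0: "background", 1: "grasp", 2: "cut", 3: "contain"}

def load_session(model_path: str) -> ort.InferenceSession:
    """Load a lightweight (e.g., quantized) affordance segmentation model on CPU."""
    return ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

def detect_affordances(session: ort.InferenceSession, rgb: np.ndarray) -> np.ndarray:
    """Return an HxW per-pixel affordance label map for an HxWx3 uint8 image,
    assuming the model takes a normalized 1x3xHxW float input and outputs 1xCxHxW logits."""
    x = rgb.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[None, ...]            # 1x3xHxW
    input_name = session.get_inputs()[0].name
    logits = session.run(None, {input_name: x})[0]       # 1xCxHxW
    return np.argmax(logits[0], axis=0)                  # HxW label map

def graspable_mask(labels: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels predicted as graspable, e.g., to drive grasp selection."""
    return labels == 1

if __name__ == "__main__":
    sess = load_session("affordance_model.onnx")          # hypothetical file name
    frame = np.zeros((256, 256, 3), dtype=np.uint8)       # stand-in for a camera frame
    labels = detect_affordances(sess, frame)
    print("graspable pixels:", int(graspable_mask(labels).sum()))
```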
Additional details
- URL: https://hdl.handle.net/11567/1140075
- URN: urn:oai:iris.unige.it:11567/1140075
- Origin repository: UNIGE