Efficient Memory Organization for DNN Hardware Accelerator Implementation on PSoC
Description
The use of deep learning solutions in different disciplines is increasing, and their algorithms are computationally expensive in most cases. For this reason, numerous hardware accelerators have appeared to compute their operations efficiently in parallel, achieving higher performance and lower latency. These algorithms need large amounts of data to feed each of their computing layers, which makes it necessary to handle efficiently the data transfers that feed the accelerators and collect their results. To implement these accelerators, hybrid devices are widely used: they combine an embedded computer, on which an operating system can run, with a field-programmable gate array (FPGA), on which the accelerator can be deployed. In this work, we present a software API that organizes the memory efficiently, avoiding the need to copy data from one memory area to another; it improves on the native Linux driver with an 85% speed-up and reduces the frame computation time by 28% in a real application.
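To make the zero-copy idea concrete, the following C sketch (not taken from the paper) shows one common way to avoid the extra copy on such a PSoC: a kernel driver exposes a physically contiguous, DMA-capable buffer through a character device, and the application fills that buffer in place instead of copying data into a separate DMA staging area. The device node /dev/accel_buf0, the buffer size, and the surrounding control flow are hypothetical assumptions for illustration only.

```c
/* Minimal sketch (hypothetical, not the authors' API): zero-copy input
 * staging for an FPGA accelerator on a PSoC.  Assumes a driver that
 * exposes a physically contiguous DMA buffer via a character device. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (1024 * 1024)   /* size of the shared DMA buffer (example) */

int main(void)
{
    int fd = open("/dev/accel_buf0", O_RDWR | O_SYNC);   /* hypothetical node */
    if (fd < 0) { perror("open"); return 1; }

    /* Map the contiguous buffer into user space: the CPU and the
     * accelerator's DMA engine now see the same physical memory, so
     * writing the input frame here is the only copy that happens. */
    uint8_t *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Produce one input frame directly in the shared buffer. */
    memset(buf, 0, BUF_SIZE);
    /* ... fill buf with the layer's input data, then trigger the
     * accelerator (e.g. through an ioctl or a memory-mapped control
     * register, omitted here) and wait for completion ... */

    munmap(buf, BUF_SIZE);
    close(fd);
    return 0;
}
```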
Funding
Spanish Agencia Estatal de Investigación (AEI) project MINDROB: "Percepción y Cognición Neuromórfica para Actuación Robótica de Alta Velocidad" (PID2019-105556GB-C33, AEI/10.13039/501100011033)
Additional details
- URL
- https://idus.us.es/handle/11441/112503
- URN
- urn:oai:idus.us.es:11441/112503
- Origin repository
- USE