Published January 13, 2020 | Version v1
Publication

Neuromorphic LIF Row-by-Row Multiconvolution Processor for FPGA

Description

Deep Learning algorithms have become state-of-the-art methods in multiple fields, including computer vision, speech recognition, natural language processing, and audio recognition, among others. In computer vision, convolutional neural networks (CNNs) stand out. This kind of network is expensive in terms of computational resources due to the large number of operations required to process a frame. In recent years, several frame-based chip solutions to deploy CNNs in real time have been developed. Despite the good results in power and accuracy given by these solutions, the number of operations is still high, due to the complexity of current network models. However, it is possible to reduce the number of operations using computer vision techniques other than frame-based ones, e.g., neuromorphic event-based techniques. There exist several neuromorphic vision sensors whose pixels detect changes in luminosity. Inspired by the leaky integrate-and-fire (LIF) neuron, we propose in this manuscript an event-based field-programmable gate array (FPGA) multiconvolution system. Its main novelty is the combination of a memory arbiter for efficient memory access with row-by-row kernel processing. This system is able to convolve 64 filters across multiple kernel sizes, from 1 × 1 to 7 × 7, with latencies of 1.3 μs and 9.01 μs, respectively, generating a continuous flow of output events. The proposed architecture is well suited to spike-based CNNs.
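To illustrate the event-based LIF convolution behaviour the description refers to, the following is a minimal software sketch of how one incoming pixel event could be integrated into a grid of LIF neurons and produce output events. It is not the paper's FPGA design; the kernel, threshold, and leak values are illustrative assumptions only.

```python
import numpy as np

class LIFConvolver:
    """Sketch of an event-driven LIF convolution for a single filter.
    Parameters (threshold, leak) are hypothetical, not taken from the paper."""

    def __init__(self, height, width, kernel, threshold=1.0, leak=0.01):
        self.kernel = kernel                    # e.g. a 7x7 weight matrix
        self.vmem = np.zeros((height, width))   # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def process_event(self, x, y, polarity=1):
        """Integrate one input event and return any output events (spikes)."""
        kh, kw = self.kernel.shape
        out_events = []
        # Spread the kernel contribution around the event address.
        for dy in range(kh):
            for dx in range(kw):
                ty, tx = y + dy - kh // 2, x + dx - kw // 2
                if 0 <= ty < self.vmem.shape[0] and 0 <= tx < self.vmem.shape[1]:
                    self.vmem[ty, tx] += polarity * self.kernel[dy, dx]
                    # Fire and reset when the membrane potential crosses threshold.
                    if self.vmem[ty, tx] >= self.threshold:
                        out_events.append((tx, ty))
                        self.vmem[ty, tx] = 0.0
        return out_events

    def apply_leak(self):
        """Periodic leak step, pulling all potentials toward the resting value."""
        self.vmem = np.clip(self.vmem - self.leak, 0.0, None)


# Usage sketch: feed a stream of (x, y, polarity) events through one filter.
conv = LIFConvolver(128, 128, kernel=np.full((7, 7), 0.2))
for ev in [(10, 10, 1), (11, 10, 1), (10, 11, 1)]:
    print(conv.process_event(*ev))
```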

Funding

Ministerio de Economía y Competitividad TEC2016-77785-P

Additional details

Created:
December 5, 2022
Modified:
November 29, 2023