Trainable quantization for Speedy Spiking Neural Networks
- Creators
- Castagnetti, Andrea
- Others:
- Laboratoire d'Electronique, Antennes et Télécommunications (LEAT) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS) ; COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)
- Institut de Neurosciences de la Timone
Description
Spiking neural networks (SNNs) are considered the third generation of artificial neural networks. SNNs perform computation using neurons and synapses that communicate through binary, asynchronous signals known as spikes. They have attracted significant research interest in recent years, since their computing paradigm theoretically allows sparse, low-power operation. This hypothetical gain, invoked since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competitive with those of classical deep learning, the lack of mature learning frameworks, and a high data-processing latency that ultimately generates an energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This presentation focuses on quantization error, one of the main consequences of the discrete representation of information in SNNs. We propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neuron model.
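To make the latency/quantization trade-off concrete, here is a minimal, illustrative sketch (not the model proposed in this work): with rate coding, an integrate-and-fire neuron running for T timesteps can emit at most T spikes, so it represents a real-valued activation with a resolution of roughly 1/T. Reducing latency (smaller T) therefore directly increases quantization noise. The soft-reset neuron, the uniform input distribution, and all names below are assumptions made for illustration only.

```python
import numpy as np

def if_neuron_spike_count(x, T):
    """Integrate-and-fire neuron driven by a constant input x for T steps.

    Illustrative sketch: the spike count divided by T is the quantized
    approximation of x that downstream neurons effectively observe.
    """
    v, count = 0.0, 0
    for _ in range(T):
        v += x                # integrate the constant input current
        if v >= 1.0:          # fire when the membrane potential crosses threshold
            count += 1
            v -= 1.0          # soft reset: keep the residual charge
    return count

# Measure how the quantization error shrinks as the latency budget T grows.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=10_000)
for T in (4, 16, 64, 256):
    err = np.mean([(x - if_neuron_spike_count(x, T) / T) ** 2 for x in xs])
    print(f"T = {T:4d}  mean squared quantization error = {err:.2e}")
```

Under these assumptions the mean squared error falls roughly as 1/(3T^2), which is exactly why short-latency SNNs must confront quantization noise head-on rather than simply running for more timesteps.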
Audience
International
Additional details
- URL
- https://hal.science/hal-04047959
- URN
- urn:oai:HAL:hal-04047959v1
- Origin repository
- UNICA