Published June 13, 2022 | Version v1
Publication

Neuromorphic Event-Based Spatio-temporal Attention using Adaptive Mechanisms

Description

In contrast to RGB cameras, Dynamic Vision Sensors (DVS) output visual data as an asynchronous event stream by recording pixel-wise luminance changes at microsecond resolution. While conventional computer vision approaches rely on frame-based input data and thus fail to take full advantage of this high temporal resolution, novel approaches use Spiking Neural Networks (SNNs), which are better suited to handling event-based data since these bio-inspired neural models intrinsically encode information in a sparse manner using spike trains. This paper presents an attentional mechanism that detects regions with higher event density by using inherent SNN dynamics combined with online weight and threshold adaptation. We implemented the network directly on Intel's research neuromorphic chip Loihi and evaluated our proposed method on the open DVS128 Gesture Dataset. Our system is able to process 1 ms of event data in 6 ms and to reject more than 50% of incoming unwanted events starting only 20 ms after activity onset.
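
As a rough illustration of the kind of dynamics the description refers to, the following Python sketch shows a single leaky integrate-and-fire neuron with an adaptive firing threshold that responds to local event density. It is not the paper's implementation or its Loihi code; the function, parameter names, and values are placeholders chosen only to make the idea concrete.

# Illustrative sketch (not the paper's implementation): a leaky integrate-and-fire
# neuron with an adaptive threshold. The neuron pools DVS events from a small
# spatial region; dense event bursts drive the membrane potential above threshold
# and emit attention spikes, while the threshold rises after each spike so that
# sparse background activity is rejected.
import numpy as np

def adaptive_lif_attention(event_counts, v_decay=0.9, th_decay=0.95,
                           th_base=2.0, th_step=0.5):
    """Run an adaptive-threshold LIF neuron over per-region event counts
    (one value per time bin) and return the output spike train.
    All parameter values are placeholders, not taken from the paper."""
    v, th = 0.0, th_base
    spikes = []
    for x in event_counts:
        v = v_decay * v + x          # leaky integration of incoming events
        if v >= th:
            spikes.append(1)
            v = 0.0                  # reset membrane potential after a spike
            th += th_step            # raise threshold (adaptation)
        else:
            spikes.append(0)
        th = th_base + th_decay * (th - th_base)  # threshold relaxes back
    return np.array(spikes)

# Example: a dense burst of events stands out against sparse background noise.
counts = np.array([0, 1, 0, 0, 5, 6, 7, 1, 0, 0])
print(adaptive_lif_attention(counts))  # spikes only during the burst

In this toy example the neuron stays silent for isolated background events and fires only during the dense burst, which is the intuition behind using adaptive thresholds to suppress unwanted events.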

Funding

This work was supported by the European Union's ERA-NET CHIST-ERA 2018 research and innovation programme under grant agreement ANR-19-CHR3-0008.

Audience

International audience

Additional details

Created:
February 22, 2023
Modified:
November 30, 2023