Published September 28, 2020
| Version v1
Publication
A 32 x 32 Pixel Convolution Processor Chip for Address Event Vision Sensors With 155 ns Event Latency and 20 Meps Throughput
Description
This paper describes a convolution chip for
event-driven vision sensing and processing systems. As opposed
to conventional frame-constrained vision systems, in event-driven
vision there is no need for frames. In frame-free event-based vision,
information is represented by a continuous flow of self-timed
asynchronous events. Such events can be processed on the fly by
event-based convolution chips, providing at their output a continuous
event flow representing the 2-D filtered version of the input
flow. In this paper we present a 32 × 32 pixel 2-D convolution
event processor whose kernel can have arbitrary shape and size
up to 32 × 32. Arrays of such chips can be assembled to process
larger pixel arrays. Event latency between input and output event
flows can be as low as 155 ns. Input event throughput can reach
20 Meps (mega events per second), and output peak event rate
can reach 45 Meps. The chip can be configured to discriminate
between two simulated propeller-like shapes rotating simultaneously
in the field of view at a speed as high as 9400 rps (revolutions
per second). Achieving this with a frame-constrained system would
require a sensing and processing capability of about 100 K frames
per second. The prototype chip has been built in 0.35 μm CMOS
technology, occupies 4.3 × 5.4 mm², and consumes a peak power
of 200 mW at maximum kernel size and maximum input event rate.
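The abstract describes event-driven convolution: each incoming address event updates an array of accumulators with the kernel, and output events are emitted as thresholds are crossed. The following is a minimal software sketch of that principle, assuming integrate-and-fire pixel behavior with reset on firing; the function name, threshold scheme, and event tuple format `(x, y, t)` are illustrative assumptions, not the chip's actual circuit-level operation.

```python
import numpy as np

def event_convolution(events, kernel, shape=(32, 32), threshold=1.0):
    """Sketch of event-driven 2-D convolution: each address event adds
    the kernel, centered on its (x, y) address, to an array of
    integrate-and-fire accumulators; any accumulator reaching the
    threshold emits an output event and resets."""
    state = np.zeros(shape)
    kh, kw = kernel.shape
    out_events = []
    for (x, y, t) in events:  # address events: column, row, timestamp
        x0, y0 = x - kw // 2, y - kh // 2  # top-left of kernel footprint
        for i in range(kh):
            for j in range(kw):
                px, py = x0 + j, y0 + i
                if 0 <= px < shape[1] and 0 <= py < shape[0]:
                    state[py, px] += kernel[i, j]
                    if state[py, px] >= threshold:
                        out_events.append((px, py, t))
                        state[py, px] = 0.0  # reset after firing
    return out_events

# Two coincident events at the same address push a 3x3 neighborhood
# past threshold, producing one output event per covered pixel.
out = event_convolution([(5, 5, 0), (5, 5, 1)], np.ones((3, 3)) * 0.6)
```

Processing events one at a time like this is what removes the frame constraint: output events appear as soon as enough input evidence accumulates, rather than at a fixed frame rate.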
Funding
- European Union 216777 (NABAB)
- Ministerio de Educación y Ciencia TEC2006-11730-C03-01
- Ministerio de Ciencia e Innovación TEC2009-10639-C04-01
- Junta de Andalucía P06TIC01417
Additional details
Identifiers
- URL
- https://idus.us.es/handle//11441/101520
- URN
- urn:oai:idus.us.es:11441/101520
Origin repository
- USE