Published December 27, 2019
| Version v1
Publication
Musical notes classification with Neuromorphic Auditory System using FPGA and a Convolutional Spiking Network
Description
In this paper, we explore the capabilities of a sound classification system that combines a novel FPGA cochlear model implementation with a bio-inspired technique based on a trained convolutional spiking network. The neuromorphic auditory system used in this work produces a representation analogous to the spike outputs of the biological cochlea. The auditory system has been developed from a set of spike-based processing building blocks in the frequency domain, which form a bank of band-pass filters in the spike domain that splits the audio information into 128 frequency channels, 64 for each of two audio sources. Address Event Representation (AER) is used to communicate between the auditory system and the convolutional spiking network. A convolutional spiking network layer is developed and trained on a computer to detect two kinds of sound: artificial pure tones in the presence of white noise, and electronic musical notes. After the training process, the presented system is able to distinguish the different sounds in real time, even in the presence of white noise.
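The sketch below is a minimal software illustration of the processing chain the abstract describes, not the authors' FPGA design: a cochlea-like band-pass filter bank whose rectified channel outputs are encoded by integrate-and-fire accumulators into (timestamp, channel) address events, AER-style, for a downstream spiking classifier. The channel count, frequency range, threshold, and Butterworth filters are all illustrative assumptions standing in for the paper's spike-based building blocks.

```python
# Minimal sketch (assumed parameters; not the paper's FPGA implementation).
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000            # sample rate in Hz (illustrative)
N_CHANNELS = 64        # one 64-channel bank per audio source, as in the paper
THRESHOLD = 0.5        # integrate-and-fire threshold (illustrative)

def tone_with_noise(freq_hz, dur_s=0.5, snr=5.0):
    """Pure tone plus white noise, like the paper's first sound class."""
    t = np.arange(int(FS * dur_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t) + np.random.randn(t.size) / snr

def filter_bank(signal, n_channels=N_CHANNELS, f_lo=100.0, f_hi=7_000.0):
    """Log-spaced second-order band-pass filters (cochlea-like tonotopy),
    a software stand-in for the spike-based filters on the FPGA."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    outputs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=FS)
        outputs.append(lfilter(b, a, signal))
    return np.stack(outputs)          # shape: (n_channels, n_samples)

def to_address_events(channels, threshold=THRESHOLD):
    """Each channel accumulates rectified energy and emits a
    (timestamp, channel) address event on every threshold crossing."""
    events = []
    acc = np.zeros(channels.shape[0])
    rectified = np.abs(channels)
    for n in range(channels.shape[1]):
        acc += rectified[:, n]
        fired = np.where(acc >= threshold)[0]
        for ch in fired:
            events.append((n, ch))    # (sample index, channel address)
        acc[fired] = 0.0
    return events

if __name__ == "__main__":
    audio = tone_with_noise(440.0)    # A4 tone buried in white noise
    events = to_address_events(filter_bank(audio))
    # A spiking classifier would consume this event stream; here we just
    # report the most active channel, which tracks the tone's frequency.
    counts = np.bincount([ch for _, ch in events], minlength=N_CHANNELS)
    print(f"{len(events)} events; most active channel: {counts.argmax()}")
```

In the paper the event stream drives a trained convolutional spiking layer in real time; the spike-count readout above merely shows that the tonotopic channel address already carries the frequency information that layer would classify.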
Abstract
Ministerio de Economía y Competitividad TEC2012-37868-C04-02

Additional details
Identifiers
- URL
- https://idus.us.es/handle/11441/91274
- URN
- urn:oai:idus.us.es:11441/91274
Origin repository
- USE