Real-time sound recognition using neuromorphic VLSI: models and networks
The ability of machines to parse and recognize elements of natural acoustic scenes still falls far short of that of biological systems. Part of the reason may lie in fundamental differences in how sounds are processed. In machines, acoustic signals are typically chopped into short segments equally spaced in time, each segment described by a set of features that are subsequently classified by machine-learning techniques; in biological systems, auditory processing relies on continuous-time, stimulus-driven, asynchronous, distributed, collective, and self-organizing principles. The goal of this project is to improve sound recognition in machines by emulating these biological principles more closely, and to do so using real-time, low-power neuromorphic VLSI technology. Specifically, we aim to develop an autonomous VLSI sound-recognition system that implements a real-time, biologically realistic model of auditory processing applicable to the recognition of relevant classes of sounds in natural environments, including human speech. We have already fabricated spiking silicon cochlea devices and spike-based neural-network learning chips. The project consists in developing cortical-like models of auditory processing, in collaboration with speech-recognition experts at the IDIAP Research Institute (Martigny, Switzerland), integrating them into embedded systems using the existing neuromorphic hardware, and achieving real-time, context-dependent sound recognition.
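The contrast between the two processing styles can be sketched in a few lines of code. The sketch below is purely illustrative and is not part of the project's model: the sampling rate, frame sizes, and the toy threshold-crossing spike encoder are all assumptions chosen for clarity. It shows a conventional fixed-rate framing of a signal alongside a sparse, event-driven representation loosely analogous to the asynchronous output of a silicon cochlea.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; an assumed sampling rate for illustration

def frame_signal(signal, frame_len=400, hop=160):
    """Conventional machine pipeline: chop the signal into short,
    equally spaced (overlapping) frames. In a real recognizer each
    frame would then be summarized by a feature vector (e.g. spectral
    coefficients) and passed to a classifier."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])

def threshold_spike_times(signal, threshold=0.5):
    """A toy event-driven encoding: emit a spike time whenever the
    signal crosses the threshold upward. Output is sparse and
    stimulus-driven, with no fixed frame clock."""
    above = signal >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / SAMPLE_RATE  # spike times in seconds

# Example: one second of a 5 Hz sine wave
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
x = np.sin(2 * np.pi * 5 * t)

frames = frame_signal(x)           # dense, clock-driven: 98 frames of 400 samples
spikes = threshold_spike_times(x)  # sparse, event-driven: 5 spike times
```

The frame-based representation has the same size regardless of signal content, whereas the event-driven one scales with activity in the stimulus; the latter property is what spiking neuromorphic hardware exploits for low-power operation.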