Project


Spike-based computation and learning in distributed neuromorphic systems

Applicant Indiveri Giacomo
Number 146608
Funding scheme Project funding
Research institution Institut für Neuroinformatik Universität Zürich Irchel und ETH Zürich
Institution of higher education University of Zurich - ZH
Main discipline Information Technology
Start/End 01.08.2013 - 31.07.2016
Approved amount 344'669.00

All Disciplines (2)

Discipline
Information Technology
Microelectronics. Optoelectronics

Keywords (4)

Neural computation; Spike-based processing; Parameter tuning; Neuromorphic circuits

Lay Summary (German)

Lead
For many practically relevant tasks, modern computers fall short of the performance of biological systems. One reason is that the architecture of nervous systems, in which billions of cells communicate in parallel via action potentials (so-called “spikes”), differs greatly from that of today's computers. This project studies such neural architectures and develops new approaches to computation in novel spike-based computing technologies.
Lay summary

Content and goals of the research project

Novel hardware architectures inspired by the structure of the brain, which electronically reproduce the biophysics of nerve cells and synapses, represent a promising technology for alternative computing models. Until now, however, methods for programming such platforms as intuitively as conventional computers have been lacking. The goal of this project is to study and develop methods and tools with which a desired functionality can be specified and automatically transferred onto neural networks running on distributed hardware platforms. We will develop systems that interact with their environment in real time and must therefore cope with events as unpredictable as those faced by real nervous systems. This requires the development of spike-based learning mechanisms that adapt the network to the desired task, compensate for variability in the hardware, and thereby improve the performance of the system. Our project investigates how this can be achieved under the constraints to which neural mechanisms are subject, and how it can be explained by mathematical principles of artificial intelligence.

Scientific and societal context

One of the great challenges in computer science is to make systems more intelligent, scalable, reliable, and energy-efficient. Electronic systems that reproduce the distributed, asynchronous, event-based, and adaptive computing style of biological systems are particularly well suited to this. By developing configuration and learning tools, and by deepening our understanding of biologically inspired learning and computing mechanisms, our project will make this technology ready for future applications in robots and mobile devices.

Last update: 17.06.2013

Lay Summary (English)

Lead
For many practical tasks, modern computers cannot match the performance of biological systems. One reason is that the architecture of nervous systems, in which billions of nerve cells communicate with action potentials (so-called “spikes”) in parallel, is very different from that of today's computers. In this project we will investigate the properties of these types of neural architectures and model their computational strategies to develop alternative spike-based computing technologies.
Lay summary

Content and Goals of the Project

Recently developed brain-inspired hardware architectures that emulate the biophysics of neurons and synapses in silicon represent a promising technology for implementing alternative computing paradigms. However, methods that make programming such platforms as intuitive for humans as programming a traditional computer are still lacking. Our central goal in this project is to study and develop methods that allow a desired functionality to be specified in a simple mathematical form, and to create tools that automatically transfer these programs onto neural networks running on distributed hardware systems. We will develop systems that interact with the world in real time, and thus have to deal with the same kinds of unreliability as real nervous systems. This requires the development of spike-based learning mechanisms that adapt the network to the desired tasks, compensate for irregularities in the hardware, and improve the performance of the system over time. Our project will investigate how this can be achieved under the constraints imposed by neural mechanisms, and how it relates to the mathematical learning principles used in artificial intelligence.
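The spike-based processing described above can be made concrete with a minimal software model. The sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the standard abstraction behind silicon neurons; all parameter values are illustrative choices, not the project's hardware parameters.

```python
def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron.

    input_current: per-time-step input drive (arbitrary units).
    Returns the membrane potential trace and the spike time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus input, as in an RC circuit.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:      # threshold crossing emits a spike ...
            spikes.append(t)
            v = v_reset        # ... followed by a reset
        trace.append(v)
    return trace, spikes

# A constant suprathreshold drive produces regular spiking.
trace, spikes = simulate_lif([0.06] * 200)
```

With this drive the membrane charges toward an asymptote above threshold, so the neuron fires at regular intervals; a weaker drive whose asymptote stays below threshold would never produce a spike.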

Scientific and societal context

One of the great challenges in computing is to make systems smarter, scalable, more reliable, and yet more energy-efficient. Prime candidates for achieving this are electronic systems that employ the distributed, asynchronous, event-driven, and adaptive style of computation characteristic of nervous systems. By developing configuration and learning tools, and by deepening our understanding of biologically inspired learning and computation, our project will make this technology accessible for future applications in intelligent robots and mobile devices.

Last update: 17.06.2013

Publications

Publication
Learning to be efficient: algorithms for training low-latency, low-compute deep spiking neural networks
Neil Daniel, Pfeiffer Michael, Liu Shih-Chii (2016), Learning to be efficient: algorithms for training low-latency, low-compute deep spiking neural networks, in Proceedings of the ACM Symposium on Applied Computing, Association for Computing Machinery, Pisa.
Precise deep neural network computation on imprecise low-power analog hardware
Binas Jonathan, Neil Daniel, Indiveri Giacomo, Liu Shih-Chii, Pfeiffer Michael (2016), Precise deep neural network computation on imprecise low-power analog hardware, arXiv, Ithaca.
Spiking Analog VLSI Neuron Assemblies as Constraint Satisfaction Problem Solvers
Binas Jonathan, Indiveri Giacomo, Pfeiffer Michael (2016), Spiking Analog VLSI Neuron Assemblies as Constraint Satisfaction Problem Solvers, in Proceedings of ISCAS 2016, IEEE, Montreal.
Wide dynamic range weights and biologically realistic synaptic dynamics for spike-based learning circuits
Sumislawska Dora, Qiao Ning, Pfeiffer Michael, Indiveri Giacomo (2016), Wide dynamic range weights and biologically realistic synaptic dynamics for spike-based learning circuits, in Proceedings of ISCAS 2016, IEEE, Montreal.
A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses
Qiao Ning, Mostafa Hesham, Corradi Federico, Osswald Marc, Stefanini Fabio, Sumislawska Dora, Indiveri Giacomo (2015), A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses, in Frontiers in Neuromorphic Engineering, 9, 141.
A framework for plasticity implementation on the SpiNNaker neural architecture
Galluppi Francesco, Lagorce Xavier, Stromatias Evangelos, Pfeiffer Michael, Plana Luis, Furber Steve, Benosman Ryad (2015), A framework for plasticity implementation on the SpiNNaker neural architecture, in Frontiers in Neuromorphic Engineering, 8, 429.
Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing
Diehl Peter, Neil Daniel, Binas Jonathan, Cook Matthew, Liu Shih-Chii, Pfeiffer Michael (2015), Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing, in Proc. of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Local structure helps learning optimized automata in recurrent neural networks
Binas Jonathan, Indiveri Giacomo, Pfeiffer Michael (2015), Local structure helps learning optimized automata in recurrent neural networks, in Proc. of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms
Stromatias Evangelos, Neil Daniel, Pfeiffer Michael, Galluppi Francesco, Furber Steve, Liu Shih-Chii (2015), Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms, in Frontiers in Neuromorphic Engineering, 9, 222.
Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on SpiNNaker
Stromatias Evangelos, Neil Daniel, Galluppi Francesco, Pfeiffer Michael, Liu Shih-Chii, Furber Steve (2015), Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on SpiNNaker, in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Spatiotemporal features for asynchronous event-based data
Lagorce Xavier, Ieng Sio Hoi, Clady Xavier, Pfeiffer Michael, Benosman Ryad (2015), Spatiotemporal features for asynchronous event-based data, in Frontiers in Neuromorphic Engineering, 9, 46.
Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity
Binas Jonathan, Rutishauser Ueli, Indiveri Giacomo, Pfeiffer Michael (2014), Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity, in Frontiers in Computational Neuroscience, 8, 68.

Collaboration

Group / person Country
Types of collaboration
Institut de la Vision, Université Pierre et Marie Curie, Paris France (Europe)
- in-depth/constructive exchanges on approaches, methods or results
- Publication
Institute for Neural Computation, UCSD United States of America (North America)
- in-depth/constructive exchanges on approaches, methods or results
Neuroscientific System Theory Group, Technische Universität München Germany (Europe)
- in-depth/constructive exchanges on approaches, methods or results
Computational Neuroscience Group, University of Bern Switzerland (Europe)
- in-depth/constructive exchanges on approaches, methods or results
Computation and Neural Systems Program, California Institute of Technology United States of America (North America)
- in-depth/constructive exchanges on approaches, methods or results
- Publication
Institut für Neuroinformatik, Ruhr-Universität Bochum Germany (Europe)
- in-depth/constructive exchanges on approaches, methods or results
- Exchange of personnel
Advanced Processors Technology Group, University of Manchester Great Britain and Northern Ireland (Europe)
- Publication

Scientific events

Active participation

Title Type of contribution Title of article or contribution Date Place Persons involved
Telluride Neuromorphic Cognition Engineering Workshop 2016 Talk given at a conference Neuromorphic Circuits 26.06.2016 Telluride, CO, United States of America Indiveri Giacomo; Sumislawska Dora;
Telluride Neuromorphic Cognition Engineering Workshop 2015 Individual talk Workgroup: Manipulation Actions: Movements, Forces, and Affordances 28.06.2015 Telluride, CO, United States of America Pfeiffer Michael;
Telluride Neuromorphic Cognition Engineering Workshop 2014 Individual talk Workgroup: Motion and Action Processing on Wearable Devices 29.06.2014 Telluride, CO, United States of America Pfeiffer Michael;



Awards

Title Year
2016 ISCAS (International Symposium on Circuits and Systems) Best Paper Award 2016

Associated projects

Number Title Start Funding scheme
138798 PNEUMA 01.11.2011 CHIST-ERA
180316 Neural Processing of Distinct Prediction Errors: Theory, Mechanisms & Interventions 01.09.2018 Sinergia

Abstract

Animal nervous systems are extremely efficient computational devices: even the brains of simple animals easily outperform state-of-the-art artificial intelligence on most real-world tasks. Future technologies can benefit enormously if we learn to exploit the strategies used by the nervous system and implement them in electronic computing systems. Nature's advantage lies in the way computation is organized, and in its ability to adapt to the natural environment to produce cognitive behavior: rather than relying on a centralized processor or memory, physically distributed regions of the brain self-organize and learn to process signals from sensors and from other brain regions. Conventional machine learning algorithms, as well as the computing technologies they run on, were not designed to simulate these massively parallel, inherently variable, continuously adaptive, fault-tolerant, and asynchronous local circuits efficiently. In this project we will exploit recently developed mixed analog/digital brain-inspired Very Large Scale Integration (VLSI) architectures for simulating spiking neural networks, and address the problem of building efficient, low-power, scalable computing systems that can interact with the environment, learn about the input signals they have been designed to process, and exhibit adaptive and cognitive abilities analogous to those of biological systems. This is a daunting task that requires a truly interdisciplinary approach.
In particular, a necessary condition for achieving this goal is the existence of computational theories and methods that map directly and efficiently onto a computing substrate emulating the one found in biological neural systems. We already have substantial experience in developing "neuromorphic" computing devices that directly emulate the biophysics of real neurons and synapses by exploiting the physics of silicon: we have developed several generations of VLSI chips that implement biophysically realistic neurons and synapses endowed with spike-based plasticity mechanisms, which can be used as composable and scalable building blocks for ever more powerful artificial cognitive systems. However, even though we are building more sophisticated generations of these devices, and even though other research and industrial groups have started to propose similar approaches, the use of these bio-inspired computing substrates is still severely limited by our lack of understanding of how to "program" distributed and asynchronous neural systems. This contrasts with classical von Neumann architectures, which are far more intuitive for human minds to program in a linear, synchronous, and static fashion. How programming can be done for a distributed, brain-like architecture remains the open question that motivates this project. To solve this problem we will study and develop distributed methods of learning, state-dependent computation, and probabilistic inference that can, on the one hand, emulate the style of computation used by the brain and, on the other hand (as a consequence), be directly and efficiently mapped onto the distributed spiking neural network systems implemented with our existing and future generations of multi-neuron VLSI devices. Specifically, we will apply dynamical systems and probabilistic inference theories to instantiate biologically motivated learning within and between spike-based computational modules.
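The spike-based plasticity mentioned above can be illustrated with the standard pair-based spike-timing-dependent plasticity (STDP) rule from the computational neuroscience literature. The sketch below shows that textbook exponential STDP window, not the learning circuits implemented in the project's chips; parameter values are illustrative.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Causal pairs (pre before post) potentiate the synapse; acausal
    pairs (post before pre) depress it, each with an exponential
    dependence on the timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:       # pre fired before post: strengthen
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post fired before pre: weaken
        return -a_minus * math.exp(dt / tau)
    return 0.0

causal = stdp_dw(t_pre=10.0, t_post=15.0)    # dt = +5 ms, positive change
acausal = stdp_dw(t_pre=15.0, t_post=10.0)   # dt = -5 ms, negative change
```

Because the update depends only on local spike times, rules of this family are natural candidates for on-chip learning, which is one reason spike-based plasticity maps well onto distributed hardware.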
Building on our recent results showing how arbitrary finite state machines, the basic elements of computation, can be implemented with networks of spiking neurons, we will arrange such networks in distributed and recurrently coupled architectures, and develop learning mechanisms, suitable for VLSI implementation, to train them to perform complex probabilistic inference for the recognition and fusion of multi-modal sensory input streams. By combining our research on probabilistic graphical and spike-based inference models with the development of distributed, asynchronous neural computing systems implemented as neuromorphic VLSI multi-chip architectures, we will develop the systematic methodologies required to automatically configure spiking neural network chips to execute user-defined "programs" and solve high-level abstract tasks.
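The idea of state machines built from neural populations can be illustrated, in highly simplified form, with a winner-take-all (WTA) network: two populations with self-excitation and mutual inhibition store one bit of state, and a transient input pulse switches it, the way an input symbol advances a state machine. The sketch below is a threshold-linear rate abstraction under assumed parameters, not the project's spiking VLSI implementation.

```python
def step(rates, w_self, w_inh, inputs, tau=10.0, dt=1.0):
    """One Euler step of a threshold-linear rate network:
    tau * dr/dt = -r + max(0, recurrent drive + input)."""
    r0, r1 = rates
    drive0 = w_self * r0 - w_inh * r1 + inputs[0]
    drive1 = w_self * r1 - w_inh * r0 + inputs[1]
    r0 += (dt / tau) * (-r0 + max(0.0, drive0))
    r1 += (dt / tau) * (-r1 + max(0.0, drive1))
    return [r0, r1]

W_SELF, W_INH = 0.8, 2.0   # self-excitation < 1 keeps rates bounded
bias = [0.1, 0.1]          # weak background drive to both populations

rates = [0.5, 0.0]         # population 0 starts active
for _ in range(200):       # settle: population 0 wins and persists
    rates = step(rates, W_SELF, W_INH, bias)
state_before = rates.index(max(rates))

# A transient pulse to population 1 flips the stored state: mutual
# inhibition suppresses the old winner, and the new winner persists
# after the pulse ends, like a state transition in a state machine.
for t in range(300):
    pulse = 1.0 if t < 50 else 0.0
    rates = step(rates, W_SELF, W_INH, [bias[0], bias[1] + pulse])
state_after = rates.index(max(rates))
```

After settling, only one population is active; the pulse drives the inactive population above the winner, inhibition completes the switch, and the new state is retained once the input is removed.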