Neural computation; Spike-based processing; Parameter tuning; Neuromorphic circuits
Neil Daniel, Pfeiffer Michael, Liu Shih-Chii (2016), Learning to be efficient: algorithms for training low-latency, low-compute deep spiking neural networks, in
Proceedings of the ACM Symposium on Applied Computing, Association for Computing Machinery, Pisa.
Binas Jonathan, Neil Daniel, Indiveri Giacomo, Liu Shih-Chii, Pfeiffer Michael (2016),
Precise deep neural network computation on imprecise low-power analog hardware, arXiv, Ithaca.
Binas Jonathan, Indiveri Giacomo, Pfeiffer Michael (2016), Spiking Analog VLSI Neuron Assemblies as Constraint Satisfaction Problem Solvers, in
Proceedings of ISCAS 2016, IEEE, Montreal.
Sumislawska Dora, Qiao Ning, Pfeiffer Michael, Indiveri Giacomo (2016), Wide dynamic range weights and biologically realistic synaptic dynamics for spike-based learning circuits, in
Proceedings of ISCAS 2016, IEEE, Montreal.
Qiao Ning, Mostafa Hesham, Corradi Federico, Osswald Marc, Stefanini Fabio, Sumislawska Dora, Indiveri Giacomo (2015), A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses, in
Frontiers in Neuroscience, 9, 141.
Galluppi Francesco, Lagorce Xavier, Stromatias Evangelos, Pfeiffer Michael, Plana Luis, Furber Steve, Benosman Ryad (2015), A framework for plasticity implementation on the SpiNNaker neural architecture, in
Frontiers in Neuroscience, 8, 429.
Diehl Peter, Neil Daniel, Binas Jonathan, Cook Matthew, Liu Shih-Chii, Pfeiffer Michael (2015), Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing, in
Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Binas Jonathan, Indiveri Giacomo, Pfeiffer Michael (2015), Local structure helps learning optimized automata in recurrent neural networks, in
Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Stromatias Evangelos, Neil Daniel, Pfeiffer Michael, Galluppi Francesco, Furber Steve, Liu Shih-Chii (2015), Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms, in
Frontiers in Neuroscience, 9, 222.
Stromatias Evangelos, Neil Daniel, Galluppi Francesco, Pfeiffer Michael, Liu Shih-Chii, Furber Steve (2015), Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on SpiNNaker, in
Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland.
Lagorce Xavier, Ieng Sio Hoi, Clady Xavier, Pfeiffer Michael, Benosman Ryad (2015), Spatiotemporal features for asynchronous event-based data, in
Frontiers in Neuroscience, 9, 46.
Binas Jonathan, Rutishauser Ueli, Indiveri Giacomo, Pfeiffer Michael (2014), Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity, in
Frontiers in Computational Neuroscience, 8, 68.
Animal nervous systems are extremely efficient computational devices: even the brains of simple animals easily outperform state-of-the-art artificial intelligence on most real-world tasks. Future technologies can benefit enormously if we learn to exploit the strategies used by the nervous system and implement them in electronic computing systems. Nature's advantage lies in the way computation is organized, and in its ability to adapt to the natural environment to produce cognitive behavior: rather than using a centralized processor or memory, physically distributed regions of the brain self-organize and learn to process signals from sensors and from other brain regions. Conventional machine learning algorithms, as well as the computing technologies they run on, have not been designed to efficiently simulate these massively parallel, inherently variable, continuously adaptive, fault-tolerant, and asynchronous local circuits.

In this project we will exploit recently developed mixed analog/digital brain-inspired Very Large Scale Integration (VLSI) architectures for simulating spiking neural networks, and address the problem of building efficient, low-power, scalable computing systems that can interact with the environment, learn about the input signals they have been designed to process, and exhibit adaptive and cognitive abilities analogous to those of biological systems. This is a daunting task that requires a truly interdisciplinary approach. In particular, a necessary condition for achieving this goal is the existence of computational theories and methods that map directly and efficiently onto a computing substrate emulating the one found in biological neural systems.

We already have substantial experience in developing "neuromorphic" computing devices that directly emulate the biophysics of real neurons and synapses by exploiting the physics of silicon: we have developed several generations of VLSI chips that implement biophysically realistic neurons and synapses endowed with spike-based plasticity mechanisms, which can be used as composable and scalable building blocks for increasingly powerful artificial cognitive systems. However, even though we are building ever more sophisticated generations of these devices, and other research and industrial groups have started to propose similar approaches, the use of these bio-inspired computing substrates is still severely limited by our lack of understanding of how to "program" distributed and asynchronous neural systems. This is in contrast to classical von Neumann architectures, which are far more intuitive for human minds to program in a linear, synchronous, and static fashion. How to do the same for a distributed, brain-like architecture remains an open question, and it motivates this project.

To solve this problem we will study and develop distributed methods of learning, state-dependent computation, and probabilistic inference that, on one hand, emulate the style of computation used by the brain, and on the other hand can, as a consequence, be directly and efficiently mapped onto the distributed spiking neural network systems implemented with our existing and future generations of multi-neuron VLSI devices. Specifically, we will apply dynamical systems and probabilistic inference theories to instantiate biologically motivated learning within and between spike-based computational modules.
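To make the computational substrate concrete, the sketch below shows, in plain Python/NumPy, the leaky integrate-and-fire dynamics that such spike-based modules are built from. This is a minimal illustration under our own assumptions: the parameter values and the function name are illustrative choices, not the behavior of any specific chip developed in the project.

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) neuron; all parameter
# values here are assumptions for demonstration, not chip constants.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v = v_rest
    v_trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane after the spike
        v_trace.append(v)
    return np.array(v_trace), spikes

# A constant supra-threshold current produces regular firing.
current = np.full(1000, 1.5)         # 1 s of input sampled at dt = 1 ms
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.3f} s")
```

In analog neuromorphic VLSI, this differential equation is not computed numerically as above but emulated directly by the physics of subthreshold transistor circuits; the software sketch only fixes the intended dynamics.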
Building on our recent results, which have shown how arbitrary finite state machines, the basic elements of computation, can be implemented with networks of spiking neurons, we will arrange such networks in distributed and recurrently coupled architectures, and develop learning mechanisms, suitable for VLSI implementation, to train them to perform complex probabilistic inference for the recognition and fusion of multi-modal sensory input streams.

By combining our research on probabilistic graphical and spike-based computational models of inference with the development of distributed, asynchronous neural computing systems implemented using neuromorphic VLSI multi-chip architectures, we will develop the systematic methodologies required to automatically configure spiking neural network chips to execute user-defined "programs" and solve high-level abstract tasks.
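As a toy illustration of the finite-state-machine idea, the sketch below abstracts each state as a neural population kept active by winner-take-all (WTA) competition, with an input symbol transiently exciting the target population so that the network settles into the next state. The state names, transition table, and update rule are hypothetical simplifications for illustration, not the project's spiking implementation.

```python
import numpy as np

# Hypothetical example FSM: states as populations, transitions as symbol-gated
# excitation; a hard WTA stands in for the attractor dynamics.
states = ["idle", "detect", "report"]
transitions = {                       # (state, input symbol) -> next state
    ("idle", "stimulus"): "detect",
    ("detect", "confirm"): "report",
    ("report", "reset"): "idle",
}

def wta(activity):
    """Hard winner-take-all: the most active population silences the rest."""
    winner = np.zeros_like(activity)
    winner[np.argmax(activity)] = 1.0
    return winner

activity = np.array([1.0, 0.0, 0.0])  # start in "idle"
for symbol in ["stimulus", "confirm", "reset", "stimulus"]:
    current = states[int(np.argmax(activity))]
    target = transitions.get((current, symbol), current)  # unknown symbol: stay
    # The symbol transiently excites the target population; the WTA then
    # settles on it as the new stable state, realizing the FSM transition.
    activity = 0.2 * activity
    activity[states.index(target)] += 1.0
    activity = wta(activity)
    print(f"{current} --{symbol}--> {states[int(np.argmax(activity))]}")
```

In the spiking realization the discrete update is replaced by continuous recurrent excitation and inhibition, but the mapping from transition table to connectivity follows the same pattern.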