
Temporal Information Integration in Neural Networks

Applicant Grewe Benjamin
Number 173721
Funding scheme Sinergia
Research institution Institut für Neuroinformatik Universität Zürich Irchel und ETH Zürich
Institution of higher education ETH Zurich - ETHZ
Main discipline Interdisciplinary
Start/End 01.06.2018 - 31.05.2023
Approved amount 1'392'236.00

All Disciplines (4)

Discipline
Interdisciplinary
Information Technology
Other disciplines of Engineering Sciences
Neurophysiology and Brain Research

Keywords (11)

machine learning; recurrent; integration; network; imaging; modeling; temporal; deep learning; neuronal; in vivo; simulation

Lay Summary (translated from German)

Lead
Temporal Information Integration in Neural Networks

Lay summary
Over the last decade, a brain-inspired weak form of artificial intelligence known as "deep learning" has revolutionized the field of machine learning. Computers can now even recognize images or individual words. Yet as spectacular as these advances are, some deficits remain omnipresent: for example, artificially intelligent algorithms must be trained on huge amounts of data in order to then solve a single, highly specific task. In addition, the algorithms still have trouble integrating spatiotemporally complex data. The main goal of this project is a fundamental and analytical understanding of (1) how neural networks store information over short time periods and (2) how they link incoming information across time to build internal models of complex temporal sequences. The timing for this project is ideal, because recent advances in neuroscience make it possible to follow the activity of large populations of genetically identified neurons 'live' in biological neural networks. To reach the goals above, we combine state-of-the-art optical imaging methods for recording neural network activity with new analysis techniques that allow us to derive network models and hypotheses from the observed data.
Last update: 04.03.2018

Responsible applicant and co-applicants

Employees

Publications

Publication
Exponential Slowdown for Larger Populations: The (µ + 1)-EA on Monotone Functions
Lengler Johannes, Zou Xun (2019), Exponential Slowdown for Larger Populations: The (µ + 1)-EA on Monotone Functions, in Proceedings of the 15th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA), Potsdam, Germany.
A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences
Matheus Gauy Marcelo, Lengler Johannes, Einarsson Hafsteinn, Meier Florian, Weissenberger Felix, Yanik Mehmet Fatih, Steger Angelika (2018), A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences, in Frontiers in Neuroscience, 12, 96.
Approximating the Predictive Distribution via Adversarially-Trained Hypernetworks
Henning Christian, Approximating the Predictive Distribution via Adversarially-Trained Hypernetworks, in NeurIPS Conference, NeurIPS, Vancouver, Canada.
Continual learning with hypernetworks
von Oswald Johannes, Continual learning with hypernetworks, in ICLR, Vancouver, Canada.
Optimal Kronecker-Sum Approximation
Benzing Frederic, Optimal Kronecker-Sum Approximation, in ICML, US.
Approximating Real-Time Recurrent Learning with Random Kronecker Factors
Mujika Asier, Approximating Real-Time Recurrent Learning with Random Kronecker Factors, in NeurIPS Conference, NeurIPS, Vancouver, Canada.
Wide. Deep. Fast. Recent Advances in In Vivo Multi-Photon Microscopy of Neuronal Activity
Lecoq Jerome, Orlova Natalia, Grewe Benjamin, Wide. Deep. Fast. Recent Advances in In Vivo Multi-Photon Microscopy of Neuronal Activity, in Journal of Neuroscience.

Associated projects

Number Title Start Funding scheme
189251 Ultra compact miniaturized microscopes to image meso-scale brain activity 01.06.2020 Project funding

Abstract

In the last decade, 'deep learning' (DL), a brain-inspired weak form of artificial intelligence, has revolutionized the field of machine learning by achieving unprecedented performance on many real-world tasks, for example in image or speech recognition. However, as spectacular as these advances are, some major deficits remain omnipresent: deep learning networks have to be trained with huge data sets, and their results are usually only spectacular when major effort goes towards solving one very specific task (such as beating the best human player at the game of Go). The ability of deep learning networks to act as generalizable problem solvers still falls far short of what the human brain achieves effortlessly. In particular, the power of deep learning networks is still limited when tasks require integrating spatiotemporally complex data over extended time periods of more than 2 seconds (Neil, Pfeiffer, & Liu, 2016; Vondrick, Pirsiavash, & Torralba, 2016).

The main goal of this project is to gain a fundamental and analytical understanding of (1) how neuronal networks store information over short time periods and (2) how they link information across time to build internal models of complex temporal input data. Our proposal is well timed because recent advances in neuroscience now make it possible to record and track the activity of large populations of genetically identified neurons deep in the brain of behaving animals during a temporal learning task - this was simply not possible several years ago.

To reach the above goal we combine expertise in developing and using cutting-edge in vivo calcium imaging techniques to probe neuronal population activity in awake, freely behaving animals (group B. Grewe) with expertise in analyzing random structures (group A. Steger). This combination of collaborators allows us to develop network models and hypotheses from observed data that we can subsequently test in vivo.
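As a rough illustration (not code from the project itself), the first question above - how a network can store information over short time periods - can be sketched with a leaky-integrator unit, one of the simplest recurrent mechanisms for short-term memory. The decay factor `alpha`, the pulse timing, and the variable names below are illustrative assumptions, not quantities from the proposal:

```python
import numpy as np

# A single leaky-integrator unit: its hidden state h decays by a factor
# alpha each timestep and mixes in the current input. A brief input pulse
# therefore leaves an exponentially decaying trace instead of vanishing
# immediately - a minimal form of short-term information storage.
alpha = 0.95          # leak factor (assumed): closer to 1 = longer memory
T = 50                # number of timesteps (assumed)

x = np.zeros(T)
x[5] = 1.0            # a single input pulse at t = 5

h = np.zeros(T)       # hidden state: the "memory" trace
for t in range(1, T):
    h[t] = alpha * h[t - 1] + (1 - alpha) * x[t]

# After the pulse, h[t] = (1 - alpha) * alpha**(t - 5) for t >= 5,
# so later timesteps still carry (decayed) information about the pulse.
print(h[6])   # ~0.0475, i.e. alpha * (1 - alpha)
print(h[20])  # smaller, but still nonzero
```

In a trained recurrent network the effective decay is shaped by the learned weights rather than a fixed `alpha`, but the same trade-off appears: slower decay stores information longer at the cost of blurring together inputs from different times.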