machine learning; recurrent; integration; network; imaging; modeling; temporal; deep learning; neuronal; in vivo; simulation
Lengler Johannes, Zou Xun (2019), Exponential slowdown for larger populations: the (µ + 1)-EA on monotone functions, in Proceedings of the 15th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA), Potsdam, Germany.
Matheus Gauy Marcelo, Lengler Johannes, Einarsson Hafsteinn, Meier Florian, Weissenberger Felix, Yanik Mehmet Fatih, Steger Angelika (2018), A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences, in Frontiers in Neuroscience, 12, 96.
Henning Christian, Approximating the Predictive Distribution via Adversarially-Trained Hypernetworks, in NeurIPS Conference, Vancouver, Canada.
von Oswald Johannes, Continual learning with hypernetworks, in ICLR, Vancouver, Canada.
Benzing Frederik, Optimal Kronecker-Sum Approximation of Real Time Recurrent Learning, in ICML, US.
Mujika Asier, Approximating Real-Time Recurrent Learning with Random Kronecker Factors, in NeurIPS Conference, Vancouver, Canada.
Lecoq Jerome, Orlova Natalia, Grewe Benjamin, Wide. Fast. Deep: Recent Advances in Multiphoton Microscopy of In Vivo Neuronal Activity, in Journal of Neuroscience.
In the last decade, ‘deep learning’ (DL), a brain-inspired weak form of artificial intelligence, has revolutionized the field of machine learning by achieving unprecedented performance on many real-world tasks, for example in image or speech recognition. However, as spectacular as these advances are, some major deficits remain: deep learning networks have to be trained with huge data sets, and their results are usually only spectacular when major effort goes into solving one very specific task (such as winning against the best human player in the game of Go). The ability of deep learning networks to act as generalizable problem solvers is still far behind what the human brain achieves effortlessly. In particular, the power of deep learning networks is still limited when tasks require an integration of spatiotemporally complex data over extended time periods of more than 2 seconds (Neil, Pfeiffer, & Liu, 2016; Vondrick, Pirsiavash, & Torralba, 2016).

The main goal of this project is to gain a fundamental and analytical understanding of (1) how neuronal networks store information over short time periods and (2) how they link information across time to build internal models of complex temporal input data. Our proposal is well timed because recent advances in neuroscience now make it possible to record and track the activity of large populations of genetically identified neurons deep in the brain of behaving animals during a temporal learning task; this was simply not possible several years ago. To reach the above goal, we combine expertise in developing and using cutting-edge in vivo calcium imaging techniques to probe neuronal population activity in awake, freely behaving animals (group B. Grewe) with expertise in analyzing random structures (group A. Steger). This combination of collaborators allows us to develop network models and hypotheses from observed data that we can subsequently test in vivo.
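To make point (1) concrete, the sketch below shows a minimal recurrent network in plain NumPy whose hidden state carries a brief input pulse forward in time; all dimensions and weight scales are hypothetical illustration choices, not a model from the project's publications.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 10 input channels, 50 recurrent units.
n_in, n_hid = 10, 50

# Random weights; 1/sqrt(n) scaling keeps the recurrent dynamics stable.
W_in = rng.normal(0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
W_rec = rng.normal(0, 1.0 / np.sqrt(n_hid), (n_hid, n_hid))

def run_rnn(inputs):
    """Integrate a (T, n_in) input sequence; return hidden states (T, n_hid).

    The hidden state h carries information about past inputs forward in
    time - a minimal model of short-term storage in a recurrent circuit.
    """
    h = np.zeros(n_hid)
    states = []
    for x_t in inputs:
        h = np.tanh(W_in @ x_t + W_rec @ h)
        states.append(h)
    return np.array(states)

# A brief input pulse at t = 0, then silence for 100 steps: the pulse
# persists in (and slowly decays from) the network's hidden state.
T = 100
inputs = np.zeros((T, n_in))
inputs[0] = rng.normal(size=n_in)

states = run_rnn(inputs)
print("hidden-state norm over time:", np.round(np.linalg.norm(states, axis=1)[:5], 3))

How long such a network can retain the pulse depends on the spectrum of the recurrent weight matrix, and training such networks online over long horizons is exactly where standard deep learning struggles; the RTRL-approximation papers listed above (Mujika et al., Benzing et al.) address that training problem.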