learning; normative approach; Bayesian inference; long-term plasticity; spiking neuron model; astrocytes; short-term plasticity; graphical model; synaptic plasticity; recurrent network
Kutschireiter Anna, Surace Simone Carlo, Sprekeler Henning, Pfister Jean-Pascal (2017), Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception, in Scientific Reports, 7(8722), 1-13.
Senn Walter, Pfister Jean-Pascal (2015), Spike-Timing Dependent Plasticity, Learning Rules, in Jaeger Dieter, Jung Ranu (ed.), Springer, New York, 2825-2832.
Kutschireiter Anna, Surace Simone Carlo, Sprekeler Henning, Pfister Jean-Pascal (2015), A Neural Implementation for Nonlinear Filtering, in ArXiv, (arXiv:1508), 1.
Surace Simone Carlo, Pfister Jean-Pascal (2015), A Statistical Model for In Vivo Neuronal Dynamics, in PLoS ONE, 10(11), 1-21.
Senn Walter, Pfister Jean-Pascal (2015), Reinforcement Learning in Cortical Networks, Springer, New York.
Surace Simone Carlo, Kutschireiter Anna, Pfister Jean-Pascal, How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights, in SIAM Review
Surace Simone Carlo, Pfister Jean-Pascal, Online Maximum Likelihood Estimation of the Parameters of Partially Observed Diffusion Processes, in IEEE Transactions on Automatic Control
One of the biggest challenges faced by the brain is to make sense of the perceived environment. Even though this is done in a seemingly effortless way, the computational steps required to make sense of the environment and extract relevant features are far from trivial. How to reconstruct the 3D shape of an object that is perceived only on a 2D retina? How to recognize an object if only part of it is observed, or if it is observed from a new perspective? How to optimally combine multi-sensory cues given that each sensor has its own reliability? How to extract the melody of a single instrument when many of them are playing simultaneously?

Interestingly, all of these psychophysical tasks, which deal with uncertainty, can be formulated in a generic probabilistic framework in which the sensory observations are assumed to be generated by some unobserved (hidden) causes. In this generative-model perspective, the task is to invert the model by inferring the hidden features (causes) given the observations, and to learn the parameters of the model. This inference procedure can be expressed mathematically with Bayes' rule. Despite the increasing interest in this normative approach, one important question remains unanswered: how are this learning and inference implemented at the level of single synapses and at the level of spiking neurons?

This proposal aims at filling this gap and divides this challenging task into three projects. The first project will consider a generative model in which the causes are dynamic and sparsely distributed. This model will generalize existing generative-model approaches (such as slow feature analysis) by not restricting the stationary distribution of each hidden variable to a Gaussian distribution. The second project will address the question of inference and learning at the level of single synapses and in the presence of spiking neurons. The last project will combine the computational relevance of the first project with the biological plausibility of the second project, thereby deriving a unifying framework for inference and learning with spiking neurons at the network level.
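The inference procedure described above, inverting a generative model with Bayes' rule to recover hidden dynamic causes from noisy observations, can be made concrete with a small numerical example. The following Python sketch implements a generic bootstrap particle filter on a toy linear-Gaussian state-space model; the model, the parameter values, and the function names (transition, likelihood) are illustrative assumptions only and do not correspond to the specific generative models or the neural implementation developed in the projects.

# Illustrative sketch (assumed toy model): infer a hidden dynamic cause x_t
# from noisy observations y_t by sequential application of Bayes' rule,
# using a bootstrap particle filter (predict, weight, resample).
import numpy as np

rng = np.random.default_rng(0)

def transition(x):
    # Prior dynamics of the hidden cause: slowly decaying state plus noise.
    return 0.95 * x + 0.1 * rng.standard_normal(x.shape)

def likelihood(y, x):
    # Observation model: Gaussian readout noise with unit variance.
    return np.exp(-0.5 * (y - x) ** 2)

# Simulate one hidden trajectory and its noisy observations.
T, n_particles = 100, 500
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.95 * x_true[t - 1] + 0.1 * rng.standard_normal()
    y[t] = x_true[t] + rng.standard_normal()

# Bootstrap particle filter: propagate particles with the prior dynamics,
# reweight them by the likelihood (Bayes' rule), then resample.
particles = rng.standard_normal(n_particles)
posterior_mean = np.zeros(T)
for t in range(T):
    particles = transition(particles)               # prediction step
    weights = likelihood(y[t], particles)           # Bayesian update
    weights /= weights.sum()
    posterior_mean[t] = np.dot(weights, particles)  # posterior estimate of the hidden cause
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]                      # resampling step

print("mean squared estimation error:", np.mean((posterior_mean - x_true) ** 2))

Particle filters of this kind are discussed in the publications listed above (e.g. the SIAM Review entry on the scalability of particle filters); here the sketch serves only as a minimal illustration of sequential Bayesian inference of hidden causes.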