long-term plasticity; spike-timing-dependent plasticity; spiking neuron model; Bayesian regression; learning; stochastic synaptic transmission; generalisation; Bayesian inference; generative model; synaptic plasticity; short-term plasticity
Bykowska Ola, Gontier Camille, Sax Anne-Lene, Jia David W., Montero Milton Llera, Bird Alex D., Houghton Conor, Pfister Jean-Pascal, Costa Rui Ponte (2019), Model-Based Inference of Synaptic Transmission, in Frontiers in Synaptic Neuroscience, 11, 1-9.
Synapses are highly stochastic and complex transmission units. Upon the arrival of an action potential at the presynaptic terminal, vesicles fuse with the membrane with a certain probability and release neurotransmitter into the synaptic cleft, thereby activating postsynaptic receptors. This probability of release is influenced by several factors, such as the history of presynaptic activity, the identity of the postsynaptic neuron, the age of the animal, and the presence of neuromodulators. Surprisingly, despite decades of study of this ubiquitous phenomenon, the functional relevance of probabilistic release remains largely unknown. Here, we propose to study a new hypothesis for the functional role of stochastic synapses: that stochasticity at the level of synaptic transmission is computationally beneficial in the sense that it helps the network to generalise better and therefore avoid overfitting. Concretely, we frame the problem in a machine learning setting and ask whether synaptic stochasticity implements Bayesian regression.

In a regression problem, the task is to learn the mapping from an input to an output. This mapping is characterised by some parameters that have to be learned. However, with a finite amount of input and output data there is always some uncertainty in the estimate of the parameters. In a Bayesian approach, the task is therefore to estimate the distribution over the parameters given the data, rather than the parameters themselves. This (posterior) distribution over the parameters given the data can be calculated using Bayes' rule. In the context of spiking neural networks, the Bayesian perspective is thus to compute the posterior distribution over the synaptic weights. The present grant will therefore ask the following three questions: how is this distribution over weights implemented in biological synapses (project A)?
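To make the Bayesian-regression framing concrete, the sketch below computes the exact posterior over the weights of a linear model with a Gaussian prior and Gaussian observation noise. It is a minimal illustration of the general idea, not the grant's method: the prior precision `alpha` and noise precision `beta` are illustrative choices.

```python
import numpy as np

def bayesian_linear_regression(X, y, alpha=1.0, beta=25.0):
    """Posterior over weights w for the model y = X @ w + noise.

    Prior: w ~ N(0, alpha^-1 I); Gaussian noise with precision beta.
    Returns the posterior mean and covariance (standard conjugate result).
    """
    d = X.shape[1]
    S_inv = alpha * np.eye(d) + beta * X.T @ X  # posterior precision
    S = np.linalg.inv(S_inv)                    # posterior covariance
    m = beta * S @ X.T @ y                      # posterior mean
    return m, S

# Toy data: y = 2x + noise; the posterior mean recovers the slope,
# and the posterior covariance quantifies the remaining uncertainty.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.2, size=50)

m, S = bayesian_linear_regression(X, y)
print(m)                     # posterior mean, close to the true slope 2
print(np.sqrt(np.diag(S)))   # posterior standard deviation of the weight
```

Because the posterior covariance shrinks as data accumulate, a predictor that samples weights from this posterior naturally averages over plausible mappings, which is the sense in which stochastic weights can improve generalisation.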
How should this distribution evolve in order to be computationally efficient (project B)? And do biological synapses implement the computationally optimal solution for this evolution of the weight distribution (project C)?

In project A, the goal will be to quantify stochastic synaptic transmission from a generative-model perspective for a wide range of synapse types and conditions. This will be done from a Bayesian perspective by computing the posterior distribution over the synaptic parameters. In project B, we will assess the generalisation performance of spiking neural networks trained with Bayesian regression. In particular, we will derive the optimal learning rule from a Bayesian regression perspective and benchmark the regression performance on standard data sets. The third project (C) will combine the biological side of project A and the machine learning side of project B. Concretely, it aims to validate the Bayesian regression learning rules against electrophysiological data.
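As a minimal sketch of the kind of inference project A describes, the code below computes the posterior over the release probability under a deliberately simplified binomial generative model: a fixed number of independent release sites, each releasing with probability `p`. The site count, the Beta(1, 1) prior, and the simulated data are illustrative assumptions, not quantities from the study.

```python
import numpy as np

def release_probability_posterior(counts, n_sites, a0=1.0, b0=1.0):
    """Posterior over release probability p under a binomial model.

    Each stimulation releases k ~ Binomial(n_sites, p) vesicles; with a
    conjugate Beta(a0, b0) prior, the posterior over p is
    Beta(a0 + sum(k), b0 + sum(n_sites - k)).
    """
    counts = np.asarray(counts)
    a = a0 + counts.sum()
    b = b0 + (n_sites - counts).sum()
    return a, b  # parameters of the Beta posterior

# Simulated recording: 5 release sites, true p = 0.3, 200 stimulations.
rng = np.random.default_rng(1)
counts = rng.binomial(5, 0.3, size=200)

a, b = release_probability_posterior(counts, n_sites=5)
mean = a / (a + b)                                   # posterior mean of p
sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))   # posterior std of p
print(mean, sd)
```

Richer generative models of the kind the grant targets would add, for example, short-term depression of the release probability across stimulations, at which point the posterior is no longer conjugate and must be computed numerically.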