Research Interests
One of the most fascinating properties of the brain is its ability to extract relevant information from the environment. How the brain learns to extract relevant features is at the heart of my research interests. Even though this feature extraction appears effortless, the computational steps required to make sense of the environment are far from trivial. How can the brain reconstruct the 3D shape of an object that is perceived only on a 2D retina? How can it recognize an object when only part of it is observed, or when it is observed from a new perspective? How can it optimally combine multi-sensory cues given that each sensor has its own reliability? How can it extract the melody of a single instrument when many instruments are playing simultaneously? Interestingly, all of these psychophysical tasks, which deal with uncertainty, can be formulated in a generic probabilistic (Bayesian) framework. Despite the increasing interest in this Bayesian approach, one important question remains unanswered: how are this learning and inference implemented in the brain at the level of single synapses and at the level of spiking neurons? This question is at the center of my research interests. More generally, I am interested in developing new statistical models and applying them in the field of neuroscience.
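To make the Bayesian framing concrete, consider the cue-combination question above: under Gaussian noise, the statistically optimal estimate weights each cue by its reliability (inverse variance). The sketch below illustrates this with made-up numbers; it is a generic textbook computation, not a model from any of the projects described here.

```python
import numpy as np

# Toy illustration (made-up numbers): optimally combining two noisy cues
# about the same quantity, e.g. a visual and an auditory estimate of an
# object's position. Under Gaussian noise, each cue is weighted by its
# reliability (inverse variance).
x_vis, var_vis = 2.0, 0.5    # visual cue: estimate and noise variance
x_aud, var_aud = 3.0, 2.0    # auditory cue: less reliable here

w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)   # reliability weight
x_hat = w_vis * x_vis + (1 - w_vis) * x_aud           # combined estimate
var_hat = 1 / (1 / var_vis + 1 / var_aud)             # lower than either cue's

print(f"combined estimate: {x_hat:.2f} (std {np.sqrt(var_hat):.2f})")
```

Note that the combined variance is always smaller than that of either cue alone, which is why multi-sensory integration pays off.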
Statistical modeling in neuroscience
Statistical model for intracellular in vivo recordings
Cortical neurons are constantly active. Even in the absence of an explicit stimulus, they fire spontaneously and display large fluctuations of their membrane potential. The growing amount of intracellular recordings of spontaneous activity, as well as the increasing number of theories that critically rely on a characterization of spontaneous activity, calls for a proper quantification of spontaneous intracellular dynamics. In this project we develop statistical models of spontaneous activity that are flexible yet analytically tractable. These models are of particular relevance in the context of a recent theory of short-term plasticity that we proposed (Pfister et al., 2009, 2010).
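As a generic illustration of the kind of description involved (not the specific model of Surace and Pfister, 2014), the sketch below simulates subthreshold membrane-potential fluctuations as an Ornstein-Uhlenbeck process, a standard tractable building block for such models; all parameter values are illustrative.

```python
import numpy as np

# Generic sketch (illustrative parameters): subthreshold membrane-potential
# fluctuations modeled as an Ornstein-Uhlenbeck process. This is a standard
# building block, not the specific model of Surace and Pfister (2014).
rng = np.random.default_rng(0)

dt = 1e-3                 # time step [s]
tau = 0.02                # membrane time constant [s]
mu = -65.0                # resting potential [mV]
sigma = 2.0               # stationary fluctuation amplitude [mV]
n_steps = int(1.0 / dt)   # simulate 1 s

v = np.empty(n_steps)
v[0] = mu
for t in range(1, n_steps):
    # Euler-Maruyama step of dv = -(v - mu)/tau dt + sigma*sqrt(2/tau) dW
    noise = sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
    v[t] = v[t - 1] - (v[t - 1] - mu) / tau * dt + noise

print(f"mean {v.mean():.1f} mV, std {v.std():.1f} mV")  # approx. mu and sigma
```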
Collaborator: Simone Surace (UZH, ETHZ).
Relevant publications:
- Surace and Pfister. A statistical model for in vivo neuronal dynamics. arXiv (2014). [Journal], [PDF]
- Pfister, Dayan and Lengyel. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nature Neuroscience (2010). [Journal], [PDF], [Supp. Info]
Higher-order correlations in a recurrent network model
Neurons are not isolated units; they interact strongly. What is the best statistical description of such an interacting network of spiking neurons that still remains analytically tractable? Hawkes (1971) proposed a powerful and tractable model, now known as the Hawkes process, which describes the interaction between point-emission units. In particular, Hawkes calculated the spiking correlation function in this recurrent network. In this project we calculate analytically the higher-order moments of this model. This is of particular relevance if we want to calculate the effect of synaptic plasticity in such a recurrent network. Indeed, in previous work we showed that synaptic plasticity depends not only on the correlation between pre- and postsynaptic activity, but can also depend on higher-order correlations such as triplet correlations.
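For readers unfamiliar with the model, the sketch below simulates a univariate Hawkes process with an exponential self-excitation kernel using Ogata's thinning algorithm, and checks the empirical rate against the analytical stationary rate mu / (1 - alpha/beta); the parameter values are illustrative, and the project itself concerns the multivariate, higher-order statistics of this process.

```python
import numpy as np

# Minimal sketch (illustrative parameters): a univariate Hawkes process with
# exponential kernel alpha * exp(-beta * t), simulated with Ogata's thinning
# algorithm. Hawkes (1971) gives the stationary rate mu / (1 - alpha/beta).
rng = np.random.default_rng(1)
mu, alpha, beta = 0.5, 0.8, 2.0   # baseline rate, kernel weight, kernel decay
T = 10_000.0                      # simulation length

t, s, events = 0.0, 0.0, []       # s: self-excitation part of the intensity
while True:
    lam_bar = mu + s                      # bound: intensity decays between events
    dt = rng.exponential(1.0 / lam_bar)   # candidate waiting time
    t += dt
    if t >= T:
        break
    s *= np.exp(-beta * dt)               # decay excitation to the candidate time
    if rng.random() * lam_bar <= mu + s:  # accept with prob lambda(t) / lam_bar
        events.append(t)
        s += alpha                        # each spike bumps the intensity

print(f"empirical rate {len(events) / T:.3f} "
      f"vs analytical {mu / (1 - alpha / beta):.3f}")
```

The thinning bound is valid here because, with an exponential kernel, the intensity can only decay between events.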
Collaborator: Dr. Matthieu Gilson (University Pompeu Fabra, Barcelona).
Relevant publications:
- Gjorgjieva, Clopath, Audet and Pfister. A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. PNAS (2011). [Journal], [PDF]
- Pfister and Gerstner. Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity. Journal of Neuroscience (2006). [PDF] and [high resolution figures]
Statistical learning with biological neural networks
Optimal nonlinear filtering with neural networks
A remarkable property of the brain is its ability to continuously extract relevant features from a changing environment. This task becomes even more challenging once we realize that sensory inputs are not perfectly reliable. The problem can be formalized as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable from noisy observations. A well-known solution is the Kalman filter for linear hidden dynamics, or the extended Kalman filter for nonlinear dynamics. However, it remains unclear how these filtering algorithms may be implemented in neural tissue. The aim of this project is to propose neuronal dynamics that approximate this nonlinear filtering.
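As a reference point, the sketch below implements the linear case in one dimension: a scalar Kalman filter tracking a latent random walk from noisy observations (all parameters illustrative). How such computations could be carried out by spiking neurons in the nonlinear case is precisely what the project addresses; the sketch makes no attempt at that.

```python
import numpy as np

# Minimal sketch of the underlying filtering problem (illustrative numbers):
# a scalar Kalman filter tracking a latent random walk x_t from noisy
# observations y_t = x_t + noise.
rng = np.random.default_rng(2)

q, r, T = 0.01, 0.25, 200      # process noise var, observation noise var, steps
x = np.cumsum(np.sqrt(q) * rng.standard_normal(T))   # hidden random walk
y = x + np.sqrt(r) * rng.standard_normal(T)          # noisy observations

x_hat, p = 0.0, 1.0            # posterior mean and variance
estimates = []
for obs in y:
    p += q                      # predict: uncertainty grows by the process noise
    k = p / (p + r)             # Kalman gain: trust in the new observation
    x_hat += k * (obs - x_hat)  # update the mean toward the observation
    p *= (1 - k)                # update the posterior variance
    estimates.append(x_hat)

err_filter = np.mean((np.array(estimates) - x) ** 2)
err_raw = np.mean((y - x) ** 2)
print(f"filter MSE {err_filter:.3f} < raw observation MSE {err_raw:.3f}")
```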
Collaborators: Anna Kutschireiter and Simone Surace (UZH, ETHZ) and Dr. Henning Sprekeler (TU Berlin).
Statistical learning and synaptic plasticity
Far from being static transmission units, synapses are highly complex elements with dynamics ranging from milliseconds to hours or even days. This complexity becomes even more striking when we consider the variability of synaptic dynamics and synaptic plasticity: some synapses facilitate while others depress, and under a given stimulation protocol some synapses undergo long-term potentiation while others undergo long-term depression. How can we make sense of this huge variability? The goal of this project is to propose a unifying theoretical framework which can (at least partially) explain it. The aim is then to determine to what extent the variability in neuronal types can predict the diversity of synaptic dynamics and plasticity within this computational framework.
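To illustrate the facilitating/depressing distinction, the sketch below implements a common phenomenological description of short-term plasticity, the Tsodyks-Markram model, in which the same update equations produce facilitation or depression depending on the parameter regime. The parameter values are illustrative, and this sketch is not the unifying framework the project aims at.

```python
import numpy as np

# Sketch of the Tsodyks-Markram short-term plasticity model (illustrative
# parameters): the same equations yield facilitating or depressing synapses
# depending on the parameter regime.
def tm_amplitudes(spike_times, U, tau_f, tau_d):
    """Relative release amplitude u*R at each presynaptic spike."""
    u, R, t_prev, amps = 0.0, 1.0, None, []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            u *= np.exp(-dt / tau_f)               # facilitation decays to 0
            R = 1 - (1 - R) * np.exp(-dt / tau_d)  # resources recover to 1
        u += U * (1 - u)       # spike transiently increases utilization
        amps.append(u * R)     # release ~ utilization * available resources
        R -= u * R             # spike consumes a fraction u of the resources
        t_prev = t
    return amps

spikes = np.arange(0, 0.5, 0.05)   # a 20 Hz presynaptic train for 0.5 s
print("depressing:  ", np.round(tm_amplitudes(spikes, U=0.5, tau_f=0.05, tau_d=0.8), 3))
print("facilitating:", np.round(tm_amplitudes(spikes, U=0.1, tau_f=0.8, tau_d=0.05), 3))
```

With a high release probability and slow resource recovery the amplitudes shrink over the train; with a low release probability and slow facilitation decay they grow.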
Collaborator: Peter Dziennik (UZH, ETHZ).
Relevant publications:
- Brea, Senn and Pfister. Matching Recall and Storage in Sequence Learning with Spiking Neural Networks. Journal of Neuroscience (2013). [Journal], [PDF]