MSc Thesis: Are artificial recurrent neural networks a good model for cortex?

This Master’s thesis addresses a core problem in computational neuroscience: whether neural networks trained with machine learning techniques can be used as models of the brain. We will use mathematical tools to study a simple example for which we have both a computational model and comparable experimental data.


Motivation

In Neuroscience: In recent years, the backpropagation algorithm has become the cornerstone of a new branch of neuroscience studies. The approach is simple: train a recurrent neural network to accomplish a task relevant to the brain, and then compare the solution found by backpropagation through time (BPTT) with experimental data. This approach relies on the assumption that BPTT finds the same solution that the brain would find. We will explore when and how this assumption is valid.

In Machine Learning: Backpropagation is the workhorse of the current deep learning paradigm. However, the fact that it is often a black box makes it unsuited for critical applications. As a key contribution of this project, we will make a step towards understanding the types of solutions that this algorithm finds.


Background and previous works

Computational: In previous works we trained a recurrent neural network to imitate a filter using backpropagation through time (BPTT). We found that even though the network is able to imitate the output, BPTT produces networks whose poles do not match those of the filter. This implies that the network does not have the same dynamical properties; hence it performs poorly on out-of-distribution examples and often fails to identify the dominant modes of the filter.
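To make the pole comparison concrete, here is a minimal sketch of the analysis, assuming a linear(ized) RNN with recurrent matrix W and a target filter given by its transfer-function coefficients; the filter choice and all names below are illustrative stand-ins, not the project codebase:

```python
import numpy as np
import scipy.signal as sig

# Target filter: a second-order low-pass as an illustrative example.
b, a = sig.butter(2, 0.2)        # numerator / denominator coefficients
filter_poles = np.roots(a)       # poles = roots of the denominator

# Stand-in for the recurrent weight matrix of an RNN trained by BPTT
# to imitate this filter (here random, for illustration only).
rng = np.random.default_rng(0)
n = 50
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

# For a linear RNN the dynamical modes are the eigenvalues of W; if the
# network truly implemented the filter, its dominant eigenvalues would
# sit on top of the filter poles.
eigvals = np.linalg.eigvals(W)
dominant = eigvals[np.argsort(-np.abs(eigvals))[:len(filter_poles)]]

print("filter poles:      ", np.sort_complex(filter_poles))
print("dominant RNN modes:", np.sort_complex(dominant))
```

For a random W the two sets of course disagree; the finding described above is that they still disagree after the network has been trained to imitate the filter's output.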

Experimental: In an ongoing collaboration with experimentalists, we were able to show that the networks found in the middle temporal gyrus of the human brain have a filter-like structure that is helpful for language tasks. This suggests that training with BPTT will not yield the same networks.

Theoretical: We have preliminary proofs showing that when a system has sensitivity to initial conditions (meaning that some inputs have large effects after some time) and a restricted output space (meaning that every output can correspond to many different inputs), the loss function used to train the system parameters is ill-conditioned. This applies to a wide range of cognitive tasks, such as language, speech recognition and, in some cases, motor control.
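Schematically, and in illustrative notation of our own (the precise statements belong to the preliminary proofs and are not reproduced here): for a system x_{t+1} = f_θ(x_t, u_t) with readout y_t = g(x_t) and a squared-error loss, the Gauss–Newton approximation of the Hessian of the loss is

```latex
\[
  L(\theta) = \frac{1}{T}\sum_{t=1}^{T}
  \bigl\lVert g(x_t) - y_t^{*} \bigr\rVert^{2},
  \qquad
  H \;\approx\; \frac{2}{T}\sum_{t=1}^{T}
  \left(\frac{\partial x_t}{\partial \theta}\right)^{\!\top}
  \left(\frac{\partial g}{\partial x}\right)^{\!\top}
  \left(\frac{\partial g}{\partial x}\right)
  \left(\frac{\partial x_t}{\partial \theta}\right).
\]
```

Sensitivity to initial conditions makes some singular values of ∂x_t/∂θ grow exponentially in t, which inflates the largest eigenvalue of H, while the null space of ∂g/∂x (many states mapping to the same output) contributes near-zero eigenvalues; together these drive the condition number of H, and hence the conditioning of the loss, up with the horizon T.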


Approach

Our key goal is to combine the three previous results. To do this, you will train recurrent neural networks to solve a speech recognition task, and use the computational framework developed before to examine how the networks found by backpropagation differ from those appearing in the brain.

Once this is achieved, we will propose a combination of brain-inspired prior structures and backpropagation to provide better results overall.
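As an illustration of one ingredient of this combination, here is a minimal echo-state-style sketch in which the recurrent weights stay fixed (the slot where a brain-inspired prior structure would go) and only a linear readout is trained; the spectral rescaling and the FIR target below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 200, 1000
W = rng.normal(size=(n, n)) / np.sqrt(n)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # echo-state rescaling
w_in = rng.normal(size=n)

u = rng.normal(size=T)                    # input signal
y = np.convolve(u, [0.5, 0.3, 0.2])[:T]   # target: a small FIR filter

# Collect reservoir states; the recurrent weights are never trained.
X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Train only the readout, via ridge regression.
lam = 1e-2
w_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
print("train MSE:", np.mean((X @ w_out - y) ** 2))
```

In the project, the random reservoir would be replaced by connectivity structures derived from the experimental results, and the comparison would be against networks trained end-to-end with BPTT.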


Tasks

  • Familiarize yourself with the theory of backpropagation, dynamical systems and statistics

  • Understand the previous results (computational, experimental, theoretical)

  • Train an RNN to solve a speech recognition task

    • With Echo State Networks using a brain-inspired network structure

    • With backpropagation through time

  • Examine the solutions in terms of the network spectra

  • Examine the loss function landscape around the final solutions (a minimal sketch of both analyses follows this list)
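The following sketch shows both analyses on a toy tanh RNN: the spectrum of the recurrent matrix, and the local curvature of the loss estimated with a finite-difference Hessian. The network, task, and step size are illustrative stand-ins, not the project codebase:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 6, 30                      # tiny network so the Hessian is cheap
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # stand-in weights
w_out = rng.normal(size=n) / np.sqrt(n)
u = rng.normal(size=T)
y_star = np.sin(0.3 * np.arange(T))

def loss(theta):
    """Mean squared error of a tanh RNN (weights = theta) over T steps."""
    Wm = theta.reshape(n, n)
    x = np.zeros(n)
    err = 0.0
    for t in range(T):
        x = np.tanh(Wm @ x + u[t])   # scalar input broadcast to all units
        err += (w_out @ x - y_star[t]) ** 2
    return err / T

# (i) Network spectrum: eigenvalues of the recurrent matrix.
print("spectral radius:", np.abs(np.linalg.eigvals(W)).max())

# (ii) Loss landscape: finite-difference Hessian of the loss at W,
# then its eigenvalues as a measure of local curvature.
theta0, eps, d = W.ravel(), 1e-3, n * n
H = np.zeros((d, d))
L0 = loss(theta0)
for i in range(d):
    ei = np.zeros(d); ei[i] = eps
    Li = loss(theta0 + ei)
    for j in range(i, d):
        ej = np.zeros(d); ej[j] = eps
        Hij = (loss(theta0 + ei + ej) - Li - loss(theta0 + ej) + L0) / eps**2
        H[i, j] = H[j, i] = Hij

# Away from a minimum some eigenvalues can be negative; the spread
# between the extremes indicates how ill-conditioned the landscape is.
evals = np.linalg.eigvalsh(H)
print("largest curvature: ", evals[-1])
print("smallest curvature:", evals[0])
```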


Your profile

  • Interest in machine learning and computational neuroscience.

  • Programming experience, specifically in machine learning.

  • Willingness to learn concepts from biology (microcircuit motifs, neural activity dimensionality) and from mathematics (curvature of loss landscapes, random matrix spectra).


Remarks

Given that this is a proof-of-concept project, the roadmap is open to changes. If you want to develop your own ideas or emphasize some of its aspects, we will be glad to alter the project outline.

Note that you are not required to have in-depth knowledge of network spectra, loss functions, Gaussian curvatures and the like, just the willingness to understand what they mean.


Supervision

The project will take place at the Grewe Lab, with Prof. Benjamin Grewe as PI and Dr. Pau Vilimelis Aceituno as the direct supervisor.

Interested students should send an e-mail to pau@ini.uzh.ch. Please attach a brief statement explaining your background and broad interests (or a copy of your CV) so that we know how to shape the project.
