Biophysical neural network modelling of flexible reinforcement learning


The ability to adapt one’s decisions to changing rules during reinforcement learning is a key component of flexible behaviour. Such flexibility depends on brain-wide neural interactions (Wang et al., 2023), yet precisely how these networks support adaptive behaviour remains poorly understood (Banerjee et al., 2021). This project aims to create biologically informed neural network models (Yang & Wang, 2020) that can solve reinforcement learning tasks. The student will build on EEG and fMRI results from reversal learning tasks performed in our lab and use this framework to create a biophysical recurrent neural network that identifies the nodes of the hub network supporting flexible learning (Tuzsus et al., 2024). The project will thus contribute to a deeper understanding of flexible behaviour and of its impairment in neurological disorders.
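To give a flavour of the task structure involved, a reversal learning problem can be sketched as a two-armed bandit whose reward contingencies swap at fixed intervals, solved here by a simple delta-rule learner. All class, function, and parameter names below are illustrative, not the lab's actual task code, and the delta-rule agent stands in for the recurrent network models the project will develop.

```python
import random

class ReversalBandit:
    """Toy two-armed bandit whose reward contingencies swap every
    `block` trials, mimicking a rule reversal."""
    def __init__(self, p_reward=0.8, block=100, seed=0):
        self.p_reward = p_reward  # reward probability of the better arm
        self.block = block        # trials between reversals
        self.best_arm = 0
        self.t = 0
        self.rng = random.Random(seed)

    def step(self, arm):
        self.t += 1
        if self.t % self.block == 0:
            self.best_arm = 1 - self.best_arm  # rule reversal
        p = self.p_reward if arm == self.best_arm else 1 - self.p_reward
        return 1.0 if self.rng.random() < p else 0.0

def run_delta_rule(env, n_trials=400, alpha=0.2, eps=0.1, seed=1):
    """Delta-rule (Rescorla-Wagner) learner with epsilon-greedy choice."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                # learned arm values
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < eps:    # occasional exploration
            arm = rng.randrange(2)
        else:                     # otherwise exploit the higher-valued arm
            arm = max(range(2), key=q.__getitem__)
        r = env.step(arm)
        q[arm] += alpha * (r - q[arm])  # prediction-error update
        total += r
    return total / n_trials       # mean reward across trials

avg = run_delta_rule(ReversalBandit())
```

An agent that re-learns after each reversal earns more than the 0.5 reward rate expected of a non-adaptive chooser, which is the behavioural signature of flexible learning that the planned recurrent networks would be trained and analysed on.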

This exciting project is especially suited to candidates with a strong background in computer science, mathematics, computational neuroscience, physics, or machine learning who are interested in learning about cognitive, circuit, or systems neuroscience. Programming experience in MATLAB and Python is required. The student will contribute to an ongoing project with a clear end-point, with further opportunities to interact with the Senn lab in Bern and DeepMind in London.

If you are interested, please send a brief statement and a CV to Prof. Abhi Banerjee, who is affiliated with INI, at abhishek.banerjee@pharm.ox.ac.uk. Informal enquiries are welcome.

Website: https://www.adaptive-decisions.com; X: @abhii_mit

https://www.pharm.ox.ac.uk/team/abhishek-banerjee 

 

Keywords: 

Flexible decision-making, recurrent neural networks, machine learning, reversal learning, EEG

 

References:

1. Wang, B. A., Veismann, M., Banerjee, A., & Pleger, B. (2023). Human orbitofrontal cortex signals decision outcomes to sensory cortex during behavioral adaptations. Nature Communications, 14(1), 3552.

2. Banerjee, A., Rikhye, R. V., & Marblestone, A. (2021). Reinforcement-guided learning in frontal neocortex: emerging computational concepts. Current Opinion in Behavioral Sciences, 38, 133-140.

3. Yang, G. R., & Wang, X. J. (2020). Artificial neural networks for neuroscientists: a primer. Neuron, 107(6), 1048-1070.

4. Tuzsus, D., Brands, A., Pappas, I., & Peters, J. (2024). Exploration-exploitation mechanisms in recurrent neural networks and human learners in restless bandit problems. Computational Brain & Behavior, 1-43.