Reinforcement Learning of Visual Decision Making in RNNs (Master Thesis)

In most primates, including humans, visual perception is fine-grained only in a small patch of the visual field called the fovea. The fine details of a visual scene can therefore often only be sensed when looking directly at them, and the eyes must constantly move from fixating one part of a scene to fixating the next. Since this takes time and resources, the question of where to look next is crucial for survival, and a sophisticated neural system has evolved to control visual attention. This system includes a network of frontal brain areas that are also critical for higher cognition. We study the function of these areas using simultaneous recordings from hundreds of neurons while subjects solve naturalistic, visual, cognitive tasks involving eye movements.

A great deal of insight into how different brain areas might implement solutions to such tasks has come from (1) training recurrent neural networks (RNNs) to perform the tasks [1] and then (2) reverse engineering [2] the trained RNNs to generate candidate hypotheses of how the brain might do it [3]. One encouraging observation is that activations in the trained RNNs resemble neural activity recorded in animals performing the respective tasks.
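One core reverse-engineering tool from reference [2] is fixed-point analysis: finding states x where the network update maps x onto itself, by numerically minimising q(x) = ½‖F(x) − x‖². The toy sketch below illustrates the idea for a small tanh RNN with hypothetical random weights (all sizes and parameter values are illustrative, not from the project); the recurrent matrix is scaled to be contractive so that a stable fixed point is guaranteed to exist.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy network size (hypothetical)

# Random recurrent weights, scaled to be contractive (norm 0.5),
# so the autonomous dynamics have a unique, stable fixed point.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
W = 0.5 * Q

def step(x):
    """One autonomous RNN update (no input): x_{t+1} = tanh(W x_t)."""
    return np.tanh(W @ x)

def find_fixed_point(x0, lr=0.1, iters=2000):
    """Minimise q(x) = 0.5 * ||step(x) - x||^2 by gradient descent,
    in the spirit of the fixed-point analysis of Sussillo & Barak [2]."""
    x = x0.copy()
    for _ in range(iters):
        r = step(x) - x
        # Jacobian of step at x: diag(1 - tanh^2(W x)) @ W
        J = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W
        x -= lr * (J - np.eye(n)).T @ r  # gradient of q w.r.t. x
    return x

x_star = find_fixed_point(0.5 * rng.standard_normal(n))
residual = np.linalg.norm(step(x_star) - x_star)
```

Once a fixed point is found, linearising the dynamics around it (inspecting the eigenvalues of the Jacobian J at x_star) reveals the local computation the network performs, which is the step that yields mechanistic hypotheses such as the line-attractor account in reference [3].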

The goal of this project is to use reinforcement learning [4] to train a recurrent architecture that controls the eye movements of an artificial agent in eye-movement-guided visual decision tasks.
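To make the setting concrete, here is a minimal, hypothetical sketch of how reinforcement learning can shape a recurrent controller for a toy saccade task: a target cue is fed into a fixed random RNN, and a softmax readout chooses which of K locations to "look at", rewarded only when it lands on the target. For simplicity, only the readout is trained (with REINFORCE and a running-average baseline, rather than full backpropagation through time); task, sizes, and hyperparameters are all illustrative and not part of the project description.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, T = 4, 32, 5  # saccade targets, hidden units, RNN steps (toy sizes)

W_in = rng.standard_normal((n, K)) / np.sqrt(K)      # fixed input weights
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # fixed recurrent weights
V = np.zeros((K, n))                                 # trainable policy readout

def run_rnn(target):
    """Run the RNN for T steps with the target cue as a persistent input."""
    u = np.eye(K)[target]
    h = np.zeros(n)
    for _ in range(T):
        h = np.tanh(W @ h + W_in @ u)
    return h

def policy(h):
    """Softmax over the K possible saccade endpoints."""
    z = V @ h
    p = np.exp(z - z.max())
    return p / p.sum()

baseline, lr = 0.0, 0.1
for episode in range(3000):
    target = rng.integers(K)
    h = run_rnn(target)
    p = policy(h)
    a = rng.choice(K, p=p)           # sample a saccade
    r = 1.0 if a == target else 0.0  # reward: did the agent look at the target?
    # REINFORCE update on the readout: grad log pi(a) = (onehot(a) - p) h^T
    V += lr * (r - baseline) * np.outer(np.eye(K)[a] - p, h)
    baseline += 0.05 * (r - baseline)  # running-average reward baseline

# Greedy evaluation: fraction of targets the trained policy fixates correctly.
accuracy = np.mean([policy(run_rnn(t)).argmax() == t for t in range(K)])
```

The actual project would go beyond this sketch, e.g. training the recurrent weights themselves and using richer, naturalistic tasks, so that the learned recurrent dynamics can be compared with recorded neural activity.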

Requirements for this project are programming skills (MATLAB, Python); familiarity with machine learning, neural networks, and reinforcement learning; and basic knowledge of linear algebra and standard data-analysis methods such as multivariate linear regression and principal component analysis.

Contact
Valerio Mante: valerio (at) ini.uzh.ch
Subject Line: “Master Thesis/Project”

References
1. Martens, J. & Sutskever, I. Learning recurrent neural networks with Hessian-free optimization. Proc. 28th Int. Conf. Mach. Learn. ICML 2011 1033–1040 (2011).
2. Sussillo, D. & Barak, O. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25, 626–49 (2013).
3. Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).
4. Wang, J. X. et al. Prefrontal cortex as a meta-reinforcement learning system. Nat. Neurosci. 21, 860–868 (2018).