Decoding Vocal Communication: Spatiotemporal Dynamics of Neural Auditory Processing in the Macaque Auditory Cortex
The ability to process vocalizations is fundamental to social interaction. The auditory cortex plays a crucial role in this process, transforming incoming acoustic signals into meaningful representations of vocal identity and emotional content. Understanding how this transformation occurs requires unraveling the complex interplay between local field potentials (LFPs), which reflect the synchronized activity of neural populations, and the spiking activity of individual neurons. This project offers a unique opportunity to investigate the spatiotemporal dynamics of neural processing in the auditory cortex during vocal communication, bridging the gap between population-level activity and individual neuron contributions.
Project description
This project aims to understand how the spatiotemporal propagation of LFPs, together with the interaction between LFPs and the spiking activity of the neuronal population, encodes different aspects of vocalization and voice representation. We will leverage a rich dataset of neural recordings from the temporal lobe to investigate how acoustic features are transformed and represented across the auditory processing hierarchy. You will work with large-scale neural datasets and apply advanced analysis techniques to understand the neural basis of vocal communication.
Methodology: In this project, you will analyze neural data recorded from multi-electrode arrays implanted in the temporal cortex during the presentation of various vocalizations. You will employ advanced signal processing and data analysis techniques: investigating the propagation of LFP signals across the array to identify activity patterns related to different vocal features; quantifying the relationship between LFP activity and the spiking of individual neurons, to understand how population-level dynamics shape neural firing; and using the resulting features to decode vocal attributes from neural activity.
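To give a flavor of the second step, the sketch below estimates spike-field coupling on synthetic data (the actual recordings, channel layout, and analysis pipeline are not specified here, so everything below is an illustrative assumption): a spike train whose rate is locked to an 8 Hz oscillation in a simulated LFP, and the spike-field coherence between the two computed with SciPy.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical example on synthetic data: one LFP channel and one
# neuron's spike train, both 10 s long, sampled at 1 kHz.
rng = np.random.default_rng(0)
fs = 1000  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic LFP: an 8 Hz oscillation plus Gaussian noise.
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic spikes phase-locked to the oscillation: the firing rate
# (in spikes/s) is modulated by an 8 Hz cosine.
rate = 20 * (1 + np.cos(2 * np.pi * 8 * t)) / 2
spikes = (rng.random(t.size) < rate / fs).astype(float)

# Spike-field coherence: magnitude-squared coherence between the LFP
# and the binned spike train, estimated with Welch's method.
f, coh = coherence(lfp, spikes, fs=fs, nperseg=2048)

# The coherence spectrum should peak near the 8 Hz locking frequency.
peak_freq = f[np.argmax(coh)]
print(f"peak coherence at {peak_freq:.1f} Hz")
```

In the real analysis one would repeat this per electrode and per neuron, which turns the coherence spectra into a spatial map of coupling across the array; the synthetic signals here only stand in for that data.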
Requirements
- Proficiency in Python is essential, including experience with scientific computing libraries (e.g., NumPy, SciPy, Matplotlib).
- A solid foundation in mathematics, particularly linear algebra and signal processing, is required for understanding and implementing the analytical methods and computational models.
Contact
- Prof. Timothée Proix: proix@ini.ethz.ch
- Dr. Margherita Giamundo (Aix-Marseille University)
Starting Date & Duration
This project is currently available as a semester project or a Master’s thesis.
Related Literature
- Giamundo et al. (2024). A population of neurons selective for human voice in the monkey brain. PNAS 121(25). https://doi.org/10.1073/pnas.2405588121
- Davis et al. (2020). Spontaneous travelling cortical waves gate perception in behaving primates. Nature 587(7834). https://www.nature.com/articles/s41586-020-2802-y