I studied Computational Biology and Bioinformatics at the University of Basel and ETH Zurich. During my Master's, I became interested in communication - how cells communicate during development and how they can "do the wave" like people in a sports stadium. After that, I took a break from academia and developed mechanistic models of hematological diseases during an internship at Roche.
Now, in my PhD at INI, I am again interested in communication. We study songbirds as a model for imitation-based vocal learning. To imitate a vocal expert (such as an adult tutor bird), the brain needs to represent an auditory template of the expert's behavior (e.g. its song), which it tries to match by reinforcing similar self-generated vocal output. The song template was postulated 50 years ago, but its exact nature remains elusive. How is it computed from only a few noisy positive examples of tutor song?