Mesoscopic computational imaging of the predictive listening human brain
Project duration: 62 months
Start date: 2021-04-21
End date: 2026-06-30
Project leader
UNIVERSITEIT MAASTRICHT
TRL
4-5
Project budget
€2M
Participation deadline
No participation deadline.
Project description
How do we understand the complex, noisy, and often incomplete sounds that reach our ears? Sounds are not perceived in isolation. The context in which we encounter them allows us to predict what we may hear next. How is contextual sound processing implemented in the brain? From the sensory periphery to the cortex, complex information is extracted from the acoustic content of sounds. This feedforward processing is complemented by extensive feedback (FB) processing. Current research suggests that FB confers flexibility to auditory perception by generating predictions of the input and supplementing noisy or missing information with contextual information. So far, limitations in the coverage and spatial resolution of non-invasive imaging methods have prevented grounding contextual sound processing in the fundamental computational units of the human brain, resulting in an incomplete understanding of its biological underpinnings. Here, I propose to use ultra-high-field (UHF) functional magnetic resonance imaging (fMRI) to study how expectations shape human hearing. The high spatial resolution and coverage of UHF-fMRI will allow examining fundamental brain units: small subcortical structures and the layers of the cortex. I will investigate how responses change when acoustic information needs to be prioritized in an uncertain or noisy soundscape. By complementing UHF-fMRI measurements with magnetoencephalography (MEG), I will derive a neurobiological model of contextual sound processing at high spatial and temporal resolution. By comparing this model to state-of-the-art artificial intelligence, I will generalize it to naturalistic settings. This project links the algorithmic and mesoscopic implementation levels of analysis to reveal the neurobiological mechanisms supporting hearing in context. The resulting model will allow testing hypotheses of aberrant contextual processing in phantom hearing (tinnitus and auditory hallucinations).