Innovating Works

PrAud

Funded
Mesoscopic computational imaging of the predictive listening human brain
How do we understand the complex, noisy, and often incomplete sounds that reach our ears? Sounds are not perceived in isolation. The context in which we encounter them allows us to predict what we may hear next. How is contextual sound processing implemented in the brain? From the sensory periphery to the cortex, complex information is extracted from the acoustic content of sounds. This feedforward processing is complemented by extensive feedback (FB) processing. Current research suggests that FB confers flexibility on auditory perception by generating predictions of the input and supplementing noisy or missing information with contextual information. So far, limitations in the coverage and spatial resolution of non-invasive imaging methods have prevented grounding contextual sound processing in the fundamental computational units of the human brain, resulting in an incomplete understanding of its biological underpinnings. Here, I propose to use ultra-high-field (UHF) functional magnetic resonance imaging (fMRI) to study how expectations shape human hearing. The high spatial resolution and coverage of UHF-fMRI will allow examining fundamental brain units: small subcortical structures and layers of cortex. I will investigate how responses change when acoustic information needs to be prioritized in an uncertain or noisy soundscape. Complementing UHF-fMRI measurements with magnetoencephalography (MEG), I will derive a neurobiological model of contextual sound processing at high spatial and temporal resolution. By comparing this model to state-of-the-art artificial intelligence, I will generalize it to naturalistic settings. This project links the algorithmic and mesoscopic implementation levels to reveal the neurobiological mechanisms supporting hearing in context. The resulting model will allow testing hypotheses of aberrant contextual processing in phantom hearing (tinnitus and auditory hallucinations).
Project duration: 62 months
Start date: 2021-04-21
End date: 2026-06-30

Funding line: awarded

The H2020 body notified the award of the project on 2021-04-21.
Target funding line: the project was financed through the following grant:
Budget: the total project budget amounts to €2M.
Project leader
UNIVERSITEIT MAASTRICHT. No description or corporate purpose has been specified for this organization.
Technology profile: TRL 4-5