Mesoscopic computational imaging of the predictive listening human brain
How do we understand the complex, noisy, and often incomplete sounds that reach our ears? Sounds are not perceived in isolation: the context in which we encounter them allows us to predict what we may hear next. How is contextual sound processing implemented in the brain? From the sensory periphery to the cortex, complex information is extracted from the acoustic content of sounds. This feedforward processing is complemented by extensive feedback (FB) processing. Current research suggests that FB confers flexibility on auditory perception by generating predictions of the input and supplementing noisy or missing information with contextual information. So far, limitations in the coverage and spatial resolution of non-invasive imaging methods have prevented grounding contextual sound processing in the fundamental computational units of the human brain, resulting in an incomplete understanding of its biological underpinnings. Here, I propose to use ultra-high-field (UHF) functional magnetic resonance imaging (fMRI) to study how expectations shape human hearing. The high spatial resolution and coverage of UHF-fMRI will allow examining fundamental brain units: small subcortical structures and cortical layers. I will investigate how responses change when acoustic information needs to be prioritized in an uncertain or noisy soundscape. Complementing UHF-fMRI measurements with magnetoencephalography (MEG), I will derive a neurobiological model of contextual sound processing at high spatial and temporal resolution. By comparing this model to state-of-the-art artificial intelligence, I will generalize it to naturalistic settings. This project links the algorithmic and mesoscopic implementation levels to reveal the neurobiological mechanisms supporting hearing in context. The resulting model will allow testing hypotheses of aberrant contextual processing in phantom hearing (tinnitus and auditory hallucinations).