How does the brain organize sounds into auditory scenes?
SOUNDSCENE project information
Project duration: 86 months
Start date: 2018-03-06
End date: 2025-05-31
Project description
Real-world listening involves making sense of the numerous competing sound sources that exist around us. The neuro-computational challenge faced by the brain is to reconstruct these sources from the composite waveform that arrives at the ear, a process known as auditory scene analysis. While young, normal-hearing listeners can parse an auditory scene with ease, the neural mechanisms that allow the brain to do this are unknown, and we are not yet able to recreate them with digital technology. Hearing loss, aging, impairments in central auditory processing, or an inability to appropriately engage attentional mechanisms can all impair the ability to listen in complex, noisy situations; an understanding of how the healthy brain organizes a sound mixture into perceptual sources may therefore guide rehabilitative strategies targeting these problems.
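As a toy illustration of this computational problem (not the project's method), the sketch below frames scene analysis as blind source separation: two synthetic sources are mixed into composite waveforms, as at the two ears, and then recovered with independent component analysis using numpy and scikit-learn. All signals and parameters are invented for the example.

```python
# Illustrative only: auditory scene analysis framed as blind source
# separation. Two synthetic "sound sources" are mixed into composite
# waveforms (as at the two ears) and recovered with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)                      # 1 s at 8 kHz

# Two independent sources: a pure tone and amplitude-modulated noise
s1 = np.sin(2 * np.pi * 440 * t)                 # 440 Hz tone
s2 = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(t.size)
S = np.c_[s1, s2]                                # (samples, sources)

# Composite waveforms at two "ears", via an unknown mixing matrix A
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T

# Recover the sources from the mixtures alone
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# ICA recovers sources only up to permutation and scaling, so check
# each component against its best-matching true source
for i in range(2):
    r = max(abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]) for j in range(2))
    print(f"component {i}: best |r| with a true source = {r:.2f}")
```

Real auditory scene analysis is far harder than this sketch: sources typically outnumber sensors, mixtures are convolutive, and the brain exploits grouping cues and attention rather than statistical independence alone.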
While functional imaging studies in humans highlight a network of brain regions that support auditory scene analysis, little is known about the cellular and circuit-based mechanisms that operate within these networks. A critical barrier to advancing our understanding of how the brain solves scene analysis has been the failure to combine behavioural testing, which provides a crucial measure of how a given sound mixture is perceived, with methods for recording and manipulating neuronal activity in animal models. Here, I propose to use a novel behavioural paradigm in conjunction with high-channel-count electrophysiological recordings and optogenetic manipulation to elucidate how auditory cortex, prefrontal cortex and hippocampus enable scene analysis during active listening. These methods will allow us to record single-cell activity across a set of brain regions more typical of functional imaging studies, in order to understand how processing within each area, and the interactions between these areas, underpin auditory scene analysis.
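Purely as a hypothetical sketch of the kind of analysis such multi-area recordings permit, the snippet below cross-correlates binned spike counts from a simulated auditory-cortex unit and a simulated prefrontal unit to estimate a lead-lag relationship between areas; every signal and parameter here is a placeholder, not project data.

```python
# Hypothetical data: one way inter-areal "interactions" can be probed,
# by cross-correlating 1 ms binned spike counts from two units.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 60_000                              # 60 s of 1 ms bins

# Placeholder spike trains: the PFC unit loosely follows the AC unit
# with a ~10 ms lag (stand-in for real high-channel-count recordings)
ac = rng.random(n_bins) < 0.02               # ~20 spikes/s
pfc = np.roll(ac, 10) & (rng.random(n_bins) < 0.5)
pfc |= rng.random(n_bins) < 0.01             # independent background

def cross_correlogram(a, b, max_lag=50):
    """Normalized cross-correlation of b relative to a, per 1 ms lag."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(a, np.roll(b, -k)) for k in lags])
    return lags, cc / (a.std() * b.std() * len(a))

lags, cc = cross_correlogram(ac, pfc)
print(f"peak correlation at lag {lags[np.argmax(cc)]} ms")   # ~ +10 ms
```

A positive peak lag here would indicate the auditory-cortex unit leading the prefrontal unit; in practice such correlograms are only one of many ways to characterize how processing within and between areas unfolds during active listening.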