
ORIENT

Funded
Goal-directed eye-head coordination in dynamic multisensory environments
Rapid object identification is crucial for the survival of all organisms, but it poses daunting challenges when many stimuli compete for attention and multiple sensory and motor systems are involved in processing, programming, and generating an eye-head gaze-orienting response to a selected goal. How do normal and sensory-impaired brains decide which signals to integrate (goal) or suppress (distracter)? Audiovisual (AV) integration only helps for spatially and temporally aligned stimuli. However, sensory inputs differ markedly in their reliability, reference frames, and processing delays, presenting the brain with considerable spatial-temporal uncertainty. Vision and audition use coordinates that misalign whenever the eyes and head move, and their sensory acuities vary across space and time in fundamentally different ways. As a result, assessing AV alignment poses major computational problems, which have so far been studied only for the simplest stimulus-response conditions.

My groundbreaking approaches will tackle these problems on different levels by applying dynamic eye-head coordination paradigms in complex environments while systematically manipulating visual-vestibular-auditory context and uncertainty. I parametrically vary AV goal/distracter statistics, stimulus motion, and active versus passively evoked body movements. We perform advanced psychophysics on healthy subjects and on patients with well-defined sensory disorders, probing the sensorimotor strategies of normal and impaired systems by quantifying how they acquire priors about the (changing) environment and how they use feedback about active or passively induced self-motion of the eyes and head. I challenge current eye-head control models by incorporating top-down adaptive processes and eye-head motor feedback into realistic cortical-midbrain networks. Our modeling will be critically tested on an autonomously learning humanoid robot equipped with binocular foveal vision and human-like audition.
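The reference-frame misalignment mentioned in the abstract can be made concrete with a small numerical sketch. The Python snippet below is a minimal illustration, not part of the project itself: it assumes a simplified 2D angular model in which visual targets are coded in eye-centered (retinal) coordinates and auditory targets in head-centered coordinates, with small-angle vector addition, so the two frames disagree whenever the eyes rotate in the head. All function names and values are hypothetical.

```python
import numpy as np

def visual_target_in_head_frame(retinal_loc, eye_in_head):
    """Vision is eye-centered: combine a retinal location with the current
    eye-in-head orientation to obtain head-centered coordinates."""
    return np.asarray(retinal_loc) + np.asarray(eye_in_head)

def auditory_target_in_eye_frame(head_centered_loc, eye_in_head):
    """Audition is head-centered: to compare it with a retinal target,
    subtract the current eye-in-head orientation."""
    return np.asarray(head_centered_loc) - np.asarray(eye_in_head)

# Example: one physical source, both seen and heard, with the eyes rotated
# 10 degrees rightward in the head. Angles are (azimuth, elevation) in deg.
eye_in_head = np.array([10.0, 0.0])
sound_head = np.array([20.0, 5.0])    # auditory estimate, head frame
image_retina = np.array([10.0, 5.0])  # visual estimate, eye frame

# Raw coordinates disagree (20 vs. 10 deg azimuth) even though the source
# is the same; only after accounting for eye position do the frames align.
print(auditory_target_in_eye_frame(sound_head, eye_in_head))   # [10.  5.]
print(visual_target_in_head_frame(image_retina, eye_in_head))  # [20.  5.]
```

Accounting for eye position is only the static part of the problem; with moving stimuli and active eye-head movements, processing delays and varying sensory reliability make the alignment judgment substantially harder, which is the regime the project targets.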
Project duration: 74 months
Start date: 2016-10-17
End date: 2022-12-31

Funding line: awarded

The funding body (H2020) notified the award of the project on 2022-12-31.
Target funding line: the project was funded through the following grant:
ERC-ADG-2015: ERC Advanced Grant
Call closed 9 years ago
Budget: the total project budget amounts to €3M
Project leader: STICHTING RADBOUD UNIVERSITEIT
Technology profile: TRL 4-5