The neural basis of visual interaction between scenes and objects
SEEING FROM CONTEXT project information
Project duration: 38 months
Start date: 2015-03-27
End date: 2018-05-31
Participation deadline: none.
Project description
We easily categorize places and objects in a single glance, a computationally complex task that presents a central challenge for vision neuroscience. Considerable evidence points to a division of scene and object processing into two distinct neural pathways that rely on different types of visual cues. However, scenes and objects are also known to interact strongly in visual perception, as seen in contextual effects of background on object perception. At present, the neural mechanisms by which scenes and objects interact remain unknown, leaving a critical gap in our understanding of these two major visual pathways.

The main goal of this multi-method proposal is to uncover the neural mechanisms of scene-object interactions. To this end, I contrast three competing theoretical models. A parallel model predicts only stimulus-driven representations of scenes and objects in the visual cortex. In contrast, interactive models predict that representations of scenes and objects in the visual cortex influence one another: whereas a visual-interactive model suggests a direct interaction, a feedback model suggests that the interaction is mediated by frontal regions.

To test these models, I propose a novel psychophysical paradigm of seeing objects from scene context and scenes from object context. With this paradigm, I will examine how scene and object processing affect one another and identify the potential neural sources of these modulations using fMRI (objective 1). I will then use MEG to decode the timeline of these neural processes (objective 2). Establishing a clear neurocognitive model of scene-object interaction would not only advance our understanding of the two central pathways of the ventral visual stream, but also contribute significantly to the definition of vision as an interactive system rather than a set of specialized parallel modules. Shifting from localized visual modules to interactive visual processes will also broaden my expertise as a cognitive neuroscientist.
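The proposal does not spell out an analysis pipeline, but a rough, purely illustrative sketch of what time-resolved MEG decoding of such a "timeline" can look like is given below. It trains and cross-validates a classifier independently at each time point of simulated sensor data; the dimensions, condition labels, and the weak effect injected after a nominal stimulus onset are all invented for the example and are not part of the project.

# Illustrative sketch only: time-resolved decoding of a binary condition
# (e.g., congruent vs. incongruent scene-object pairings -- hypothetical labels)
# from simulated MEG sensor data of shape (n_trials, n_sensors, n_times).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 120     # assumed toy dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                # assumed condition labels
# Inject a weak effect on a few sensors after a nominal "stimulus onset"
# (time index 40) so the decoder has something to find in this toy data.
X[y == 1, :10, 40:] += 0.3

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.empty(n_times)
for t in range(n_times):
    # Fit and cross-validate the classifier separately at each time point,
    # yielding a decoding time course (chance level = 0.5 for two classes).
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy %.2f at time index %d"
      % (accuracy.max(), int(accuracy.argmax())))

In a real analysis, when such an accuracy time course departs from chance, and for how long, is the kind of information objective 2 aims to recover.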