Personalized priors: How individual differences in internal models explain idiosyncrasies in natural vision
Participation deadline
No participation deadline.
Project description
In the cognitive and neural sciences, the brain is widely viewed as a predictive system. On this view, the brain conceives the world by comparing sensory inputs to internally generated models of what the world should look like. Despite this emphasis on internal models, their key properties are not well understood. We currently do not know what exactly is contained in our internal models and how these contents vary systematically across individuals. In the absence of suitable methods for assessing the contents of internal models, the predictive brain has essentially remained a black box.

Here, we develop a novel approach for opening this black box. Focusing on natural vision, we will use creative drawing methods to characterize internal models. Through the careful analysis of drawings of real-world scenes, we will distill out the contents of individual people's internal models. These insights will form the basis for a comprehensive cognitive, neural, and computational investigation of natural vision on the individual level: First, we will establish how individual differences in the contents of internal models explain the efficiency of scene vision, on the behavioral and neural levels. Second, we will harness variations in people's drawings to determine the critical features of internal models that guide scene vision. Third, we will enrich the currently best deep learning models of vision with information about internal models to obtain computational predictions for individual scene perception. Finally, we will systematically investigate how individual differences in internal models mimic idiosyncrasies in visual and linguistic experience, functional brain architecture, and scene exploration.

Our project will illuminate natural vision from a new angle – starting from a characterization of individual people's internal models of the world. Through this change of perspective, we can make true progress in understanding what exactly is predicted in the predictive brain.