Interesting projects
- SEGMENT: 3D scene understanding in two glances (2M€, closed)
- TIN2012-39051: DESARROLLO DE LA HERMENEUTICA VISUAL EN LA INTERPRETACION DE... (91K€, closed)
- VISCUL: Visual Culture for Image Understanding (1M€, closed)
- TIN2013-47630-C2-2-R: SUPERVISION DE PATRONES DE COMPORTAMIENTO HUMANO MEDIANTE VI... (90K€, closed)
- BES-2013-064230: REVISITING REPRESENTATION MODELS FOR VISUAL RECOGNITION: OBJ... (84K€, closed)
- VIDEOLEARN: Video and 3D Analysis for Visual Learning (1M€, closed)
Participation deadline
No participation deadline.
Project description
Detecting and identifying targets in cluttered scenes is critical for successful interaction with the complex and dynamic environments we inhabit. Despite the ease and speed with which we recognize objects, visual recognition entails the computationally challenging task of binding relevant features together to perceive coherent, meaningful objects. Most work on visual integration focuses on spatial binding processes. However, detecting dynamic objects in cluttered scenes also requires storing image fragments in memory and integrating them across time. Here we propose to investigate the mechanisms that mediate integration across both space and time, exploiting the contour slit-viewing paradigm: an object moving behind a narrow slit is still perceived as a whole.

In task 1, we will quantitatively characterize slit-viewing perception and build a computational model of the principles that guide spatiotemporal integration. In task 2, we will test for similarities and differences in the neural substrates underlying spatial and spatiotemporal integration, using conventional GLM and advanced multivariate analyses. In task 3, we will identify cortical regions that mediate the perceptual deformations observed under slit-viewing. In task 4, we will use precise retinotopic mapping to investigate the memory storage processes engaged in spatiotemporal integration. Finally, in task 5, we will employ simultaneous EEG-fMRI recordings to examine the dynamics of information processing during spatiotemporal integration and the interactions within the cortical circuit involved.

The proposed work programme will devise a new paradigm and take advantage of recent advances in multimodal recording (EEG-fMRI) and advanced computational analysis methods to explore, for the first time, the neural mechanisms of spatiotemporal integration. This interdisciplinary work will offer new insights into the neural machinery that underlies our ability to recognize dynamic objects in cluttered scenes.