
SEGMENT

Funded
3D scene understanding in two glances
The human mind understands visual scenes. We can usually tell what objects are present in a scene, we can imagine what the hidden parts of objects look like, and we can imagine what it would look like if we or an object moved. The first step of visual scene understanding is segmentation, in which our brain tries to infer which parts of the scene belong to which objects. Adults can do this in photographs – but photographs are not how we learned to see as infants. We learned to see by moving around in a 3D world.

The way that scenes project into our eyes, how light is affected by the optics of our eyes, how our photoreceptors sample the light, and how we move our eyes all provide rich information about our environment. However, we do not know how adults combine all this information to perceive segmented scenes, and we do not know how infants learn this combination. Two reasons for this are that standard visual display devices cannot precisely mimic these factors, and that it is unethical to manipulate these factors in human infants.

The goals of this project are to understand how adults use the rich information present in active 3D vision to perform segmentation, and to understand how this is learned. We will develop a new display device and experimental methods to study how adults segment scenes when realistic visual information is available, and develop ground-breaking new technologies using advanced computer graphics and machine learning to simulate the inputs to the visual system from early development to adulthood. We will then conduct in silico experiments in artificial neural networks to understand segmentation learning, by systematically restricting or manipulating different factors.
We will compare the learned behaviours of different artificial networks to adults performing segmentation during active exploration of 3D scenes, and use similarities and differences to better understand a fundamental puzzle of perception: how the mind makes sense of scenes.
Project duration: 67 months
Start date: 2023-03-23
End date: 2028-10-31

Funding line: granted

The HORIZON EUROPE body announced the award of the project on 2023-03-23.
Target funding line: the project was financed through the following grant:
Budget: the total project budget amounts to €2M.
Project leader
TECHNISCHE UNIVERSITAT DARMSTADT. No description or corporate purpose has been specified for this company.
Technology profile: TRL 4-5