Spatial 3D Semantic Understanding for Perception in the Wild
Related projects

SEGMENT              | 3D scene understanding in two glances                            | 2M€   | Closed
CHAMELEON            | Intuitive editing of visual appearance from real world datas...  | 2M€   | Closed
TIN2010-21771-C02-01 | Introduction of color information and attention mechanisms...    | 149K€ | Closed
RYC-2017-22563       | From perceptual modeling to semantic understanding of images...  | 309K€ | Closed
TIN2009-14173        | Holistic understanding of color images: combination of p...      | 86K€  | Closed
TIN2008-04752        | Digital processing and analysis of video and 3D images           | 106K€ | Closed
SpatialSem project information
Project duration: 63 months
Start date: 2023-06-01
End date: 2028-09-30
Project description
Understanding the 3D spatial semantics of the world around us is core to visual perception and digitization -- real-world environments are spatially three-dimensional and must be understood in their 3D context, even from 2D image observations. This enables spatially grounded reasoning and higher-level perception of our surroundings. Such 3D perception will provide the foundation for transformative, next-generation technology across machine perception, immersive communications, mixed reality, architectural and industrial modeling, and more. It will enable a new paradigm of semantic understanding that derives primarily from a spatially consistent 3D representation, rather than from image-based reasoning that captures only projections of the world. However, 3D semantic reasoning from visual data such as RGB or RGB-D observations remains in its infancy, due to the challenges of learning from limited real-world 3D data and the complex, high-dimensional nature of the problem. In this proposal, we will develop new algorithmic approaches to effectively learn robust visual 3D perception, with new learning paradigms for features, representations, and operators that together encompass 3D semantic understanding.
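As an illustration of the kind of spatially grounded representation the project targets, the sketch below lifts a 2D depth observation (as produced by an RGB-D sensor) into a 3D point cloud using the standard pinhole camera model. This is a generic, minimal example, not the project's actual pipeline; the function name and the intrinsic parameter values are purely illustrative.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth image (in meters) into a 3D point cloud in camera space.

    Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels (zero depth = no sensor measurement).
    return points[points[:, 2] > 0]

# Toy example: a 2x2 depth map with one invalid (zero-depth) pixel,
# and made-up intrinsics for illustration only.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
pts = backproject_depth(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): three valid 3D points
```

A semantic model operating on such a point cloud (rather than on the 2D image) reasons directly in the spatially consistent 3D representation the abstract describes.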