Spatial 3D Semantic Understanding for Perception in the Wild
Project information: SpatialSem
Project duration: 63 months
Start date: 2023-06-01
End date: 2028-09-30
Participation deadline
No participation deadline.
Project description
Understanding the 3D spatial semantics of the world around us is core to visual perception and digitization -- real-world environments are spatially three-dimensional and must be understood in their 3D context, even from 2D image observations. This will lead to spatially-grounded reasoning and higher-level perception of the world around us. Such 3D perception will provide the foundation for transformative, next-generation technology across machine perception, immersive communications, mixed reality, architectural or industrial modeling, and more. It will enable a new paradigm in semantic understanding that derives primarily from a spatially-consistent 3D representation, rather than relying on image-based reasoning that captures only projections of the world. However, 3D semantic reasoning from visual data such as RGB or RGB-D observations remains in its infancy, due to the challenges of learning from limited amounts of real-world 3D data and the complex, high-dimensional nature of the problem. In this proposal, we will develop new algorithmic approaches to effectively learn robust visual 3D perception, introducing new learning paradigms for features, representations, and operators that encompass 3D semantic understanding.
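To make the core idea concrete -- lifting 2D observations into a 3D representation on which semantics are predicted -- the following is a minimal, illustrative sketch, not the project's actual method. It back-projects an RGB-D frame into a 3D point cloud using pinhole camera intrinsics and runs a tiny PointNet-style shared MLP that assigns a semantic label to each 3D point; all names, shapes, intrinsics, and the network itself are assumptions chosen only to illustrate the setup.

import numpy as np
import torch
import torch.nn as nn

def backproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """Lift a depth map (H, W) and colors (H, W, 3) into an (N, 6) xyz+rgb point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # keep only pixels with a measured depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole back-projection to camera space
    y = (v[valid] - cy) * z / fy
    xyz = np.stack([x, y, z], axis=-1)
    colors = rgb[valid].astype(np.float32) / 255.0
    return np.concatenate([xyz, colors], axis=-1)

class PointSemanticNet(nn.Module):
    """Shared per-point MLP (PointNet-style) mapping xyz+rgb features to class logits."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points):             # points: (N, 6)
        return self.mlp(points)            # logits: (N, num_classes)

if __name__ == "__main__":
    # Synthetic frame standing in for a real RGB-D observation.
    depth = np.random.uniform(0.5, 5.0, size=(480, 640)).astype(np.float32)
    rgb = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
    cloud = backproject_rgbd(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    logits = PointSemanticNet()(torch.from_numpy(cloud).float())
    print(cloud.shape, logits.shape)       # (N, 6) (N, 20)

In this toy setup, semantic prediction happens directly on 3D points rather than on the 2D image grid; the actual research concerns far richer spatially-consistent representations and operators than this per-point classifier.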