"Visualizing our surroundings and imagination has been an integral part of human history. In today's era, we have the privilege to immerse in 3D digital environments and interact with virtual objects and characters. However, creat...
ver más
¿Tienes un proyecto y buscas un partner? Gracias a nuestro motor inteligente podemos recomendarte los mejores socios y ponerte en contacto con ellos. Te lo explicamos en este video
Interesting projects
PID2021-122392OB-I00
VIRTUALIZACION Y VISUALIZACION DE AVATARES PERSONALIZADOS PO...
214K€
Closed
Project duration: 67 months
Start date: 2024-02-23
End date: 2029-09-30
Project leader
POLYTECHNEIO KRITIS
No description or corporate purpose has been specified for this company.
TRL
4-5
Project budget
2M€
Participation deadline
No participation deadline.
Project description
"Visualizing our surroundings and imagination has been an integral part of human history. In today's era, we have the privilege to immerse in 3D digital environments and interact with virtual objects and characters. However, creating digital representations of environments (i.e., 3D models) often requires excessive amount of manual effort and time even for trained 3D artists. Over the recent years, there have been remarkable advances in deep learning methods that attempt to reconstruct 3D models from real-world data captured in images or scans. However, we are still far from automatically producing 3D models usable in interactive 3D environments and simulations i.e., the resulting reconstructed 3D models lack controllers and metadata related to their articulation structure, possible motions, and interaction with other objects or agents. Automating the synthesis of interactive 3D models is crucial for several applications, such as (a) virtual and mixed reality environments where objects and characters are not static, but instead move and interact with each other, (b) automating animation pipelines, (c) training robots for object interaction in simulated environments, (d) 3D printing of functional objects, (e) digital entertainment. In this project, we will answer the question: ""how can we automate the generation of interactive 3D models of objects and characters?"". Our project will include the following thrusts:
(1) We will design deep architectures that automatically infer motion controllers and interaction-related metadata for input 3D models, effectively making them interactive.
(2) We will develop learning methods that replace dynamic real-world objects and characters captured in scans and video with high-quality, interactive, and animated 3D models as digital representatives.
(3) We will develop generative models that synthesize interactive 3D objects and characters automatically, and further help reconstruct them from scans and video more faithfully.
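To make the notion of "controllers and interaction-related metadata" mentioned above more concrete, here is a minimal, hypothetical sketch in Python of what such metadata for an articulated object could look like. The class names, fields, and the cabinet example are assumptions chosen purely for illustration; they do not describe the project's actual data formats or methods.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema: the structure and field names below are illustrative
# only, not the representation used by this project.

@dataclass
class Joint:
    """An articulation joint connecting two parts of a 3D model."""
    name: str
    joint_type: str                    # e.g. "revolute" or "prismatic"
    axis: Tuple[float, float, float]   # motion axis in the model's local frame
    limits: Tuple[float, float]        # allowed motion range (radians or metres)

@dataclass
class InteractionMetadata:
    """Controllers and metadata that make a static 3D model interactive."""
    model_id: str
    joints: List[Joint] = field(default_factory=list)
    graspable_parts: List[str] = field(default_factory=list)  # parts an agent may grab

# Example: a cabinet whose door rotates about a vertical hinge.
cabinet = InteractionMetadata(
    model_id="cabinet_01",
    joints=[Joint("door_hinge", "revolute", (0.0, 0.0, 1.0), (0.0, 1.57))],
    graspable_parts=["door_handle"],
)
print(cabinet)
```

In this reading, thrust (1) would infer records like the one above for an existing 3D mesh, while thrusts (2) and (3) would produce meshes together with such metadata from scans, video, or a generative model.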