Funding awarded
HORIZON EUROPE notified the award of the project on 2024-10-17.
ARTIFACT project information
Project duration: 70 months
Start date: 2024-10-17
End date: 2030-08-31
Project leader: unknown
Project budget: 1M€
Participation deadline: none
Project description
Today’s robots are confined to tightly controlled environments: even the complex choreographies that the Atlas humanoid flawlessly executes rely heavily on handcrafted control strategies and detailed workspace models, with little room for sensing. To put it bluntly, robots are nowhere near the level of agility and dexterity, let alone the autonomy, robustness, and safety, required for deployment in the wild alongside people.
The central tenet of ARTIFACT is that the key to a genuine revolution lies in the algorithmic foundations of artificial motion intelligence: an AI challenged from the start to interact physically with dynamic environments and, ultimately, people. To do so, we will break away from the dichotomy between optimal control, where the role of perception is traditionally limited to an early state-estimation stage, and reinforcement learning, where control policies are typically learned model-free with no guarantee of coping with the curse of dimensionality.
In ARTIFACT, we will devise a unified, structured, modular, and learnable control architecture that provides robots with advanced decision-making capabilities to solve complex tasks and handle new interactions as they experience the world. It will leverage differentiable programming at all scales to enable robots to (i) capture models of their interactions directly from a sound combination of sensor data and first principles from physics, (ii) autonomously discover new complex gestures and movements by leveraging their past experiences, and (iii) learn embodied representations to finely control their interactions and reason about the physical world. It will be implemented in open-source software and demonstrated in challenging real-world scenarios requiring fine dexterity and high agility. Altogether, these contributions will be key enablers of a fundamental leap in robot autonomy, opening the age of ubiquitous robots at the service of mankind.
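As a concrete illustration of point (i), here is a minimal sketch of what differentiable programming for model capture can look like: a first-principles model of a toy damped oscillator, written in JAX, is rolled out with jax.lax.scan, and its physical parameters (stiffness and damping) are recovered from observed trajectories by gradient descent through the simulator itself. The toy system, parameter names, and optimizer settings are illustrative assumptions, not ARTIFACT's actual software.

```python
# Illustrative sketch only (not ARTIFACT code): fit the parameters of a
# first-principles model to "sensor" trajectories by differentiating
# through the simulation loop.
import jax
import jax.numpy as jnp

DT = 0.01  # integration time step [s]


def step(state, params):
    """One semi-implicit Euler step of a unit-mass damped oscillator."""
    x, v = state
    stiffness, damping = params
    a = -stiffness * x - damping * v      # F = -k x - c v, with m = 1 kg
    v = v + DT * a
    x = x + DT * v
    return jnp.array([x, v])


def rollout(params, x0, horizon):
    """Simulate the model forward; differentiable with respect to params."""
    def body(state, _):
        nxt = step(state, params)
        return nxt, nxt
    _, trajectory = jax.lax.scan(body, x0, None, length=horizon)
    return trajectory


def loss(params, x0, observed):
    """Mean squared error between simulated and observed trajectories."""
    predicted = rollout(params, x0, observed.shape[0])
    return jnp.mean((predicted - observed) ** 2)


# Synthetic "sensor" data generated from ground-truth parameters.
true_params = jnp.array([4.0, 0.3])   # stiffness [N/m], damping [N*s/m]
x0 = jnp.array([1.0, 0.0])            # initial position [m] and velocity [m/s]
observed = rollout(true_params, x0, 400)

# System identification: follow the gradient through the differentiable simulator.
params = jnp.array([3.0, 0.1])        # initial guess
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):
    params = params - 0.1 * grad_fn(params, x0, observed)

print("estimated stiffness and damping:", params)
```

The same gradient-through-simulation pattern is, presumably, what "differentiable programming at all scales" extends from this toy oscillator to full robot dynamics and perception pipelines.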