Project description
REAL-RL proposes a path to autonomous robots that learn from experience. By learning to solve new and challenging tasks and by exploiting their specific capabilities, such robots could become ubiquitous assistants to humans in countless tasks. Current control strategies for robots are developed only for particular tasks and are not versatile: to function reliably, they need highly accurate physical models that precisely capture all essential aspects of the real world. REAL-RL follows a different path: a learning approach to robot control. The dominant direction in the field uses model-free reinforcement learning methods that need an enormous number of interactions with the world, often prohibitively many for real robots. As a workaround, simulations are used, but these require detailed knowledge of all possible situations the robot might encounter. REAL-RL circumvents these problems with a model-based approach: models of the interaction with the world are learned from experience and used to plan and adapt behavior on the fly. This approach promises to be much more data-efficient and allows valuable experience to be transferred between tasks. Fundamental challenges in model learning, safety-aware exploration and planning, and higher-order reasoning are identified and addressed here with concrete novel solution ideas, such as a causal inductive bias for deep dynamics models, risk-aware real-time general trajectory optimization, and differentiable discrete planning. Critical stepping stones, such as probabilistic models and fast trajectory planning, have only recently been developed by the community and the applicant. By aiming at a generic learning method that can control any robot, rigid or soft, with legs, arms, or other end-effectors, on manipulation and locomotion tasks, and that improves with experience, the proposal lays a solid foundation for future robotic applications.
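To make the model-based loop concrete, the following minimal Python/NumPy sketch (not part of the proposal) illustrates the core idea on a toy 1-D point-mass task: a bootstrapped ensemble of linear models stands in for the probabilistic deep dynamics models, and a random-shooting model-predictive planner penalizes ensemble disagreement as a crude proxy for risk-aware trajectory optimization. The environment, the ensemble choice, and parameters such as risk_weight are all hypothetical illustrations, not the project's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1  # simulation time step

# Toy 1-D point mass: state = [position, velocity], action = force.
def true_step(state, action):
    pos, vel = state
    vel = vel + DT * action
    pos = pos + DT * vel
    return np.array([pos, vel])

# --- 1. Learn a dynamics model from experience --------------------------
def collect_experience(n_steps=200):
    states, actions, next_states = [], [], []
    s = np.zeros(2)
    for _ in range(n_steps):
        a = rng.uniform(-1.0, 1.0)                        # random exploration
        s_next = true_step(s, a) + rng.normal(0.0, 0.01, size=2)  # noisy obs
        states.append(s); actions.append(a); next_states.append(s_next)
        s = s_next
    return np.array(states), np.array(actions), np.array(next_states)

def fit_ensemble(states, actions, next_states, n_members=5):
    X = np.column_stack([states, actions])   # inputs: (s, a)
    Y = next_states                          # targets: s'
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        W, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
        members.append(W)
    return members

def predict(members, s, a):
    x = np.append(s, a)
    preds = np.array([x @ W for W in members])
    # Ensemble spread is a rough stand-in for epistemic model uncertainty.
    return preds.mean(axis=0), preds.std(axis=0)

# --- 2. Plan on the fly with the learned model (risk-aware MPC) ----------
def plan(members, s0, goal, horizon=15, n_candidates=256, risk_weight=1.0):
    best_cost, best_first_action = np.inf, 0.0
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=horizon)  # random action sequence
        s, cost = s0, 0.0
        for a in seq:
            s, spread = predict(members, s, a)
            # Distance to goal plus a penalty on model disagreement.
            cost += np.sum((s - goal) ** 2) + risk_weight * np.sum(spread)
        if cost < best_cost:
            best_cost, best_first_action = cost, seq[0]
    return best_first_action

states, actions, next_states = collect_experience()
model = fit_ensemble(states, actions, next_states)

s, goal = np.zeros(2), np.array([1.0, 0.0])
for t in range(50):                 # MPC loop: execute one action, replan
    a = plan(model, s, goal)
    s = true_step(s, a)
print("final state:", s)
```

Penalizing ensemble spread steers the planner away from state regions where the learned model is unreliable, which is the basic intuition behind the safety-aware exploration and planning the proposal targets; the proposal's actual solution ideas (causal inductive biases, differentiable discrete planning) go well beyond this sketch.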