Participation deadline
No participation deadline.
Project description
The first autopilots in airplanes can be traced back to the beginning of the twentieth century. These devices greatly reduced the pilot's workload by taking over parts of the navigation. The success of autopilots in reducing navigational complexity and improving safety explains the recent interest in introducing navigational assistance into other means of transportation as well. However, implementing robotic navigation correction on a large scale also represents a potential safety risk for its users. For example, some plane crashes have been attributed to pilots' incorrect estimation of the state of the plane's autopilot, an effect known as mode confusion.

RADHAR therefore proposes a novel framework for designing human-aware adaptive autonomy that avoids mode confusion by embedding a thorough understanding of driver behaviour and estimated intention into the decision making. Through lifelong, unsupervised learning, the robot will fuse the inherently uncertain information from environment and driver perception sensors, autonomously estimate the user model and intention, and calculate a human-friendly trajectory. Since human characteristics vary over time, a continuous interaction between two learning systems will emerge, hence RADHAR: Robotic ADaptation to Humans Adapting to Robots.

To apply this framework to realistic real-world scenarios, sensor models will be developed to build 3D models of the environment, including estimates of dynamic obstacles' motion and terrain traversability. To verify driver-model assumptions such as focus of attention, the driver's posture and facial expression will be estimated with a camera and a haptic interface. The framework will be demonstrated on a wheelchair platform that navigates in an everyday environment with everyday objects. Tests at various levels of autonomy can be performed easily and safely on wheelchairs. Evaluation will be carried out by a diverse and challenging population of wheelchair users who currently drive unsafely.
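The description does not specify the fusion or planning algorithms, but the core idea of estimating driver intention from uncertain input and deriving a human-friendly trajectory can be sketched. The following minimal Python sketch is purely illustrative: the candidate goals, the von Mises-style likelihood, and the confidence-weighted blending rule are assumptions for demonstration, not details of the RADHAR proposal.

```python
import numpy as np

# Hypothetical sketch: Bayesian fusion of a noisy joystick signal with a
# prior belief over candidate goals, followed by a simple shared-control
# blend. All numbers and rules below are illustrative assumptions.

# Candidate goals the wheelchair could be heading for (e.g. door, desk, hallway).
GOALS = np.array([[4.0, 0.0],   # door
                  [2.0, 3.0],   # desk
                  [0.0, 5.0]])  # hallway

def intention_posterior(position, joystick, prior, kappa=4.0):
    """Update the belief over goals from one joystick observation.

    Likelihood: a von Mises-style weighting of the alignment between the
    joystick direction and the direction to each goal (assumption).
    """
    directions = GOALS - position                      # vectors to each goal
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    u = joystick / (np.linalg.norm(joystick) + 1e-9)   # unit joystick direction
    likelihood = np.exp(kappa * directions @ u)        # higher when aligned
    posterior = prior * likelihood
    return posterior / posterior.sum()

def shared_control(position, joystick, posterior):
    """Blend user input with an autonomous velocity toward the likely goal.

    The blend weight grows with the confidence of the intention estimate,
    so the robot assists more when it is sure and defers when it is not.
    """
    goal = GOALS[np.argmax(posterior)]
    autonomous = goal - position
    autonomous /= np.linalg.norm(autonomous) + 1e-9
    alpha = posterior.max()                            # confidence weight
    return alpha * autonomous + (1 - alpha) * joystick

# One simulated step: the user pushes the joystick roughly toward the door.
belief = np.ones(len(GOALS)) / len(GOALS)              # uniform initial belief
pos = np.array([0.0, 0.0])
stick = np.array([1.0, 0.1])
belief = intention_posterior(pos, stick, belief)
print("belief over goals:", np.round(belief, 3))
print("blended velocity:", np.round(shared_control(pos, stick, belief), 3))
```

Repeating this update at every control cycle is one way a system could arrive at the continuous mutual adaptation the project describes: the belief tracks the driver over time, and the blending rule keeps the driver in the loop so the level of autonomy degrades gracefully rather than switching modes abruptly.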