DEXIM project information
Project duration: 73 months
Start date: 2019-11-27
End date: 2025-12-31
Participation deadline
No participation deadline.
Project description
Explanations are valuable because they scaffold the kind of learning that supports adaptive behaviour: they enable users to adapt to the situations that are about to arise. Explanations also help us attain a stable environment and the possibility of controlling it, putting us in a better position to shape the future. In the medical domain, explanations can help patients identify and monitor the abnormal behaviour of their ailment. In the domain of self-driving vehicles, they can warn the user of a critical state and collaborate with her to prevent a wrong decision. In the domain of satellite imagery, an explanatory monitoring system justifying the evidence of an approaching hurricane can save millions of lives. Hence, a learning machine that a user can trust and easily operate needs to be equipped with the ability to explain itself. Moreover, under the GDPR, automated decision-making systems are legally required to be transparent.
As decision makers, humans can justify their decisions in natural language and point to the evidence in the visual world that led to those decisions. In contrast, artificially intelligent systems are frequently seen as opaque and unable to explain their decisions. This is particularly concerning because, ultimately, such systems fail to build trust with human users.
The goal of this proposal is to build a fully transparent, end-to-end trainable and explainable deep learning approach for visual scene understanding. To achieve this goal, we will exploit the complementary interactions between multiple data modalities, incorporate uncertainty and temporal-continuity constraints, and add memory mechanisms. The output of this proposal will have direct consequences for many practical applications, most notably in mobile robotics and the intelligent-vehicle industry. The project will therefore strengthen user trust in a very competitive market.
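As a rough illustration only, the sketch below shows one way the ingredients mentioned above (multimodal fusion, uncertainty, temporal continuity, and an attention-based explanation signal) could fit together. All module names, dimensions, and the choice of Monte Carlo dropout and a GRU are illustrative assumptions, not the project's actual architecture.

```python
# Hypothetical sketch, not the DEXIM architecture: a multimodal classifier that
# also emits an uncertainty estimate (Monte Carlo dropout) and region-level
# attention weights usable as a rudimentary visual explanation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplainableMultimodalNet(nn.Module):
    def __init__(self, img_dim=512, txt_dim=300, hidden=256, num_classes=10):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)   # project image-region features
        self.txt_proj = nn.Linear(txt_dim, hidden)   # project text features
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)  # temporal continuity over frames
        self.attn = nn.Linear(hidden, 1)             # per-region attention -> "explanation"
        self.dropout = nn.Dropout(p=0.3)             # kept active at test time for MC dropout
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, img_regions, txt_feat):
        # img_regions: (batch, time, regions, img_dim); txt_feat: (batch, txt_dim)
        b, t, r, _ = img_regions.shape
        img = self.img_proj(img_regions)                         # (b, t, r, hidden)
        txt = self.txt_proj(txt_feat).view(b, 1, 1, -1)          # broadcast text over time/regions
        fused = torch.tanh(img + txt)                            # simple additive fusion
        attn = F.softmax(self.attn(fused).squeeze(-1), dim=-1)   # (b, t, r) attention over regions
        frame_feat = (attn.unsqueeze(-1) * fused).sum(dim=2)     # (b, t, hidden)
        seq, _ = self.temporal(frame_feat)                       # propagate information across time
        logits = self.classifier(self.dropout(seq[:, -1]))       # predict from the last time step
        return logits, attn

def predict_with_uncertainty(model, img_regions, txt_feat, n_samples=20):
    # Monte Carlo dropout: keep dropout active and average several stochastic passes.
    model.train()  # enables dropout; in practice batch-norm layers would be frozen
    probs = torch.stack([
        F.softmax(model(img_regions, txt_feat)[0], dim=-1) for _ in range(n_samples)
    ])
    return probs.mean(0), probs.std(0)  # mean prediction and a simple uncertainty proxy

if __name__ == "__main__":
    model = ExplainableMultimodalNet()
    imgs = torch.randn(2, 4, 36, 512)   # 2 clips, 4 frames, 36 regions each
    txt = torch.randn(2, 300)
    mean_p, std_p = predict_with_uncertainty(model, imgs, txt)
    print(mean_p.shape, std_p.shape)    # torch.Size([2, 10]) torch.Size([2, 10])
```

In this sketch the attention weights over image regions stand in for a minimal visual explanation, and the spread of the Monte Carlo dropout samples serves as a simple uncertainty proxy; a full system along the lines described above would add richer memory mechanisms and natural-language justifications.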