Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI
Participation deadline
No participation deadline.
Project description
From self-driving cars to agents recommending medical treatment, Artificial Intelligence (AI) agents are becoming increasingly prevalent. These agents have the potential to benefit society in areas such as transportation, healthcare and education. Importantly, they do not operate in a vacuum: people interact with agents in a wide range of settings. To interact with agents effectively, people need to be able to anticipate and understand their behavior. For example, the driver of an autonomous vehicle needs to anticipate situations in which the car fails and hands over control, while a clinician needs to understand the treatment regimen recommended by an agent to determine whether it aligns with the patient's preferences.
Explainable AI methods aim to support users by making the behavior of AI systems more transparent. However, the state of the art in explainable AI is lacking in several key respects. First, the majority of existing methods focus on providing local explanations of one-shot decisions made by machine learning models; they are not adequate for conveying the behavior of agents that act over an extended duration in large state spaces. Second, most existing methods do not consider the context in which explanations are deployed, including the specific needs and characteristics of users. Finally, most methods are not interactive, limiting users' ability to gain a thorough understanding of the agents.
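To make the first limitation concrete: a local explanation typically attributes a single model output to its input features, for instance by perturbing them, and says nothing about how an agent will act across the many states of a sequential task. The sketch below is a minimal, hypothetical perturbation-based attribution; the function and parameter names are illustrative assumptions, not any specific method from the proposal.

import numpy as np

def local_attribution(model, x, n_samples=100, noise=0.1, seed=0):
    # Crude sensitivity estimate for ONE decision: how much each input
    # feature moves the model's output under small random perturbations.
    # Note what this does NOT cover: behavior over sequences of states.
    rng = np.random.default_rng(seed)
    base = model(x)
    scores = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += rng.normal(0.0, noise)
            scores[i] += abs(model(xp) - base)
    return scores / n_samples

# Toy one-shot "model": feature 0 dominates the attribution.
model = lambda x: 3.0 * x[0] + 0.5 * x[1]
print(local_attribution(model, np.array([1.0, 1.0])))

Even a perfect attribution of this kind explains only the single prediction it was computed for, which is exactly why it falls short for agents acting over time.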
The overarching objective of this proposal is to develop adaptive and interactive methods for conveying the behavior of agents and multi-agent teams operating in sequential decision-making settings. To tackle this challenge, the proposed research will draw on insights and methodologies from AI and human-computer interaction. It will develop algorithms that determine what information about agents’ behavior to share with users, tailored to users’ needs and characteristics, and interfaces that allow users to proactively explore agents’ capabilities.
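As one illustration of "determining what information about agents' behavior to share", prior work on agent policy summarization ranks states the agent has visited by how much the choice of action matters there, and shows the top few to the user. The sketch below is a minimal, assumed instance of that idea for a tabular agent; it is not the proposal's algorithm, and all names are ours.

import numpy as np

def state_importance(q_values):
    # Importance of a state: gap between the best and worst action
    # values. A large gap means choosing well in this state matters.
    return np.max(q_values) - np.min(q_values)

def summarize_policy(q_table, trajectories, budget=5):
    # Pick the `budget` most important visited states to show a user.
    # q_table:      dict mapping state -> array of Q-values per action
    # trajectories: iterable of state sequences from running the agent
    # Returns (importance, state) pairs, most important first.
    seen = {s for traj in trajectories for s in traj if s in q_table}
    ranked = sorted(seen, key=lambda s: state_importance(q_table[s]),
                    reverse=True)
    return [(state_importance(q_table[s]), s) for s in ranked[:budget]]

# Toy usage: one state where the action choice is critical, one where
# it barely matters; the summary surfaces the critical one.
q_table = {"junction": np.array([9.0, -5.0]),
           "hallway": np.array([1.0, 0.9])}
trajs = [["hallway", "junction", "hallway"]]
print(summarize_policy(q_table, trajs, budget=1))  # [(14.0, 'junction')]

Tailoring to users, as the proposal envisions, could then mean adapting the budget or the importance criterion to individual needs and characteristics, while an interactive interface could let users query states beyond a fixed summary.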