Human-Compatible Artificial Intelligence with Guarantees
Interesting projects
- RRR-XAI: Right for the Right Reason eXplainable Artificial I... (165K€, Closed)
- IJC2019-039152-I: Deep Learning and Neural-symbolic learning and reasoning for... (93K€, Closed)
- AI4REASON: Artificial Intelligence for Large Scale Computer Assisted Re... (1M€, Closed)
- AI4REALNET: AI for REAL-world NETwork operation (4M€, Closed)
- AEQUITAS: ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL... (3M€, Closed)
- PFV-4-PTAI: Probabilistic Formal Verification for Provably Trustworthy A... (Closed)
AutoFair project information
Project duration: 35 months
Start date: 2022-10-01
End date: 2025-09-30
Participation deadline
No participation deadline.
Project description
In this proposal, we address the transparency and explainability of AI using approaches inspired by control theory. In particular, we consider a comprehensive and flexible certification of properties of AI pipelines, certain closed loops, and more complicated interconnections. At one extreme, one could impose risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could communicate, post hoc, the exact tradeoffs involved in AI pipeline choices and their effect on industrial and bias outcomes. Both extremes leave little room for optimizing the pipeline and are inflexible in explaining its fairness-related qualities.

Seeking the middle ground, we suggest a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference, and post-processing steps with certain properties. Furthermore, we present an extensive programme in the explainability of fairness-related qualities, seeking to inform both the developer and the user thoroughly regarding the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly. Three use cases (in Human Resources automation, Financial Technology, and Advertising) will be used to assess the effectiveness of our approaches.
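To make the two certification extremes concrete, the following is a minimal, hypothetical sketch, not the project's actual toolchain: a toy training step that enforces a hard constraint on a bias measure (demographic parity difference), so the resulting model carries an a priori fairness certificate, followed by a post hoc sweep that reports the accuracy-versus-bias tradeoff. The synthetic dataset, the per-group threshold model, and the 0.05 bias limit are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature x, binary protected attribute a, binary label y.
n = 2000
a = rng.integers(0, 2, size=n)                    # protected group membership
x = rng.normal(loc=a * 0.5, scale=1.0, size=n)    # feature correlated with a
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

def demographic_parity_difference(pred, group):
    """Bias measure: |P(pred=1 | a=1) - P(pred=1 | a=0)|."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

def train_thresholds(x, y, a, dp_limit):
    """Pick per-group decision thresholds maximizing accuracy subject to a
    hard constraint on the demographic parity difference."""
    best = None
    grid = np.quantile(x, np.linspace(0.05, 0.95, 19))
    for t0 in grid:
        for t1 in grid:
            pred = np.where(a == 1, x > t1, x > t0).astype(int)
            dp = demographic_parity_difference(pred, a)
            if dp > dp_limit:
                continue                          # constraint violated: reject
            acc = (pred == y).mean()
            if best is None or acc > best[0]:
                best = (acc, t0, t1, dp)
    return best

# Hard-constraint extreme: certify DP difference <= 0.05 before deployment.
acc, t0, t1, dp = train_thresholds(x, y, a, dp_limit=0.05)
print(f"certified model: DP difference {dp:.3f} <= 0.05, accuracy {acc:.3f}")

# Tradeoff-communication extreme: report, post hoc, what each bias limit
# costs in accuracy, so developers and users can weigh the pipeline choices.
for limit in (0.01, 0.05, 0.10, 0.25):
    acc, *_, dp = train_thresholds(x, y, a, dp_limit=limit)
    print(f"DP limit {limit:.2f}: achieved DP {dp:.3f}, accuracy {acc:.3f}")
```

In the terms of the proposal, the middle ground would compose such per-stage properties (for example, a pre-processing step with a bounded representation skew plus a training step with the constraint above) into a certificate for the whole pipeline; the sketch covers only the training step.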