Multi-Attribute, Multimodal Bias Mitigation in AI Systems
MAMMOth project information
Project duration: 35 months
Start date: 2022-11-01
End date: 2025-10-31
Participation deadline
No participation deadline.
Project description
Artificial Intelligence (AI) is increasingly employed by businesses, governments, and other organizations to make decisions with far-reaching impacts on individuals and society. This offers significant opportunities for automation across sectors and in daily life, but it also brings risks of discrimination against minority and marginalized population groups on the basis of so-called protected attributes, such as gender, race, and age. Despite the large body of research to date, existing methods work only in limited settings, under very constrained assumptions, and do not reflect the complexity and requirements of real-world applications.

To this end, the MAMMOth project focuses on multi-discrimination mitigation for tabular, network, and multimodal data. Drawing on its computer science and AI experts, MAMMOth addresses the associated scientific challenges by developing an innovative fairness-aware, AI-data-driven foundation that provides the tools and techniques needed to discover and mitigate (multi-)discrimination and ensures the accountability of AI systems with respect to multiple protected attributes, covering traditional tabular data as well as more complex network and visual data.

The project will actively engage numerous communities of vulnerable and/or underrepresented groups in AI research right from the start, adopting a co-creation approach to ensure that actual user needs and pain points sit at the centre of the research agenda and guide the project's activities. A social-science-driven approach, supported by social science and ethics experts, will steer project research, and a science communication strategy will increase the outreach of its outcomes.

The project aims to demonstrate the developed solutions through pilots in three relevant sectors: a) finance/loan applications, b) identity verification systems, and c) academic evaluation.
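To make the idea of auditing across multiple protected attributes concrete, the sketch below measures demographic-parity gaps over intersectional subgroups of tabular data. It is a minimal illustration, not MAMMOth's actual toolkit: the attribute names (`gender`, `age_group`), the toy loan data, and the gap metric are all assumptions chosen for the example.

```python
from typing import Dict, List, Tuple

def selection_rates(records: List[Dict], protected: List[str]) -> Dict[Tuple, float]:
    """Positive-outcome rate for every intersectional subgroup
    defined by the given protected attributes (e.g. gender x age_group)."""
    groups: Dict[Tuple, List[int]] = {}
    for r in records:
        key = tuple(r[a] for a in protected)
        groups.setdefault(key, []).append(r["outcome"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

def max_parity_gap(rates: Dict[Tuple, float]) -> float:
    """Demographic-parity gap: largest difference in selection rate
    between any two intersectional subgroups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy loan-application records (illustrative only; outcome 1 = approved).
data = [
    {"gender": "F", "age_group": "young", "outcome": 1},
    {"gender": "F", "age_group": "young", "outcome": 0},
    {"gender": "F", "age_group": "old",   "outcome": 1},
    {"gender": "M", "age_group": "young", "outcome": 1},
    {"gender": "M", "age_group": "old",   "outcome": 1},
    {"gender": "M", "age_group": "old",   "outcome": 1},
]

rates = selection_rates(data, ["gender", "age_group"])
gap = max_parity_gap(rates)  # 1.0 - 0.5 = 0.5 on this toy data
```

Auditing intersections (young women, older men, ...) rather than each attribute in isolation is what distinguishes multi-discrimination analysis: a model can look fair along gender and along age separately while still disadvantaging a specific combination of the two.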