Interesting projects

Code | Title | Budget | Status
PID2021-124361OB-C32 | FAIRTRANSNLP-DIAGNOSTICO: MIDIENDO Y CUANTIFICANDO EL SESGO... | 110K€ | Closed
FairER | Fairness in Language Models: Equally right for the right rea... | 215K€ | Closed
EIN2020-112480 | DEL MODELO MATEMATICO A LA DECISION HUMANA: POSICIONANDO A E... | 9K€ | Closed
PID2019-105093GB-I00 | EXPLICACIONES AUTOMATICAS TRANS-DOMINIO EN VISION POR COMPUT... | 163K€ | Closed
PID2021-127641OB-I00 | BIOMETRIA Y COMPORTAMIENTO PARA UNA IA IMPARCIAL Y CONFIABLE... | 274K€ | Closed
TUCLA | Theoretical Understanding of Classic Learning Algorithms | 2M€ | Closed
FairML project information
Project duration: 32 months
Start date: 2023-11-30
End date: 2026-07-31
Project leader: KOBENHAVNS UNIVERSITET
TRL: 4-5
Project budget: 215K€
Project description
Designing fair machine learning algorithms is challenging because training data are often imbalanced and reflect the (sometimes subconscious) biases of human annotators, so these biases can propagate into future decision-making. Moreover, enforcing fairness usually causes an inevitable loss of accuracy, because it restricts the space of admissible classifiers. In this project, I will address this challenge by developing oracle bounds for fairness constraints and a Pareto-optimal trade-off between fairness and accuracy, using majority-vote ensemble classifiers so that not only errors but also biases cancel out. I will also develop methods for tracing illegal bias and capturing long-term fairness, in order to comply with anti-subordination law, using learning-theory tools including causality and online learning to reason about moral responsibility. The central objective of this proposal is to gain a theoretical understanding of fairness and to design machine learning algorithms that improve fairness and accuracy simultaneously. The study is essential both for a better scientific understanding of fairness in machine learning models and for the development of fairer algorithms in numerous application domains such as recruitment, criminal justice, and lending. Moreover, the project draws on interdisciplinary knowledge from economics and law to keep fairness concepts in machine learning aligned with their legal counterparts, broadening the impact of machine learning applications and giving back to the wider community.
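The two core ingredients of the abstract, a majority-vote ensemble and a group-fairness measure, can be sketched in a toy example. This is an illustrative sketch, not the project's actual method: the synthetic predictions, the `majority_vote` and `demographic_parity_gap` helpers, and the binary sensitive attribute are all assumptions made for the example.

```python
import numpy as np

def majority_vote(votes):
    """Aggregate binary predictions by majority vote (ties count as 1).

    `votes` has one row per classifier and one column per example.
    """
    votes = np.asarray(votes)
    return (votes.sum(axis=0) * 2 >= votes.shape[0]).astype(int)

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    pred, group = np.asarray(pred), np.asarray(group)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Hypothetical predictions of 5 classifiers on 8 examples.
rng = np.random.default_rng(0)
votes = rng.integers(0, 2, size=(5, 8))

pred = majority_vote(votes)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical sensitive attribute
gap = demographic_parity_gap(pred, group)
```

In this picture, the project's goal can be read as bounding how the ensemble's error and a gap like the one above behave jointly, rather than optimizing accuracy alone.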