Fairness in Language Models: Equally right for the right reasons
FairER project information
Project duration: 26 months
Start date: 2022-06-16
End date: 2024-08-31
Project leader
KOBENHAVNS UNIVERSITET
TRL
4-5
Project budget
215K€
Project description
Most of us use natural language processing (NLP) technology, such as Google Search or the virtual assistants in phones and other devices, on a daily basis. Large-scale pre-trained language models play a crucial role here, as they often form the basis of those technologies. These models are trained on vast amounts of data (e.g. the entire English Wikipedia and the Brown corpus), which makes it impossible to fully curate the training corpus; as a result, stereotypes and biases can be baked into the model, often without researchers noticing. This can lead to problematic and unfair behaviour towards certain demographics, often those who already suffer from implicit biases in society.
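To make the idea of biases "baked into" a model concrete, the sketch below computes a WEAT-style association score over embedding vectors: how much closer a target word sits to one attribute set than to another. The two-dimensional vectors and the attribute sets are invented toy data, not drawn from any real model; the function names are likewise illustrative.

```python
# WEAT-style association sketch over toy embedding vectors.
# All vectors below are invented for illustration only.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

# Toy 2-d vectors: one "career"-like and one "family"-like attribute set,
# plus a target word that happens to sit near the first set.
career = [[1.0, 0.1], [0.9, 0.2]]
family = [[0.1, 1.0], [0.2, 0.9]]
target = [0.95, 0.15]

score = association(target, career, family)
print(f"association score: {score:.3f}")  # positive => closer to 'career' set
```

A real bias audit would use embeddings extracted from the pretrained model and established word lists, but the arithmetic of the association test is exactly this.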
With FairER, I aim to gain a deeper understanding of the inner workings of these language models. In particular, I want to investigate how well their solution strategies align with those of humans, and whether this depends on demographic attributes such as gender, race, and age, but also on reading ability and level of education. I will also probe those language models for fairness and inclusiveness, i.e., find out whether the performance of an NLP application depends on demographic attributes of the user. Furthermore, I will conduct this project in a multilingual setting and apply interpretability methods to better understand the rationale behind a model's decisions.
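Probing whether performance depends on a demographic attribute can be as simple as stratifying an evaluation set by that attribute and comparing per-group scores. The sketch below computes per-group accuracy and the gap between the best- and worst-served groups; the predictions, gold labels, and group labels are invented toy data standing in for real annotated evaluation output.

```python
# Hedged sketch: per-group accuracy and fairness gap for an NLP system.
# Toy data only; real audits use annotated evaluation sets.
from collections import defaultdict

def accuracy_by_group(preds, golds, groups):
    """Return accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, gold, group in zip(preds, golds, groups):
        total[group] += 1
        correct[group] += int(pred == gold)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical sentiment predictions tagged with a user attribute A/B.
preds  = ["pos", "neg", "pos", "neg", "pos", "pos"]
golds  = ["pos", "neg", "neg", "neg", "pos", "neg"]
groups = ["A",   "A",   "B",   "B",   "A",   "B"]

acc = accuracy_by_group(preds, golds, groups)
gap = max(acc.values()) - min(acc.values())
print(acc, f"gap={gap:.2f}")
```

A large gap signals that the application serves one group worse than another, which is precisely the kind of disparity the project sets out to measure.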
The main impact of FairER will be a better understanding of how language models treat different demographics. These insights will help improve the fairness and inclusiveness of NLP applications. Furthermore, the datasets I will record and publish, along with the code, will encourage other researchers to replicate my findings and continue this line of research. Ultimately, this project will have both a scientific and a societal impact on the NLP community and on users of NLP applications.