AI4Dignity project information
Project duration: 22 months
Start date: 2020-08-10
End date: 2022-06-30
Participation deadline
No participation deadline.
Project description
Online hate speech and disinformation have emerged as a major problem for democratic societies worldwide. Governments, companies and civil society groups have responded by increasingly turning to Artificial Intelligence (AI) as a tool that can detect, decelerate and remove online extreme speech. Such efforts, however, confront many challenges. One key challenge is the quality, scope and inclusivity of training datasets. A second is the lack of procedural guidelines and frameworks that can bring cultural contextualization to these systems; this lack of contextualization has resulted in false positives, over-application and systemic bias. The ongoing ERC project has identified the need for a global comparative framework for AI-assisted solutions to address cultural variation, since no catch-all algorithm can work across different contexts. Building on this, the proposed project will address the major challenges facing AI-assisted extreme speech moderation by developing an innovative solution of collaborative, bottom-up coding. The model, AI4Dignity, moves beyond keyword-based detection systems by pioneering a community-based classification approach. It identifies fact-checkers as critical human interlocutors who can bring cultural contextualization to AI-assisted speech moderation in a meaningful and feasible manner. AI4Dignity will be a significant step towards setting procedural benchmarks that operationalize the human-in-the-loop principle and towards building inclusive training datasets for AI systems tackling urgent issues of digital hate and disinformation.
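To make the human-in-the-loop idea concrete, the sketch below illustrates one generic way such a pipeline can be wired: an automated scorer acts on clear-cut cases, while borderline posts are routed to a fact-checker whose culturally contextualized labels are retained as new training data. This is a minimal, hypothetical illustration of the general pattern, not AI4Dignity's actual model or code; every name, threshold and keyword list here is an assumption made for the example.

```python
# Hypothetical sketch of a human-in-the-loop moderation loop.
# This is NOT the AI4Dignity implementation; names and thresholds are invented.
from dataclasses import dataclass, field


@dataclass
class ModerationQueue:
    # Labels collected from fact-checkers, to be fed back into training.
    labeled_examples: list = field(default_factory=list)

    def score(self, post: str) -> float:
        """Stand-in model score in [0, 1]. A real system would use a trained
        classifier; crude keyword matching is used here only so the example
        runs self-contained."""
        keywords = {"hate", "slur"}  # assumed placeholder vocabulary
        hits = sum(word in post.lower() for word in keywords)
        return min(1.0, hits / 2)

    def moderate(self, post: str, reviewer) -> str:
        s = self.score(post)
        if s >= 0.9:
            return "remove"  # high-confidence case: act automatically
        if s <= 0.1:
            return "keep"    # high-confidence case: leave the post up
        # Borderline case: defer to a human fact-checker and keep the
        # culturally contextualized label as new training data.
        label = reviewer(post)
        self.labeled_examples.append((post, label))
        return label


# Example use with a stand-in reviewer that always keeps the post:
queue = ModerationQueue()
print(queue.moderate("some ambiguous post", reviewer=lambda p: "keep"))
```

The design point the sketch tries to capture is the one the abstract emphasizes: the human reviewer is not an afterthought but the source of the contextualized labels that make the training data more inclusive over time.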