Responsive classifiers against hate speech in low-resource settings
"Hate speech is a worldwide phenomenon that is increasingly pervading online spaces, creating an unsafe environment for users. While tech companies address this problem by server-side filtering using machine learning models traine...
ver más
¿Tienes un proyecto y buscas un partner? Gracias a nuestro motor inteligente podemos recomendarte los mejores socios y ponerte en contacto con ellos. Te lo explicamos en este video
Respond2Hate project information
Project duration: 22 months
Start date: 2023-06-16
End date: 2025-04-30
Participation deadline: none.
Project description
"Hate speech is a worldwide phenomenon that is increasingly pervading online spaces, creating an unsafe environment for users. While tech companies address this problem by server-side filtering using machine learning models trained on large datasets, these automatic methods cannot be applied to most languages due to lack of available training data.
Building on recent results of the PI's ERC project on multilingual representation models in low-resource settings, Respond2Hate aims to develop a pilot browser extension that allows users to remove hateful content from their social media feeds locally, on their own devices, without having to rely on the support of tech companies.
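As an illustration only, the sketch below shows what the local, client-side scoring step behind such an extension could look like, assuming a small fine-tuned classifier shipped with the extension; the model name and label scheme used here are hypothetical, not the project's actual artefacts.

```python
# Minimal sketch of local feed filtering, assuming a small fine-tuned
# classifier distributed with the extension. The model name
# "respond2hate/xlm-r-hate-small" and the "HATE" label are hypothetical.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="respond2hate/xlm-r-hate-small",  # hypothetical local checkpoint
)

def filter_feed(posts, threshold=0.8):
    """Keep only posts that the local model does not flag as hateful."""
    predictions = classifier(posts, truncation=True)
    return [
        post
        for post, pred in zip(posts, predictions)
        if not (pred["label"] == "HATE" and pred["score"] >= threshold)
    ]
```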
Since hate speech is highly dependent on cultural context, responsive classifiers are needed that adapt to the individual environment. Commercial efforts focus on large-scale, general-purpose models that are often burdened with representation and bias problems and therefore cope poorly with swiftly changing targets or with information shifts between regional contexts. In contrast, we seek to develop lightweight, adaptive models that require only a small dataset for initial fine-tuning and whose capabilities are continuously enhanced over time.
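A minimal sketch of what such lightweight initial fine-tuning could look like, assuming a compact multilingual encoder (xlm-roberta-base is an illustrative choice, not necessarily the project's model) and a seed set of only a few labelled examples:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

# Small labelled seed set; in practice a few hundred examples.
texts = ["example neutral post", "example hateful post"]
labels = torch.tensor([0, 1])  # 0 = neutral, 1 = hateful

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs are enough for a small seed set
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```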
This is achieved by applying state-of-the-art Natural Language Processing (NLP) and deep learning techniques for pre-trained language models, such as low-resource transfer of hate speech representations from high-resource languages and few-shot learning based on limited user feedback. We have already applied these methods successfully in low-resource multilingual settings and will now validate their use for hate speech filtering.
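A minimal sketch of the few-shot adaptation step driven by user feedback, assuming a checkpoint already fine-tuned on a high-resource language (the paths below are placeholders) and a handful of posts the user has flagged or approved in their own language:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "path/to/high-resource-hate-checkpoint"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Few-shot feedback collected from the user in the target language.
feedback_texts = ["post the user flagged as hateful", "post the user marked as fine"]
feedback_labels = torch.tensor([1, 0])  # 1 = hateful, 0 = acceptable

enc = tokenizer(feedback_texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR keeps the model close to the base

model.train()
for _ in range(5):  # only a few gradient steps on the feedback batch
    out = model(**enc, labels=feedback_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("path/to/locally-adapted-checkpoint")  # placeholder path
```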
By making hate speech detection and reduction available in "low-resource" countries that have little representation in current training datasets and are poorly served by governments, industry, and NGOs, Respond2Hate will empower users to control their own exposure to hate speech, fostering a healthier and safer online environment.