VIrtual GuardIan AngeLs for the post-truth Information Age
Project duration: 59 months
Start date: 2024-10-01
End date: 2029-09-30
Project leader
UNIVERSITEIT GENT
TRL
4-5
Project budget
2M€
Participation deadline
No participation deadline.
Project description
This project is motivated by the hypothesis that we are approaching a post-truth society, in which it becomes practically impossible for anyone to distinguish fact from fiction in all but the most trivial questions. A main cause of this evolution is people's innate dependence on cognitive heuristics to process information, which may lead to biases and poor decision making. These biases risk being amplified in a networked society, where people depend on others to form their opinions and judgments and to determine their actions. Contemporary generative Artificial Intelligence technologies may further exacerbate this risk, given their ability to fabricate and efficiently spread false but highly convincing information at an unprecedented scale. Combined with expected advances in virtual, augmented, mixed, and extended reality technologies, this may create an epistemic crisis with consequences that are hard to fully fathom: a post-truth era on steroids.

In the VIGILIA project, we will investigate a possible mitigation strategy. We propose to develop automated techniques for detecting triggers of cognitive biases and heuristics in humans when they face information, as well as their effects at an interpersonal level (on trust, reputation, and information propagation) and at a societal level (in terms of possible irrational behaviour and polarization). We aim to achieve this by leveraging techniques from AI itself, in particular Large Language Models, and by building on advanced user modelling approaches from the previous ERC Consolidator Grant FORSIED. Our results will be integrated into tools that we refer to as VIrtual GuardIan AngeLs (VIGILs), aimed at news and social media consumers, journalists, scientific researchers, and political decision makers. Ethical questions that arise will be identified and dealt with as first-class questions within the research project.
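To make the idea of using language-model techniques to detect bias triggers more concrete, here is a minimal illustrative sketch. It uses a generic zero-shot text classifier from the Hugging Face transformers library to flag a handful of well-known rhetorical bias triggers in a short text. The label set, the model choice (facebook/bart-large-mnli), the threshold, and the function names are assumptions made purely for illustration; they do not describe the VIGILIA project's actual methods or tools.

```python
# Illustrative sketch only: flag possible cognitive-bias triggers in a text
# snippet with a zero-shot classifier. Labels and model are assumptions for
# demonstration, not the VIGILIA project's actual approach.
from transformers import pipeline

# Hypothetical label set covering a few well-known bias triggers.
BIAS_TRIGGER_LABELS = [
    "appeal to fear",
    "appeal to authority",
    "bandwagon framing",
    "neutral factual reporting",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def flag_bias_triggers(text: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return (label, score) pairs whose score exceeds the threshold."""
    result = classifier(text, candidate_labels=BIAS_TRIGGER_LABELS, multi_label=True)
    return [(label, score)
            for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

if __name__ == "__main__":
    snippet = ("Experts warn that if you don't act now, "
               "everyone you know will be left behind.")
    for label, score in flag_bias_triggers(snippet):
        print(f"{label}: {score:.2f}")
```

Setting multi_label=True lets each candidate trigger be scored independently, so a single passage can be flagged for several triggers at once rather than being forced into one category.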