Fairness and Explanations in Group Recommender Systems. Towards Trustworthiness and Transparency
Interesting projects
TIN2016-80630-P
RECOMENDACION EN MEDIOS SOCIALES: CONTEXTO, DIVERSIDAD Y SES...
82K€
Closed
PID2019-108965GB-I00
MAS ALLA DE LA RECOMENDACION ESTATICA: EQUIDAD, INTERACCION...
99K€
Closed
PID2019-106493RB-I00
AUMENTO DE LA CALIDAD Y DE LA EQUIDAD, A GRUPOS MINORITARIOS...
28K€
Closed
AA4MD
Algorithmic Auditing for Music Discoverability
189K€
Closed
TIN2012-32682
AUMENTO DE PRESTACIONES EN LOS SISTEMAS DE RECOMENDACION BAS...
14K€
Closed
TIN2014-55006-R
PERSONALIZACION SOCIAL EN SISTEMAS DE RECOMENDACION
81K€
Closed
FIDELITY project information
Project duration: 33 months
Start date: 2023-07-17
End date: 2026-04-30
Project leader
UNIVERSIDAD DE JAÉN
Total researchers: 981
Project budget
181K€
Participation deadline
No participation deadline.
Project description
Today, most social media networks use automated tools to recommend content or products and to rank, curate and moderate posts. Recommender systems (RSs), and in particular group recommender systems (GRSs), a specific kind of RS used to recommend items to a group of users, are likely to become even more ubiquitous, with the market forecast to reach USD 16.13 billion by 2026.
These automated content governance tools are attracting growing scrutiny, since neither the algorithms nor the decision-making processes behind the platforms are sufficiently transparent, which negatively impacts domains such as fair job opportunities, fair e-commerce and news exposure.
Two of the key requirements for building and keeping users' trust in AI systems while guaranteeing transparency are fairness and explainability. Yet, aside from some previous attempts to enhance both aspects in traditional (individual) RSs, they have hardly been explored in GRSs.
FIDELITY addresses this challenge by developing novel algorithms and computational tools for GRSs that boost explanation, fairness and the synergy between them, through a disruptive multidisciplinary research approach that: 1) extensively brings SHAP and LIME, as state-of-the-art post-hoc explanation approaches in AI, into RS and GRS contexts; 2) bridges explanation and fairness in RSs and GRSs, introducing an explanation paradigm shift from "why are the recommendations generated?" to "how fair are the generated recommendations?"; and 3) transversally evaluates the new methods through real-world GRSs and user studies.

The ultimate goal is to guarantee greater user trust and the independence of RS output from the sociodemographic characteristics of users. The training programme, designed to fill the existing gaps between computing science, social research and business development, will provide the candidate with a multidisciplinary background that will boost his innovation potential and career prospects.
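As a concrete, toy-level illustration of the question "how fair are the generated recommendations?", the following Python sketch aggregates individual predicted ratings into a group recommendation and then probes the per-member cost of that choice. The ratings, the average aggregation strategy and the utility-loss measure are illustrative assumptions for exposition only, not FIDELITY's actual algorithms.

```python
import numpy as np

# Hypothetical predicted ratings: rows = group members, columns = candidate
# items. A real GRS would obtain these from an underlying individual RS.
ratings = np.array([
    [4.5, 2.0, 3.5],   # member 0
    [4.0, 2.5, 3.0],   # member 1
    [1.0, 5.0, 3.5],   # member 2 (minority taste)
])

# Average aggregation strategy: recommend the item with the highest mean
# predicted rating across the group.
group_scores = ratings.mean(axis=0)
recommended = int(group_scores.argmax())

# Simple per-member fairness probe: how much worse is the recommended item
# than each member's own favourite? Large gaps flag members whose
# preferences the group recommendation sacrifices.
loss_per_member = ratings.max(axis=1) - ratings[:, recommended]

print(f"recommended item: {recommended}")
for m, loss in enumerate(loss_per_member):
    print(f"member {m}: utility loss = {loss:.2f}")
```

A post-hoc explainer such as SHAP or LIME could then be applied on top of such a pipeline to attribute each member's utility loss to the underlying model features, turning the fairness question into one that is answerable at the member level.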