Enhancing Protections through the Collective Auditing of Algorithmic Personalization
personAlg project information
Project duration: 65 months
Start date: 2024-03-08
End date: 2029-08-31
Project description
The structure of the current data ecosystem carries grave threats to individuals' privacy and autonomy, facilitates discrimination, promotes social fragmentation, and threatens our ability to govern ourselves. Many of these concerns stem specifically from algorithmic personalization: the practice of providing individuals with personalized opportunities, information, or experiences on the basis of their personal data and on patterns learned from others' data. Despite the urgency of the algorithmic personalization problem, the mathematical toolkit for studying and auditing problematic algorithmic personalization remains extremely limited, particularly if we wish to do so in a manner that provides formal privacy guarantees.
The goal of this proposal is to tackle this important problem head-on by establishing the mathematical foundations needed to study algorithmic personalization and to collectively audit personalization systems while guaranteeing privacy to participants. Such tools could transform our collective ability to make the best possible use of our data while ensuring autonomy, privacy, and overall positive social impact.
My vision focuses on three core objectives: (1) building new mathematical concepts and definitions that allow us to articulate, prioritize, and study personalization-based problems, (2) addressing the key algorithmic challenges of privacy-preserving auditing of personalization systems, and (3) integrating a deep understanding of the broader legal and ethical context into our approach. For each of these components, the proposal maps out a concrete research strategy, including preliminary steps that indicate the feasibility of this groundbreaking project.
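The description above stays at the level of goals and does not specify mechanisms. As a rough, hypothetical illustration of what a privacy-preserving collective audit could look like, the sketch below uses randomized response, a standard local differential privacy primitive, to estimate how often a given piece of personalized content (say, a job ad) is shown to participants in different groups, without the auditor ever learning any individual's true exposure. The group labels, epsilon value, exposure rates, and function names are assumptions for illustration, not part of the project.

```python
# Hypothetical sketch: a collective audit of content exposure using randomized
# response (local differential privacy). Not the project's actual method.
import math
import random
from collections import defaultdict

def randomized_response(saw_content: bool, epsilon: float) -> bool:
    """Each participant perturbs their true answer locally before sharing it,
    so the auditor never learns any individual's true exposure with certainty."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return saw_content if random.random() < p_truth else not saw_content

def estimate_rate(noisy_reports: list[bool], epsilon: float) -> float:
    """Debias the aggregate of noisy reports to estimate the true exposure rate."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_reports) / len(noisy_reports)
    # E[observed] = p_truth * rate + (1 - p_truth) * (1 - rate); solve for rate.
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)

def audit_disparity(reports_by_group: dict[str, list[bool]], epsilon: float) -> dict[str, float]:
    """Estimate per-group exposure rates; large gaps flag potentially
    problematic personalization (e.g., a job ad shown far less to one group)."""
    return {g: estimate_rate(r, epsilon) for g, r in reports_by_group.items()}

if __name__ == "__main__":
    # Simulate two groups of participants contributing noisy reports.
    random.seed(0)
    epsilon = 1.0
    true_rates = {"group_a": 0.60, "group_b": 0.25}  # hypothetical exposure rates
    reports = defaultdict(list)
    for group, rate in true_rates.items():
        for _ in range(5000):
            saw = random.random() < rate
            reports[group].append(randomized_response(saw, epsilon))
    print(audit_disparity(reports, epsilon))
```

In this toy setup the auditor only ever receives the locally noised reports, yet the debiased per-group estimates still reveal the disparity in exposure, which is the kind of collective signal such an audit would aim to surface.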