Related projects

DIN2019-010876 | Modelos de recomendación conscientes de privacidad para inte... | 70K€ | Closed
PID2020-116118GA-I00 | APRENDIZAJE FEDERADO PARA PRESERVAR LA PRIVACIDAD DE LOS DAT... | 27K€ | Closed
PID2020-114596RB-C21 | PERXAI: PERSONALIZACION DE INTELIGENCIA ARTIFICIAL EXPLICABL... | 54K€ | Closed
PID2020-112754GB-I00 | MODELIZACION Y ANALISIS DE CONTEXTOS PARA EL DISEÑO DE SISTE... | 58K€ | Closed
TED2021-131291B-I00 | APRENDIZAJE FEDERADO EXPLICABLE CON REDES BAYESIANAS | 111K€ | Closed
PID2020-114911GB-I00 | INTELIGENCIA ARTIFICIAL PARA ESTIMAR AUTOMATICAMENTE LA PERS... | 81K€ | Closed
GENERALIZATION project information
Project duration: 65 months
Start date: 2022-03-04
End date: 2027-08-31
Participation deadline: none.
Project description
Recent years have witnessed tremendous progress in the field of Machine Learning (ML). Learning algorithms are applied in an ever-increasing variety of contexts, ranging from engineering challenges such as self-driving cars all the way to societal contexts involving private data. These developments pose important challenges:

(i) Many of the recent breakthroughs demonstrate phenomena that lack explanations, and sometimes even contradict conventional wisdom. One main reason is that classical ML theory adopts a worst-case perspective, which is too pessimistic to explain practical ML: in reality, data is rarely worst-case, and experiments indicate that often much less data is needed than traditional theory predicts.

(ii) The increase in ML applications that involve private and sensitive data highlights the need for algorithms that handle the data responsibly. While this need has been addressed by the field of Differential Privacy (DP), the cost of privacy remains poorly understood: how much more data does private learning require, compared to learning without privacy constraints?

Inspired by these challenges, our guiding question is: how much data is needed for learning? Towards answering this question, we aim to develop a theory of generalization which complements the traditional theory and is a better fit for modelling real-world learning tasks. We will base it on distribution-, data-, and algorithm-dependent perspectives. These complement the distribution-free, worst-case perspective of the classical theory and are suitable for exploiting specific properties of a given learning task. We will use this theory to study various settings, including supervised, semi-supervised, interactive, and private learning. We believe that this research will advance the field in terms of efficiency, reliability, and applicability. Furthermore, our work combines ideas from various areas in computer science and mathematics; we thus expect further impact outside our field.
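To make the "cost of privacy" question concrete, the following sketch (not part of the project; the function `private_mean` and all parameters are our own illustrative choices) estimates the mean of bounded data under epsilon-differential privacy via the standard Laplace mechanism. The added noise has scale (hi − lo)/(n·epsilon), so the privacy overhead shrinks as the sample size n grows — a simple instance of the sample-complexity trade-off the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_mean(x, epsilon, lo=0.0, hi=1.0):
    """Epsilon-DP estimate of the mean of values clipped to [lo, hi],
    using the Laplace mechanism."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    n = len(x)
    # Changing one record moves the clipped mean by at most this amount,
    # so Laplace noise with scale sensitivity/epsilon gives epsilon-DP.
    sensitivity = (hi - lo) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x.mean() + noise

# The noise scale is (hi - lo) / (n * epsilon): an O(1/(n * epsilon))
# privacy overhead on top of the usual O(1/sqrt(n)) sampling error.
for n in (100, 10_000):
    data = rng.uniform(0.0, 1.0, size=n)
    est = private_mean(data, epsilon=0.5)
    print(n, abs(est - data.mean()))
```

With epsilon fixed, the gap between the private and non-private estimates vanishes at rate 1/n, so here privacy is asymptotically "free" relative to sampling error; quantifying when such behaviour holds for general learning tasks is exactly the kind of question the project targets.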