Theoretical Understanding of Classic Learning Algorithms
Related projects
- DYNASTY (Dynamics-Aware Theory of Deep Learning): 1M€, closed
- PRE2018-085659 (ALGORITMOS HIBRIDOS COMBINANDO APRENDIZAJE AUTOMATICO Y META...): 93K€, closed
- TIN2017-85887-C2-1-P (ALGORITMOS HIBRIDOS COMBINANDO APRENDIZAJE AUTOMATICO Y META...): 72K€, closed
- RYC-2013-14745 (Learning Algorithms for Latent Variable Structured Predictio...): 309K€, closed
- BeyondBlackbox (Data Driven Methods for Modelling and Optimizing the Empiric...): 1M€, closed
- PID2021-127311NB-I00 (BASES FISICAS DE LA INTELIGENCIA NATURAL: DE CELULAS A ORGAN...): 145K€, closed
TUCLA project information
Project duration: 67 months
Start date: 2023-12-15
End date: 2029-07-31
Project leader: AARHUS UNIVERSITET
TRL: 4-5
Project budget: 2M€
Project description
Machine learning has evolved from a relatively isolated discipline into one with a disruptive influence on all areas of science, industry, and society. Learning algorithms are typically classified as either deep learning or classic learning: deep learning excels when data and computing resources are abundant, whereas classic algorithms shine when data is scarce. In the TUCLA project, we will expand our theoretical understanding of classic machine learning, with a particular emphasis on two of the most important such algorithms, namely Bagging and Boosting. As a result of this study, we shall provide faster learning algorithms that require less training data to make accurate predictions. The project accomplishes this by pursuing several objectives:
1. We will establish a novel learning-theoretic framework for proving generalization bounds for learning algorithms (the sketch after this list recalls the classical form of such a bound). Using this framework, we will design new Boosting algorithms and prove that they make accurate predictions from less training data than was previously possible. Moreover, we will complement these algorithms with generalization lower bounds, proving that no other algorithm can make better use of data.
2. We will design parallel versions of Boosting algorithms, thereby allowing them to be used in combination with more computationally expensive base learning algorithms (the sequential weight updates in the Boosting sketch below illustrate what makes this challenging). We conjecture that success in this direction may lead to Boosting playing a more central role in deep learning as well.
3. We will explore applications of the classic Bagging heuristic (see the Bagging sketch below). Until recently, Bagging was not known to have significant theoretical benefits. However, recent pioneering work by the PI shows that Bagging is an optimal learning algorithm in an important learning setup. Building on these recent insights, we will explore theoretical applications of Bagging in other important settings.
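For orientation, the generalization bounds mentioned in objective 1 follow a standard template from learning theory. The classical uniform-convergence bound for a hypothesis class $\mathcal{H}$ of VC dimension $d$ states that, with probability at least $1-\delta$ over a training sample $S$ of $n$ i.i.d. examples,

\[
\mathrm{err}_{\mathcal{D}}(h) \;\le\; \widehat{\mathrm{err}}_S(h) + c\,\sqrt{\frac{d + \ln(1/\delta)}{n}} \qquad \text{for all } h \in \mathcal{H},
\]

where $\mathrm{err}_{\mathcal{D}}$ is the true error, $\widehat{\mathrm{err}}_S$ is the empirical error on $S$, and $c$ is a universal constant. This is the textbook bound, not the project's new framework; the objective is precisely to prove sharper, algorithm-specific upper bounds together with matching lower bounds.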
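To make the objects of study concrete, the following is a minimal sketch of Boosting in the classic AdaBoost style: weak "decision stump" learners are trained in sequence on reweighted data, and their weighted vote forms the final predictor. This is illustrative only; the function names and toy data are our own, not TUCLA code.

```python
import numpy as np

def train_stump(X, y, w):
    """Exhaustively pick the feature, threshold, and sign minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best

def adaboost(X, y, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform weights over examples
    ensemble = []
    for _ in range(rounds):
        err, j, t, sign = train_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)  # avoid division by zero / log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight of this stump
        pred = sign * np.where(X[:, j] <= t, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, t, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Toy usage: an interval rule that no single stump can express,
# but a weighted vote of stumps fits exactly.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(np.abs(X[:, 0]) < 0.5, 1, -1)
model = adaboost(X, y)
print("training accuracy:", (predict(model, X) == y).mean())
```

Note that each round's weights depend on all previous rounds; this inherently sequential structure is what objective 2 aims to relax.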
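Similarly, a minimal sketch of the Bagging heuristic from objective 3: each base learner is trained on an independent bootstrap resample of the data, and predictions are aggregated by majority vote. The `nearest_centroid_fit` base learner and the toy data are our own illustrative choices.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """A simple base learner: classify by the nearer class centroid."""
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    return lambda Z: np.where(((Z - mu_pos) ** 2).sum(axis=1)
                              < ((Z - mu_neg) ** 2).sum(axis=1), 1, -1)

def bagging_fit(X, y, base_fit, n_estimators=25, seed=0):
    """Train each base learner on an independent bootstrap resample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)       # sample n indices with replacement
        models.append(base_fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.stack([m(X) for m in models])   # one row of +/-1 votes per model
    return np.where(votes.sum(axis=0) >= 0, 1, -1)

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 1.0, size=(150, 2)),
               rng.normal(-1.0, 1.0, size=(150, 2))])
y = np.concatenate([np.ones(150, dtype=int), -np.ones(150, dtype=int)])
models = bagging_fit(X, y, nearest_centroid_fit)
print("training accuracy:", (bagging_predict(models, X) == y).mean())
```

The appeal of the heuristic is its simplicity: the base learner is treated as a black box, and the only randomness is in the resampling, which is what makes Bagging amenable to the kind of theoretical analysis the project pursues.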