Project information: Understanding DL
Project duration: 64 months
Start date: 2022-04-04
End date: 2027-08-31
Project description
While extremely successful, deep learning (DL) still lacks a solid theoretical foundation.
In the last 5 years the PI has focused almost entirely on DL theory, yielding a strong publication record: 7 papers at NeurIPS (the leading ML conference), including 2 spotlights (top 3% of submitted papers) and 1 oral (top 1%), 2 papers at ICLR (the leading DL conference), and 1 paper at COLT (the leading ML theory conference). These results are amongst the first to break a 20-year hiatus in NN theory, thereby giving hope for a solid theory of deep learning. They include 1) the first polynomial-time learnability result for a non-trivial function class by SGD on NNs, 2) the first such result with a near-optimal rate, 3) a new methodology for bounding the sample complexity of NNs, which established the first sample complexity bound that is sublinear in the number of parameters under norm constraints that are valid in practice, and 4) an explanation of the phenomenon of adversarial examples.
We plan to go far beyond these and other results and build a coherent theory for DL that addresses all three pillars of learning theory:
Optimization: We plan to investigate the success of SGD in finding a good model, arguably the greatest mystery of modern deep learning. Specifically, our goal is to understand which models are learnable by SGD on neural networks. To this end, we plan to develop a new class of models that can potentially lead to new deep learning algorithms with a solid theory behind them.
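For concreteness, the object of study in this pillar is plain SGD applied to the weights of a network. The following minimal sketch (NumPy; the teacher setup, architecture, and hyperparameters are illustrative assumptions, not taken from the proposal) trains a one-hidden-layer ReLU network with mini-batch SGD:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression data labeled by a fixed ReLU "teacher" neuron.
    n, d, width = 512, 20, 64
    X = rng.normal(size=(n, d))
    w_star = rng.normal(size=d) / np.sqrt(d)
    y = np.maximum(X @ w_star, 0.0)

    # One-hidden-layer ReLU network with standard 1/sqrt(fan-in) initialization.
    W = rng.normal(size=(width, d)) / np.sqrt(d)
    a = rng.normal(size=width) / np.sqrt(width)

    def forward(x):
        h = np.maximum(x @ W.T, 0.0)      # hidden activations, shape (batch, width)
        return h @ a, h

    lr, batch, steps = 0.1, 32, 2000
    for t in range(steps):
        idx = rng.integers(0, n, size=batch)
        xb, yb = X[idx], y[idx]
        pred, h = forward(xb)
        err = pred - yb                   # gradient of 0.5 * squared loss w.r.t. pred
        grad_a = h.T @ err / batch        # backprop through the output layer
        grad_W = ((err[:, None] * a) * (h > 0)).T @ xb / batch  # and the ReLU layer
        a -= lr * grad_a
        W -= lr * grad_W

    print("final train MSE:", np.mean((forward(X)[0] - y) ** 2))

The theoretical question the pillar asks is exactly which target functions (here, the teacher) such a procedure can provably fit in polynomial time.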
Statistical Complexity: We plan to crack the second great mystery of modern deep learning: the ability of neural networks to generalize from fewer examples than they have parameters. Our plan is to investigate the sample complexity of classes of neural networks that are defined by bounds on the magnitudes of the weights.
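To make the object of study concrete, a known bound of this norm-based type (due to Golowich, Rakhlin and Shamir, COLT 2018; stated informally here, and not a result of this proposal) controls the Rademacher complexity of the class F of depth-L ReLU networks whose layer matrices satisfy Frobenius-norm bounds ||W_i||_F <= M_i, on inputs of Euclidean norm at most B, given n samples:

    \mathcal{R}_n(\mathcal{F}) \;=\; O\!\left( \frac{B \sqrt{L} \, \prod_{i=1}^{L} M_i}{\sqrt{n}} \right).

The bound depends on the weight norms and the depth but not on the parameter count, the kind of parameter-independent behavior this pillar targets; the open question is whether bounds of this form hold for norm constraints that actually match trained networks.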
Representation: We plan to investigate which functions can be realized by NNs. This includes classical questions, such as the benefits of depth, as well as more modern aspects, such as adversarial examples.
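On the adversarial-examples side, the phenomenon is easy to reproduce even for a linear classifier. Below is a self-contained sketch of the standard fast gradient sign method (FGSM; a standard attack, not a method of this proposal): in high dimension, a per-coordinate perturbation of size eps = 3/sqrt(d), here about 0.095, flips a confident prediction.

    import numpy as np

    rng = np.random.default_rng(1)

    d = 1000                              # high dimension is what makes the attack cheap
    w = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)  # classifier f(x) = sign(w . x)
    x = rng.normal(size=d)
    x += w * (1.0 - w @ x)                # shift x so that w . x = 1 (confidently positive)

    eps = 3.0 / np.sqrt(d)                # tiny per-coordinate budget
    # FGSM: move every coordinate by eps against the sign of the score's gradient.
    x_adv = x - eps * np.sign(w)

    print("clean score :", w @ x)         # 1.0, label +1
    print("adv score   :", w @ x_adv)     # 1 - eps * ||w||_1 = -2.0, label flips to -1
    print("perturbation:", np.max(np.abs(x_adv - x)))  # = eps, small per coordinate

Because ||w||_1 = sqrt(d) here, the tiny per-coordinate changes add up to a large change in the score, which is one standard intuition for why realizable high-dimensional classifiers admit adversarial examples.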