INFORMATION THEORETIC LIMITS FOR DEEP NEURAL NETWORKS
Participation deadline
No participation deadline.
Project description
Over the last decade, deep-learning algorithms have dramatically improved the state of the art in many machine-learning problems, including computer vision, speech recognition, natural language processing, and audio recognition. Despite their success, however, there is no satisfactory mathematical theory that explains how such algorithms work. Indeed, a common critique is that deep-learning algorithms are often used as black boxes, which is unsatisfactory in all applications for which performance guarantees are critical (e.g., traffic-safety applications).
The purpose of this project is to increase our theoretical understanding of deep neural networks (DNNs). This will be done by developing novel information-theoretic bounds on the generalization error attainable using DNNs and by demonstrating how such bounds can guide the design of such networks.
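The project text does not state which bounds will be developed. As one well-known illustration of the kind of information-theoretic generalization bound involved, the mutual-information bound of Xu and Raginsky (2017) applies when the loss \(\ell(w, Z)\) is \(\sigma\)-subgaussian under the data distribution for every hypothesis \(w\); for a learning algorithm that outputs weights \(W\) from an i.i.d. training set \(S = (Z_1, \dots, Z_n)\),

```latex
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
```

where \(L_\mu\) is the population risk, \(L_S\) the empirical risk, and \(I(S;W)\) the mutual information between the training data and the learned weights. Bounds of this type formalize the intuition that an algorithm which extracts few bits of information about its training set cannot overfit badly.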