Interesting projects
- PID2020-120611RB-I00: CONTROLANDO EL OLVIDO EN REDES NEURONALES: APRENDIZAJE CONTI... (122K€, Closed)
- TIN2016-74946-P: UN MARCO DE VISION POR COMPUTADOR BASADO EN UN CICLO DE ANAL... (100K€, Closed)
- IJCI-2017-33014: Redes neuronales profundas: arquitecturas y aplicaciones (64K€, Closed)
- IT-DNN: INFORMATION THEORETIC LIMITS FOR DEEP NEURAL NETWORKS (204K€, Closed)
- RED: Robust Explainable Deep Networks in Computer Vision (2M€, Closed)
- PID2021-125711OB-I00: COMBINANDO TECNICAS DE MODELIZACION Y APRENDIZAJE AUTOMATICO... (145K€, Closed)
Project information: HOLI
Project duration: 72 months
Start date: 2019-01-17
End date: 2025-01-31
Project leader: TEL AVIV UNIVERSITY (no description or corporate purpose has been provided for this organisation)
TRL: 4-5
Project budget: 2M€
Participation deadline: none specified
Project description
Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural networks. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between them. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem.
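As a rough illustration of the structured-prediction setting mentioned above, the sketch below scores every candidate interpretation of a short token stream and returns the highest-scoring one. It is a generic toy example, not the project's method; the candidate generator and the hand-written scoring rules are hypothetical stand-ins for a learned model.

```python
from itertools import product

# Toy input stream: a short token sequence to be segmented into entities.
tokens = ["Tel", "Aviv", "University", "leads", "HOLI"]

# Hypothetical candidate generator: every segmentation of the stream, encoded
# as a tuple where 1 marks the start of a new entity span (the first token
# always starts a span).
def candidate_interpretations(n_tokens):
    for rest in product([0, 1], repeat=n_tokens - 1):
        yield (1,) + rest

# Hypothetical scoring function standing in for a learned model: it rewards
# keeping capitalized tokens inside a span and penalizes starting a span on a
# lowercase token.
def score_interpretation(tokens, starts):
    score = 0.0
    for tok, start in zip(tokens, starts):
        is_cap = tok[0].isupper()
        if is_cap and start == 0:
            score += 0.5   # capitalized token continues an entity span
        if not is_cap and start == 1:
            score -= 1.0   # opening a span on a lowercase token looks wrong
    return score

# Structured prediction: pick the interpretation with the highest score.
best = max(candidate_interpretations(len(tokens)),
           key=lambda starts: score_interpretation(tokens, starts))
print("span starts:", best)
```

In realistic settings the candidate space is exponentially large, so exhaustive enumeration as above is replaced by some form of structured inference over the space of interpretations.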
First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
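The idea of a coherence score over a variable number of entities can be illustrated with a small permutation-invariant scorer. The sketch below is a generic Deep Sets-style construction with assumed, randomly initialized parameters; it is not the architecture proposed in the project, and every weight and function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" parameters: a per-entity encoder, a bilinear relation
# term, and a readout that pools the whole set into a scalar coherence score.
W_entity = rng.normal(size=(8, 4))     # maps 4-d entity features to 8-d codes
W_relation = rng.normal(size=(8, 8))   # bilinear form scoring entity pairs
w_readout = rng.normal(size=8)         # pools the encoded set into a scalar

def coherence_score(entities: np.ndarray) -> float:
    """Score a set of entities (n x 4); invariant to the entities' ordering."""
    h = np.tanh(entities @ W_entity.T)            # encode each entity (n x 8)
    pooled = h.sum(axis=0)                        # permutation-invariant pooling
    unary = float(w_readout @ np.tanh(pooled))    # set-level coherence term
    pairwise = float(np.sum((h @ W_relation) @ h.T)) / max(len(h), 1)
    return unary + 0.1 * pairwise                 # combine set and relation terms

# The same scorer handles entity sets of any size.
print(coherence_score(rng.normal(size=(3, 4))))
print(coherence_score(rng.normal(size=(7, 4))))
```

Because the entity encodings are pooled by summation, the score is unchanged under reordering of the entities and is defined for sets of any size, which hints at the kind of property a model producing such interpretations might need to satisfy.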