Project description
Machine learning and artificial intelligence techniques are progressing at a tremendous pace, and impressive results are appearing across scientific fields. However, as machine learning models grow in capacity, they become increasingly ‘black box’, and it becomes harder for humans to reason about the patterns discovered by the machine. A root cause of this difficulty is that most machine learning models can express the same pattern in infinitely many, equally good ways within their internal representations of the world. This is known as an identifiability problem.
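To illustrate the identifiability problem, here is a minimal sketch of our own (the construction is not taken from the project description): with a linear decoder, rotating the latent codes while counter-rotating the decoder yields a different internal representation that explains the data exactly as well.

```python
# Minimal illustration of non-identifiability (illustrative sketch).
# Two different representations of the same data are equally good:
# the data alone cannot tell them apart.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))        # latent codes (representation A)
W = rng.normal(size=(2, 5))          # linear decoder weights

# An arbitrary rotation of the latent space.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

Z_rot = Z @ R.T                      # representation B: rotated codes
W_rot = R @ W                        # counter-rotated decoder

# Both (Z, W) and (Z_rot, W_rot) reconstruct the data identically,
# so any conclusion drawn from the latent coordinates is arbitrary.
assert np.allclose(Z @ W, Z_rot @ W_rot)
```

Any pattern read off the latent coordinates themselves (axes, distances between codes, cluster positions) therefore depends on which of these equally good representations the training run happened to find.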
Today we lack a general solution to identifiability problems, and either give up on understanding the patterns discovered by the machine or reduce model complexity to lessen the problem; the latter also reduces the fidelity and applicability of the model. NoKnow rephrases the question of identifiability to concern the tasks solved with the representation rather than the representation itself. Using tools from differential geometry and Bayesian inference, we develop the theoretical foundations to ensure that tasks solved in the representation have an identifiable outcome even when the representations themselves are not identifiable.
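As a hedged sketch of this task-level view (again our own construction, which the abstract does not prescribe): a quantity such as the length of a latent curve measured through the decoder, i.e. under the pullback metric from differential geometry, is unchanged when the latent space is reparameterized and the change is absorbed into the decoder. The task outcome is identifiable even though the coordinates are not.

```python
# Sketch: a "task" (curve length under the pullback metric) whose
# outcome is invariant to reparameterizing the representation.
import numpy as np

def decoder(z):
    # Toy nonlinear decoder from 2-d latents to 3-d observations.
    return np.stack([z[:, 0], z[:, 1], np.sin(z[:, 0]) * z[:, 1]], axis=1)

def curve_length(f, c):
    # Length of the decoded curve, i.e. the latent curve measured
    # under the pullback metric induced by the decoder f.
    x = f(c)
    return np.linalg.norm(np.diff(x, axis=0), axis=1).sum()

# A discretized curve in latent space.
t = np.linspace(0.0, 1.0, 200)[:, None]
curve = np.hstack([t, t ** 2])

# Reparameterize: rotate the latents, absorb the rotation into the decoder.
theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
curve_rot = curve @ R.T
decoder_rot = lambda z: decoder(z @ R)

# The task outcome is identical under both parameterizations.
assert np.isclose(curve_length(decoder, curve),
                  curve_length(decoder_rot, curve_rot))
```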
To turn theory into practice, we develop state-of-the-art algorithms for assessing the uncertainty of learned representations in order to indirectly estimate the topology of the representation manifold. We further develop novel, high-fidelity predictive models that have identifiable outcomes when trained on learned representations.
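A hypothetical, numpy-only sketch of what representation uncertainty can look like (the bootstrap ensemble and the polynomial stand-in model are our illustrative choices, not the project's algorithms): the predictive spread of an ensemble of toy ‘decoders’ flags latent regions with little data support, the kind of signal one could use to reason about the shape of the representation manifold.

```python
# Sketch: ensemble-based uncertainty over a toy decoder.
import numpy as np

rng = np.random.default_rng(0)

# Training latents cluster in two blobs, leaving a gap in between.
z_train = np.concatenate([rng.normal(-2, 0.3, 40), rng.normal(2, 0.3, 40)])
x_train = np.sin(z_train) + 0.05 * rng.normal(size=z_train.shape)

def features(z):
    # Cubic polynomial features as a stand-in for a decoder network.
    return np.stack([np.ones_like(z), z, z ** 2, z ** 3], axis=1)

# Bootstrap ensemble of least-squares "decoders".
weights = []
for _ in range(20):
    idx = rng.integers(0, len(z_train), len(z_train))
    w, *_ = np.linalg.lstsq(features(z_train[idx]), x_train[idx], rcond=None)
    weights.append(w)

z_grid = np.linspace(-3, 3, 7)
preds = np.stack([features(z_grid) @ w for w in weights])
# Predictive spread grows away from the two training blobs,
# signalling latent regions the data does not support.
print(np.round(preds.std(axis=0), 3))
```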
NoKnow provides the fundamental tools needed to engage with learned representations while guaranteeing identifiable outcomes. This, in turn, increases trust in findings, as they no longer depend on the arbitrariness of learned representations. As society increasingly automates decisions, this trust in machine learning becomes ever more important.