Project description
Recent years have seen a rapid increase in the amount of available information. This has created an urgent need for fast statistical and machine learning methods that can scale up to big data sets. Standard approaches, including the now routinely used Bayesian methods, are becoming computationally infeasible, especially in complex models with many parameters and large amounts of data. A variety of algorithms have been proposed to speed up these procedures, but they are typically black-box methods with very limited theoretical support. In fact, empirical evidence shows that such methods can perform poorly. This is especially concerning in real-world applications, e.g. in medicine.

In this project I shall open up the black box and provide a theory for scalable Bayesian methods, combining recent, state-of-the-art techniques from Bayesian nonparametrics, empirical process theory, and machine learning. I focus on two very important classes of scalable techniques: variational and distributed Bayes. I shall establish guarantees, but also limitations, of these procedures for estimating the parameter of interest and for quantifying the corresponding uncertainty, within a framework that is also convincing outside of the Bayesian paradigm. As a result, scalable Bayesian techniques will perform more accurately and gain better acceptance by a wider community of scientists and practitioners.

The proposed research, although motivated by real-world problems, is of a mathematical nature. In the analysis I consider mathematical models that are routinely used in various fields (e.g. high-dimensional linear and logistic regression are the workhorses of econometrics and genetics). My theoretical results will provide principled new insights that can be used, for instance, in several specific applications I am involved in, including the development of novel statistical methods for understanding fundamental questions in cosmology and for the early detection of dementia using multiple data sources.
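For concreteness, in its standard form variational Bayes replaces the (typically intractable) posterior by its closest approximation, in Kullback–Leibler divergence, within a tractable family of distributions. Writing Π(· | X) for the posterior given data X and Q for the approximating family (notation used here only for illustration), a minimal sketch of the objective is

  \hat{q} \;=\; \arg\min_{q \in \mathcal{Q}} \, \mathrm{KL}\big( q \,\big\|\, \Pi(\cdot \mid X) \big),

while distributed Bayes splits the data over several machines, computes a local (approximate) posterior on each subset, and aggregates these local posteriors into a single approximation of the full posterior. The guarantees and limitations described above concern how accurately such approximations estimate the parameter of interest and quantify the corresponding uncertainty.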