Project description
As the scale and complexity of available data increase, developing a rigorous understanding of the computational properties of statistical procedures has become a key scientific priority of our century. In line with this priority, the project develops a mathematical theory of computational scalability for Bayesian learning methods, with a focus on widely used high-dimensional and hierarchical models.
Unlike most of the recent literature, we will integrate computational and statistical aspects in the analysis of Bayesian learning algorithms, providing novel insight into the interaction between commonly used model structures and fitting algorithms. Key methodological breakthroughs will include a novel connection between computational algorithms for hierarchical models and random walks on the associated graphical models; the use of statistical asymptotics to derive computational scalability statements; and new understanding of the computational implications of model misspecification and data heterogeneity.
We will derive a broad collection of results for popular Bayesian computation algorithms, especially Markov chain Monte Carlo ones, across a variety of modeling frameworks, including random-effect, shrinkage, hierarchical and nonparametric models. These are routinely used for statistical tasks such as multilevel regression, factor analysis and variable selection, in disciplines ranging from political science to genomics. Our theoretical results will have direct implications for the design of novel, more scalable computational schemes, as well as for the optimization of existing ones. Particular attention will be devoted to developing algorithms whose overall cost is provably linear in both the number of data points and the number of unknown parameters. These contributions will dramatically reduce the gap between theory and practice in Bayesian computation and make it possible to fully exploit the huge potential of the Bayesian paradigm.
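As a concrete illustration of the kind of algorithm in scope, the sketch below (not part of the proposal, and with purely illustrative model dimensions and variances) implements a vanilla Gibbs sampler for a one-way random-effects model with known variances. Each sweep updates all group effects and the global mean at a cost linear in the number of groups, the sort of per-iteration scaling that, combined with bounds on mixing times, underlies linear-overall-cost statements for richer hierarchical models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a one-way random-effects model (illustrative values):
#   y_ij ~ N(theta_i, sigma2),  theta_i ~ N(mu, tau2)
I, n = 50, 20                      # number of groups, observations per group
sigma2, tau2 = 1.0, 0.5            # variances assumed known for simplicity
true_theta = rng.normal(2.0, np.sqrt(tau2), size=I)
y = rng.normal(true_theta[:, None], np.sqrt(sigma2), size=(I, n))
ybar = y.mean(axis=1)              # sufficient statistics per group

def gibbs(n_iter=2000):
    """Gibbs sampler alternating the group effects and the global mean."""
    mu = 0.0
    mus, thetas = np.empty(n_iter), np.empty((n_iter, I))
    for t in range(n_iter):
        # theta_i | mu, y : conjugate normal update, all groups at once (O(I) work)
        prec = n / sigma2 + 1.0 / tau2
        mean = (n * ybar / sigma2 + mu / tau2) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # mu | theta : normal update under a flat prior on mu
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / I))
        mus[t], thetas[t] = mu, theta
    return mus, thetas

mus, thetas = gibbs()
print("posterior mean of mu:", mus[500:].mean())   # discard burn-in draws
```

The per-iteration cost here is trivially linear; the theoretical challenge addressed by the project is controlling how many such iterations are needed as the numbers of data points and parameters grow.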