Robust statistical methodology and theory for large scale data
RobustStats project information
Project duration: 64 months
Start date: 2021-05-27
End date: 2026-09-30
Participation deadline
No participation deadline.
Project description
Modern technology allows large-scale data to be collected in many new forms, and their underlying generating mechanisms can be extremely complex. In fact, an interesting (and perhaps initially surprising) feature of large-scale data is that it is often much harder to feel confident that one has identified a plausible statistical model. This is largely because there are so many forms of model violation, and both visual and more formal statistical checks can become infeasible. It is therefore vital for trust in conclusions drawn from large studies that statisticians ensure that their methods are robust. The RobustStats proposal will introduce new statistical methodology and theory for a range of important contemporary Big Data challenges.

In transfer learning, we wish to make inference about a target data population, but some (typically, most) of our training data come from a related but distinct source distribution. The central goal is to find appropriate ways to exploit the relationship between the source and target distributions.

Missing and corrupted data play an ever more prominent role in large-scale data sets because the proportion of cases with no missing attributes is typically small. We will address key challenges of testing the form of the missingness mechanism, and handling heterogeneous missingness and corruptions in classification labels.

The robustness of a statistical procedure is intimately linked to model misspecification. We will advocate for two approaches to studying model misspecification, one via the idea of regarding an estimator as a projection onto a model, and the other via oracle inequalities.

Finally, we will introduce new methods for robust inference with large-scale data based on the idea of data perturbation. Such approaches are attractive ways of exploring a space of distributions in a model-free way, and we will show that aggregation of the results of carefully selected perturbations can be highly effective.
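To make the transfer-learning setting concrete: when only the input distribution differs between source and target (covariate shift) and the density ratio is known, source samples can be reweighted to estimate target quantities. The sketch below is purely illustrative and is not a method from the proposal; the distributions, sample size, and function names are all assumptions chosen for the example.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

rng = random.Random(1)
# Source draws from N(0, 1); the target population is N(1, 1).
source = [rng.gauss(0.0, 1.0) for _ in range(20000)]

def target_mean_via_source(f):
    """Estimate E_target[f(X)] from source draws by self-normalised
    importance weighting with the density ratio q(x)/p(x)
    (known exactly here, only for illustration)."""
    weights = [gaussian_pdf(x, 1.0, 1.0) / gaussian_pdf(x, 0.0, 1.0) for x in source]
    total = sum(weights)
    return sum(w * f(x) for w, x in zip(weights, source)) / total

# Target mean of X is 1.0; the reweighted source estimate should be close.
est = target_mean_via_source(lambda x: x)
```

In practice the density ratio is unknown and must itself be estimated, which is where the statistical difficulty (and the need for robustness to its misestimation) enters.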
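One simple procedure in the spirit of the data-perturbation idea, though not the proposal's own method, is to perturb the data by a random partition into blocks, apply the estimator to each block, and aggregate with a median (a median-of-means construction). A minimal sketch, with the data and block count invented for illustration:

```python
import random
import statistics

def median_of_means(data, n_blocks=5, seed=0):
    """Randomly partition the data into blocks (one simple, model-free
    perturbation), compute each block's mean, and aggregate the block
    estimates with the median."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    blocks = [shuffled[i::n_blocks] for i in range(n_blocks)]
    return statistics.median(statistics.mean(b) for b in blocks)

# 28 clean observations near 1, plus two gross corruptions.
data = [1.0 + 0.01 * i for i in range(28)] + [500.0, -300.0]
print(statistics.mean(data))    # dragged far from 1 by the corruptions
print(median_of_means(data))    # stays close to 1
```

With at most two contaminated blocks out of five, the median of the block means is always a clean block's mean, which is why the aggregate resists the corruptions that wreck the plain mean.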