Project description
Most statistical methods require that all aspects of data collection and inference be determined in advance, independently of the data: when to stop collecting data, which decisions can be made (e.g. accept/reject a hypothesis, classify a new point), and how to measure their quality (e.g. loss function, significance level). This is wildly at odds with the flexibility required in practice! It makes error control in meta-analyses impossible and contributes to the replication crisis in the applied sciences.
I will develop a novel statistical theory in which all aspects of data collection and decision-making may be unknown in advance, possibly imposed post hoc, depending on the data itself in unknowable ways. Yet this new theory will provide small-sample frequentist error control, risk bounds and confidence sets.
I build on far-reaching extensions of e-values/e-processes. These generalize likelihood ratios and replace p-values, capturing 'evidence' in a much cleaner fashion. As lead author of the first paper (2019) that gave e-values a name and demonstrated their enormous potential, I kicked off, and have since played an essential role in, the extremely rapid development of anytime-valid inference, the one aspect of flexibility that is by now well studied. Yet efficient e-value design principles for many standard problems (e.g. GLMs and other settings with covariates) are lacking, and I will provide them. I will also develop theory for full decision-task flexibility, about which almost nothing is currently known. A major innovation is the e-posterior, which behaves differently from the Bayesian posterior: if priors are chosen badly, e-posterior-based confidence intervals become wide rather than wrong.
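To make the central notion concrete, here is a minimal formal sketch using standard definitions from the e-value literature (the notation is illustrative, not part of the proposal itself). An e-variable for a null hypothesis $H_0$ is a nonnegative statistic $E$ satisfying

\[ \sup_{P \in H_0} \mathbb{E}_P[E] \le 1 . \]

Any likelihood ratio $E = q(X)/p_0(X)$, with $p_0$ the null density and $q$ any alternative density, is an e-variable. By Markov's inequality, $P(E \ge 1/\alpha) \le \alpha$ for every $P \in H_0$, so rejecting $H_0$ when $E \ge 1/\alpha$ gives a valid level-$\alpha$ test. One common construction of an e-process $(E_t)_{t \ge 1}$ is as a nonnegative supermartingale under every $P \in H_0$ with starting value at most $1$; Ville's inequality then strengthens the guarantee to

\[ P\bigl(\exists\, t \colon E_t \ge 1/\alpha\bigr) \le \alpha , \]

which is exactly what makes the resulting inference anytime-valid: evidence may be monitored continuously and data collection stopped at any data-dependent time. Dually, running an e-process $E_t(\theta)$ for every candidate parameter value $\theta$ and retaining the values not yet rejected, $\mathrm{CS}_t = \{\theta : E_t(\theta) < 1/\alpha\}$, yields a confidence sequence with coverage at least $1-\alpha$ uniformly over time. A badly chosen prior or alternative can only keep the $E_t(\theta)$ small, so the set merely stays wide; it never loses coverage.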
Both the existing Wald-Neyman-Pearson and Bayesian statistical theories will arise as special, extreme cases of the new theory: they presuppose perfect (hence unrealistic) knowledge of, respectively, the data-collection/decision problem or the underlying distribution(s).