FLEX project information
Project duration: 65 months
Start date: 2024-05-07
End date: 2029-10-31
Participation deadline
No participation deadline.
Project description
Most statistical methods require that all aspects of data collection and inference are determined in advance, independently of the data. These include when to stop collecting data, what decisions can be made (e.g. accept/reject a hypothesis, classify a new point) and how to measure their quality (e.g. loss function, significance level). This is wildly at odds with the flexibility required in practice! It makes it impossible to achieve error control in meta-analyses, and contributes to the replication crisis in the applied sciences.
I will develop a novel statistical theory in which all data-collection and decision aspects may be unknown in advance, possibly imposed post hoc, depending on the data itself in unknowable ways. Yet this new theory will provide small-sample frequentist error control, risk bounds and confidence sets.
My approach builds on far-reaching extensions of e-values/e-processes. These generalize likelihood ratios and replace p-values, capturing 'evidence' in a much cleaner fashion. As lead author of the first paper (2019) that gave e-values a name and demonstrated their enormous potential, I kicked off, and then played an essential role in, the extremely rapid development of anytime-valid inference, the one aspect of flexibility that is by now well studied. Yet efficient e-value design principles for many standard problems (e.g. GLMs and other settings with covariates) are still lacking, and I will provide them. I will also develop theory for full decision-task flexibility, about which currently almost nothing is known. A major innovation is the e-posterior, which behaves differently from the Bayesian posterior: if priors are chosen badly, e-posterior-based confidence intervals get wide rather than wrong.
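The core idea behind e-processes and anytime-valid inference can be illustrated with a minimal sketch (not taken from the project itself): a running product of likelihood ratios for a coin-tossing null, which is a nonnegative martingale with expectation 1 under the null. The alternative probability 0.7 and the simulation sizes below are illustrative choices, not parameters from the source.

```python
import random

def e_process(observations, p_alt=0.7, p_null=0.5):
    # Running product of likelihood ratios for H0: P(heads) = p_null
    # against a fixed alternative p_alt. Under H0 this is a nonnegative
    # martingale with expectation 1 at every time, i.e. an e-process.
    e = 1.0
    path = []
    for heads in observations:
        e *= (p_alt if heads else 1 - p_alt) / (p_null if heads else 1 - p_null)
        path.append(e)
    return path

def anytime_valid_reject(path, alpha=0.05):
    # Ville's inequality: under H0, P(sup_t E_t >= 1/alpha) <= alpha.
    # So rejecting the first time the e-process crosses 1/alpha controls
    # the type-I error at level alpha under ANY data-dependent stopping rule.
    return any(e >= 1.0 / alpha for e in path)

if __name__ == "__main__":
    random.seed(0)
    alpha = 0.05
    # Simulate under the null (fair coin) with continuous monitoring:
    # the rejection rate should stay below alpha despite optional stopping.
    n_sims, n_tosses = 2000, 200
    rejections = sum(
        anytime_valid_reject(
            e_process([random.random() < 0.5 for _ in range(n_tosses)]), alpha
        )
        for _ in range(n_sims)
    )
    print("empirical type-I error:", rejections / n_sims)
```

The same guarantee fails for a classical fixed-sample p-value monitored continuously, which is the flexibility gap the abstract describes.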
Both the existing Wald-Neyman-Pearson and Bayesian statistical theories will arise as special, extreme cases of the new theory, based on perfect (hence unrealistic) knowledge of the data-collection/decision problem or the underlying distribution(s), respectively.