Machine learning in science and society: A dangerous toy?
"Deep learning (DL) models are encroaching on nearly all our knowledge institutions. Ever more scientific fields—from medical science to fundamental physics—are turning to DL to solve long-standing problems or make new discoveries...
ver más
Funding: granted
HORIZON EUROPE notified the award of the project on 2024-10-17.
TOY project information
Project duration: 62 months
Start date: 2024-10-17
End date: 2029-12-31
Project leader: unknown
Project budget: €2M
Participation deadline: none.
Project description
"Deep learning (DL) models are encroaching on nearly all our knowledge institutions. Ever more scientific fields—from medical science to fundamental physics—are turning to DL to solve long-standing problems or make new discoveries. At the same time, DL is used across society to inform and provide knowledge. We urgently need to evaluate the potentials and dangers of adopting DL for epistemic purposes, across science and society. This project uncovers the epistemic strengths and limits of DL models that are becoming the single most way we are structuring all our knowledge, and it does so by starting with an innovative hypothesis: that DL models are toy models.
A toy model is a type of highly idealized model that greatly distorts the gritty details of the real world. Every scientific domain has their own toy models that are used to ""play around"" with different features, gaining insight into complex phenomena. Conceptualizing DL models as toy models exposes the epistemic benefits of DL, but also the enormous risk of overreliance. Since toy models are so divorced from the real world, how do we know they are not leading us astray? TOY addresses this fundamental issue. TOY 1) identifies interlocking model puzzles that face DL models and toy models alike, 2) develops a theory of DL (toy) models in science and society based on the function of their idealizations and 3) develops a philosophical theory for evaluating the epistemic value of DL (toy) models across science and society. In so doing, TOY solves existing problems, answers open questions, and identifies new challenges in philosophy of science, on the nature and epistemic value of idealization and toy models; in philosophy of ML, by looking beyond DL opacity and developing a philosophical method for evaluating the epistemic value of DL models; and by bringing siloed debates in ethics of AI together with philosophy of science, providing necessary guidance on the appropriate use and trustworthiness of DL in society."
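To make the idea of a toy model concrete, the minimal Python sketch below uses logistic population growth, a choice made for this illustration rather than an example named in the project description. The model idealizes away nearly every detail of real populations, yet "playing around" with its single growth-rate parameter already yields insight into qualitatively different regimes.

# Minimal sketch of a classic toy model: the logistic map for population growth.
# Almost every real-world detail (age structure, environment, migration) is
# idealized away so that one feature, the growth rate r, can be explored.

def logistic_step(x, r):
    # One update of the logistic map: x -> r * x * (1 - x).
    return r * x * (1.0 - x)

def simulate(r, x0=0.2, steps=200):
    # Iterate the toy model from x0 and return the full trajectory.
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

# "Playing around" with r reveals qualitatively different behaviour
# of the same drastically simplified system.
for r in (2.5, 3.3, 3.9):
    tail = [round(x, 3) for x in simulate(r)[-4:]]
    print(f"r = {r}: last values {tail}")

For r = 2.5 the trajectory settles to a fixed point, for r = 3.3 it oscillates between two values, and near r = 3.9 it behaves chaotically, despite the model's extreme idealization.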