Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.
Participation deadline
No participation deadline.
Project description
This project will conduct a multi-sited ethnography of a currently evolving revolution in global health systems: big data/AI-informed national health governance. With health data considered countries’ ‘future oil’, public and scholarly concerns about ‘algorithmic ethics’ are rising. Research has long shown that datasets used in AI (re)produce social biases, discriminate and limit personal autonomy. This literature, however, has focused mainly on AI design and institutional frameworks, examining the subject through legal, technocratic and philosophical perspectives, while overlooking the socio-cultural context in which big data and AI systems are embedded, most particularly the organizations in which human agents collaborate with AI. This is problematic, as frameworks for ‘ethical AI’ currently consider human oversight crucial, assuming that humans will correct or resist AI when needed; yet empirical evidence for this assumption is extremely thin. Very little is known about when and why people intervene in or resist AI. Existing research consists of single, mostly Western studies, making it impossible to generalize findings.

The innovative force of our research is fourfold:
1) To empirically analyze decisive moments in which data analysts follow or deviate from AI: moments that deeply impact national health policies and individual human lives.
2) To conduct research in six national settings with varying governmental frameworks and in different organizational contexts, enabling us to contrast findings and eventually develop a theory of the contextual, organizational factors underlying ethical AI.
3) To use innovative anthropological methods of future-scenarioing, which will enrich the anthropological discipline by developing and fine-tuning future-focused research.
4) To connect anthropological insights with the expertise of AI developers, and to partner with relevant health decision-makers and policy institutions, allowing us to both analyze and contribute to fair AI.