Project description
The project proposes an alternative epistemology of artificial intelligence (AI). It argues that what is at stake in AI is not its similarity to human rationality (anthropomorphism) but its epistemic difference. Rather than speculating in the abstract on whether a machine can think, the project addresses a historical question: what is the logical and technical form of the current paradigm of AI, machine learning, and what is its origin? The project traces the origins of machine learning back to the invention of algorithmic modelling (more precisely, algorithmic statistical modelling), which took shape in the artificial neural network research of the mid-1950s, and observes that a coherent history and epistemology of this groundbreaking artefact are still missing. To turn its findings into a constructive paradigm, the project pursues three objectives: 1) a new history of AI that stresses the key role of algorithmic models in the evolution of statistics, computer science, artificial neural networks, and machine learning; 2) a new epistemology of AI that engages with the psychology of learning and the historical epistemology of science and technology; 3) a study of the impact of large multi-purpose models (e.g. BERT, GPT-3, Codex, and other recent foundation models) on work automation, data governance, and digital culture. By consolidating a model theory of AI, the research will benefit the reception of AI in general as well as fields such as digital humanities, scientific computing, robotics, and AI ethics. Ultimately, it will help situate AI within the global horizon of the current technosphere and the long history of knowledge systems.